Powerful New Supercomputer Analyzes Earthquakes

PASADENA, Calif.- One of the most powerful computer clusters in the academic world has been created at the California Institute of Technology in order to unlock the mysteries of earthquakes.

The Division of Geological and Planetary Sciences' new Geosciences Computational Facility will feature a 2,048-processor supercomputer, housed in the basement of the Seeley G. Mudd Building of Geophysics and Planetary Science on campus.

Computer hardware fills long rows of black racks in the facility, each containing about 35 compute nodes. Massive air-conditioning units line an entire wall of the 20-by-80-foot room, recirculating and chilling the air. Miles of optical-fiber cable tie the processors together into a working cluster that went online in September.

The $5.8 million parallel computing project was made possible by gifts from Dell, Myricom, Intel, and the National Science Foundation.

Jeroen Tromp, McMillan Professor of Geophysics and director of the Institute's Seismology Lab, spearheaded the project. "The other crucial ingredient was Caltech's investment in the infrastructure necessary to house the new machine," he says. Some 500 kilowatts of power and 90 tons of air conditioning are needed to operate and cool the hardware.

David Kewley, the project's systems administrator, explained that this is enough electricity to power 350 average households.
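As a rough sanity check, the household comparison can be worked out from the numbers in the release; the per-household figure below is simply what the 500 kW / 350 households claim implies, not an official utility statistic.

```python
# Back-of-the-envelope check of the power comparison (assumed figures:
# the cluster's stated 500 kW draw and the 350-household claim).
cluster_kw = 500
households = 350
per_household_kw = cluster_kw / households      # implied average draw
annual_gwh = cluster_kw * 24 * 365 / 1e6        # energy used over a year
print(round(per_household_kw, 2), "kW per household;", annual_gwh, "GWh/yr")
```

The implied average draw of about 1.4 kW per household is in line with typical U.S. residential consumption, so the comparison holds up.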

Tromp's research group will share use of the cluster with other division professors and their research groups, while a job-scheduling system will make sure the facility runs at maximum possible capacity. Tromp, who came to Caltech in 2000 from Harvard, is known as one of the world's leading theoretical seismologists. Until now, he and his Institute colleagues have used a smaller version of the machine, popularly known as a Beowulf cluster. Helping revolutionize the field of earthquake study, Tromp has created 3-D simulations of seismic events. He and former Caltech postdoctoral scholar Dimitri Komatitsch designed a computer model that divides the earth into millions of elements. Each element can be divided into slices that represent the earth's geological features.

In simulations involving tens of millions of operations per second, the seismic waves are propagated from one slice to the next, as they speed up, slow down, and change direction according to the earth's characteristics. The model is analogous to a CAT scan of the earth, allowing scientists to track seismic wave paths. "Much like a medical doctor uses a CAT scan to make an image of the brain, seismologists use earthquake-generated waves to image the earth's interior," Tromp says, adding that the earthquake's location, origin time, and characteristics must also be determined.
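The slice-to-slice propagation Tromp describes can be illustrated, very loosely, with a toy one-dimensional wave solver in which the wave speed changes between "geological" layers. The real Caltech code is a 3-D spectral-element method; everything below (grid size, time step, layer speeds) is invented purely for illustration.

```python
import numpy as np

# Toy 1-D finite-difference sketch: a pulse crossing two layers with
# different wave speeds, speeding up and slowing down as it goes.
nx, nt, dx, dt = 400, 800, 1.0, 0.004
c = np.where(np.arange(nx) < nx // 2, 100.0, 50.0)  # fast and slow layers
u = np.zeros(nx)        # displacement at the current time step
u_prev = np.zeros(nx)   # displacement at the previous time step
u[nx // 4] = 1.0        # initial pulse in the fast layer

for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]        # discrete Laplacian
    u_next = 2 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_prev, u = u, u_next
```

The Courant number (c*dt/dx = 0.4 in the fast layer) is kept below 1 so the scheme stays stable; in the production code, analogous stability constraints govern the time step across millions of elements.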

Tromp will now be able to deliver better, more accurate models in less time. "We hope to use the new machine to do much more detailed mapping. In addition to improving the resolution of our images of the earth's interior, we will also quantitatively assess the devastating effects associated with earthquakes based upon numerical simulations of strong ground motion generated by hypothetical earthquakes."

"One novel way in which we are planning to use the new machine is for near real-time seismology," Tromp adds. "Every time an earthquake over magnitude 3.5 occurs anywhere in California we will routinely simulate the motions associated with the event. Scientific products that result from these simulations are 'synthetic' seismograms that can be compared to actual seismograms."

The "real" seismograms are recorded by the Southern California Seismic Network (SCSN), operated by the Seismo Lab in conjunction with the U.S. Geological Survey. Of interest to the general public, Tromp expects that the collaboration will produce synthetic ShakeMovies of recent quakes, and synthetic ShakeMaps which can be compared to real ShakeMaps derived from the data. "These products should be available within an hour after the earthquake," he says. The Seismology Lab Media Center will be renovated with a large video wall on which scientists can show the results of simulations and analysis.

The new generation of seismic knowledge may also help scientists, engineers, and others lessen the potentially catastrophic effects of earthquakes.

"Intel is proud to be a sponsor of this premier system for seismic research which will be used by researchers and scientists," said Les Karr, Intel Corporate Business Development Manager. "The project reflects Caltech's growing commitment, in both research and teaching, to a broadening range of problems in computational geoscience. It is also a reflection of the growing use of commercial, commodity computing systems to solve some of the world's toughest problems."

The Dell equipment consists of 1,024 dual-processor Dell PowerEdge 1850 servers that were pre-assembled for easy installation. Dell Services representatives came to campus to complete the setup.

"CITerra, as this new research tool is known on the TOP500 Supercomputer list, is a proud accomplishment both for Caltech and for Myricom," said Charles Seitz, founder and CEO of Myricom, and a former professor of computer science at Caltech. "The talented technical team at Myricom, about half of whom are Caltech alumni/ae, is eager for people to know that the architecture, programming methods, and technology of cluster computing were pioneered at Caltech 20 years ago. Those of us at Myricom who have drawn so much inspiration from our Caltech years are delighted to give some of the results of our efforts back to Caltech."

About Myricom: Founded in 1994, Myricom, Inc. created Myrinet, the high-performance computing (HPC) interconnect technology used in thousands of computing clusters in more than 50 countries worldwide. With its next-generation Myri-10G solutions, Myricom is bridging the gap between the rigorous demands of traditional HPC applications and the growing need for affordable computing speed in mainstream enterprises. Privately held, Myricom achieved and has sustained profitability since 1995 with 42 consecutive profitable quarters through September 2005. Based in Arcadia, California, Myricom solutions are sold direct and through channels. Myrinet clusters are supplied by OEM computer companies including IBM, HP, Dell, and Sun, and by other leading cluster integrators worldwide.

About Intel: Intel, the world's largest chipmaker, is also a leading manufacturer of computer, networking, and communications products. Intel processors, platform architectures, interconnects, networking technology, software tools, and services power some of the fastest computers in the world at price points that have expanded high performance computing beyond the confines of elite supercomputer centers and into the broad community of customers in mainstream industries. Those industries span automotive, aerospace, electronics manufacturing, energy and oil and gas in addition to scientific, research and academic organizations.

About the National Science Foundation: The NSF is an independent federal agency created by Congress in 1950 "to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense..." With an annual budget of about $5.5 billion, it is the funding source for approximately 20 percent of all federally supported basic research conducted by America's colleges and universities. In many fields such as mathematics, computer science, and the social sciences, NSF is the major source of federal backing.


Contact: Jill Perry (626) 395-3226 jperry@caltech.edu Visit the Caltech Media Relations Web site at: http://pr.caltech.edu/media


Modified Mice Test Alzheimer's Disease Drugs

PASADENA, Calif.- Alzheimer's disease is a progressive brain disorder that afflicts an estimated 4.5 million Americans and that is characterized by the presence of dense clumps of a small peptide called amyloid-beta in the spaces between neurons.

Developing therapeutic drugs to stop the formation of the lesions, called amyloid plaques, and to remove them from the brain has become the focus of intense research by pharmaceutical companies. Unfortunately, methods to test the efficacy of these drugs are limited, as is outside researchers' access to test results.

Now neuroscientist Joanna L. Jankowsky, a senior research fellow in the laboratory of Henry A. Lester, Bren Professor of Biology at the California Institute of Technology, working in collaboration with David R. Borchelt at the University of Florida, Gainesville, and colleagues at Johns Hopkins School of Medicine, Mayo Clinic Jacksonville, and the National Cancer Institute, has created a strain of genetically engineered mice that offers an unprecedented opportunity to test these new drugs and provides striking insight into possible future treatments for the disease.

A paper about the mouse model was published November 15 in the international open-access medical journal PLoS Medicine (www.plosmedicine.org).

The amyloid-beta peptide is something of an enigma. It is known to be produced normally in the brain and to be churned out in excess in Alzheimer's disease. But researchers don't know exactly what purpose the molecule usually serves--or, indeed, what happens to dramatically raise its concentration in the Alzheimer's brain.

The peptide is created when a molecule called amyloid precursor protein (APP) is snipped in two places, at the front end by an enzyme called beta-APP cleaving enzyme, and at the back end by an enzyme called gamma-secretase. If either of those two cuts is blocked, the amyloid-beta peptide won't be released--and plaque won't build up in the brain.

To prevent plaques from accumulating, drug companies have been experimenting with compounds that inhibit one or the other of the enzymes, thereby blocking the release of amyloid-beta. Jankowsky and her colleagues decided to test how well this approach to treating Alzheimer's disease will work. Because they lacked access to the drugs themselves, they instead engineered a laboratory mouse with two added genes that would mimic the effect of secretase inhibitor treatment. One gene triggered the continuous production of APP in the brain (and thus also of the amyloid-beta peptide), leading to substantial plaque deposits in mice as young as six months old. The second gene served as an off-switch for amyloid-beta production. The researchers were able to flip the switch at will by adding the antibiotic tetracycline to the mice's food--and when they did so, they also halted all plaque formation.

"The key point here is that we've completely arrested the progression of the pathology," says Jankowsky.

Plaque deposits that had already formed, however, weren't cleared out.

"We can stop the disease from getting worse in these mice, but we can't reverse it," says study co-author David Borchelt, Jankowsky's former postdoctoral research advisor at Johns Hopkins University. "Although it is possible that human brains repair damage better than mouse brains, the study suggests that it may be difficult to repair lesions once they have formed."

One implication of the research is that it suggests that treatment with drugs to stop plaque formation should begin as soon as possible after the disease is diagnosed. "It looks like early intervention would be the most effective way of treating disease," Jankowsky says.

"It was surprising to many people that the plaques didn't go away, but they are really very stable structures," says Jankowsky. It is also possible, some researchers believe, that the plaques themselves aren't damaging. Rather, they may be a sign of the overproduction of amyloid-beta and of the small, free-floating clumps of the peptide that actually cause cognitive problems. "The plaques may simply act as trash cans for what has already been produced," she says. If that is indeed the case, Jankowsky says, then "shutting down the production of amyloid-beta itself would be adequate to reverse cognitive decline."

On the other hand, removal of the plaques could improve cognitive function by allowing neurons that had been displaced by the protein deposits to recover and form new neural connections. That is why, the researchers say, an ideal therapy would be one that both prevented the overproduction of new amyloid-beta and cleared out existing deposits.

Drug companies are currently investigating treatment protocols for Alzheimer's disease in which antibodies against the amyloid-beta peptide are directly injected into the body. The antibodies latch onto the molecule and quickly clear it from the brain, along with any plaque deposits that have already formed. However, Jankowsky says, these drug therapies may not be appropriate for long-term use because of possible side effects. One clinical trial of the antibodies had to be stopped because some patients developed a serious brain inflammation known as encephalitis.

"The upshot of this research is that a combination of approaches may be the best way to tackle Alzheimer's disease," Jankowsky says. "The idea would be to use immunotherapy to acutely reverse the damage, followed by chronic secretase inhibition to prevent it from ever recurring."

For a copy of the paper, go to http://medicine.plosjournals.org/perlserv/?request=get-document&doi=10.1...


Contact: Dr. Joanna L. Jankowsky (626) 395-6884 jlj2@caltech.edu

Kathy Svitil (626) 395-8022 ksvitil@caltech.edu

Visit the Caltech Media Relations Web site at: http://pr.caltech.edu/media


Researchers Uncover New Details About How Signals Are Transmitted in the Brain

PASADENA, Calif.—An international team of scientists has announced a new breakthrough in understanding the molecular details of how signals move around in the human brain. The work is basic research, but could help pharmacologists design new drugs for treating a host of neurological disorders, as well as drugs for reducing alcohol and nicotine craving.

Reporting in the November 11 issue of the journal Nature, researchers from the California Institute of Technology and the University of Cambridge explain how they have learned to force a protein known as the 5-HT3 receptor to change its function by chemically changing the shape of one of the amino acids from which it is built. Using a technique developed at Caltech known as "unnatural amino acid mutagenesis," the researchers altered a proline amino acid in the 5-HT3 protein in order to modulate the receptor's ion channel. This gave the researchers control of the "switch" that is involved in neuron signaling.

According to Dennis Dougherty, lead author of the paper and the Hoag Professor of Chemistry at Caltech, the new research solves a 50-year-old mystery of how a neuroreceptor is changed by a chemical signal. Scientists have long known that signaling in the brain is a chemical process, in which a chemical substance known as a neurotransmitter is released into the synapse of a nerve and binds to a neuroreceptor, which is a protein that is found in the surface membranes of neurons. The action of the neurotransmitter changes the neuroreceptor in such a way that a signal is transmitted, but the precise nature of the structural change was unknown until now.

"The key is that we've identified the switch that has to get thrown when the neuroreceptor sends a signal," Dougherty says. "This switch is a proline."

The 5-HT3 receptor is one of a group of molecular structures in the brain cells that are known as Cys-loop receptors, which are associated with Parkinson's disease, schizophrenia, and learning and attention deficit disorders, as well as alcoholism and nicotine addiction. For treatments of some of these conditions, pharmacologists already custom-design drugs that have a general effect on the Cys-loop receptors. But the hope is that better design at the molecular level will lead to much better treatments that address more precisely the underlying signaling problems.

Dougherty says the work required the collaboration of organic chemists, molecular biologists, electrophysiologists, and computer modelers. His Caltech group worked closely with the research group of Caltech biologist Henry Lester, and with the group at Cambridge headed by Sarah Lummis, to establish how proline changes its structure to open an ion channel and launch a neuron signal.

"This is the most precise model of receptor signaling yet developed, and it provides valuable insights into the nature of neuroreceptors and the drugs that modulate them," Dougherty says.

"The promise for pharmacology is that precise control of the signaling could lead to new ways of dealing with receptors that are malfunctioning," says Lester, Caltech's Bren Professor of Biology. "The fundamental understanding of how this all works is of value to people who want to manipulate the signaling."

The 5-HT3 receptor is also involved in the enjoyment people derive from drinking alcohol. If the 5-HT3 receptors are blocked, then alcoholics no longer get as much pleasure from drinking. Therefore, better control of the signaling mechanism could lead to more potent drug interventions for alcoholics. The nicotine receptors are also related, so progress could also lead to better ways of reducing the craving for nicotine.

In addition to Dougherty, Lester, and Lummis, the other authors of the paper are Caltech graduate students Darren Beene (now graduated) and Lori Lee, and Cambridge researcher William Broadhurst.

The research is supported by the National Institute of Neurological Disorders and Stroke.

Robert Tindol

North Atlantic Corals Could Lead to Better Understanding of the Nature of Climate Change

PASADENA, Calif.—The deep-sea corals of the North Atlantic are now recognized as "archives" of Earth's climatic past. Not only are they sensitive to changes in the mineral content of the water during their 100-year lifetimes, but they can also be dated very accurately.

In a new paper appearing in Science Express, the online publication of the American Association for the Advancement of Science (AAAS), environmental scientists describe their recent advances in "reading" the climatic history of the planet by looking at the radiocarbon of deep-sea corals known as Desmophyllum dianthus.

According to lead author Laura Robinson, a postdoctoral scholar at the California Institute of Technology, the work shows in principle that coral analysis could help solve some outstanding puzzles about the climate. In particular, environmental scientists would like to know why Earth's temperature has held so steady for the last 10,000 years or so, after having previously been so variable.

"These corals are a new archive of climate, just like ice cores and tree rings are archives of climate," says Robinson, who works in the Caltech lab of Jess Adkins, assistant professor of geochemistry and global environmental science, and also an author of the paper.

"One of the significant things about this study is the sheer number of corals we now have to work with," says Adkins. "We've now collected 3,700 corals in the North Atlantic, and have been able to study about 150 so far in detail. Of these, about 25 samples were used in the present study.

"To put this in perspective, I wrote my doctoral dissertation with two dozen corals available," Adkins adds.

The corals needed to tell Earth's climatic story are typically found at depths of a few hundred to several thousand meters; scuba divers, by contrast, can go only about 50 to 75 meters below the surface. The water is bitterly cold, the seas are choppy, and, to complicate matters further, the corals can be hard to find.

The researchers' solution has been to harvest the corals by submarine. The star of the ventures so far has been the deep-submergence vehicle Alvin, famed for its dives to the wreck of the Titanic. In a 2003 expedition several hundred miles off the coast of New England, Alvin brought back the aforementioned 3,700 corals from the New England Seamounts.

D. dianthus is especially useful because it lives a long time, can be dated very precisely through uranium dating, and records the variations in carbon-14 (radiocarbon) caused by changing ocean currents. The carbon-14 all originally came from the atmosphere and decays at a precisely known rate, whether it is found in the water itself or in the skeleton of a coral. The less carbon-14 found, the "older" the water, which means the carbon-14 age of the coral will be "older" than its uranium age. The larger the age difference, the older the water that bathed the coral in the past.
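The two-clock logic described above can be sketched numerically. The half-life convention and the sample numbers below are illustrative assumptions, not values from the paper.

```python
import math

# Sketch of the "age difference" idea: the uranium-series date gives the
# coral's true calendar age, while its carbon-14 content gives an apparent
# radiocarbon age. The excess of the latter over the former reflects how
# long the bathing water had been out of contact with the atmosphere.
HALF_LIFE = 5568.0  # conventional (Libby) radiocarbon half-life, in years

def c14_age(fraction_remaining):
    """Apparent radiocarbon age from the surviving fraction of carbon-14."""
    return -HALF_LIFE / math.log(2) * math.log(fraction_remaining)

uranium_age = 15000.0          # calendar age from uranium dating (made up)
apparent_age = c14_age(0.14)   # from a measured carbon-14 fraction (made up)
water_age = apparent_age - uranium_age
print(round(apparent_age), "yr apparent;", round(water_age), "yr older water")
```

With these example numbers the carbon-14 clock reads several hundred years "older" than the uranium clock, which is the signature of old, carbon-14-depleted water having bathed the coral.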

In a perfectly tame and orderly environment, the deepest water would be the most depleted of carbon-14 because the waters at that depth would have allowed the element the most time to decay. A sampling of carbon-14 content at various depths, therefore, would allow a graph to be constructed, in which the maximum carbon-14 content would be found at the surface.

In the real world, however, the oceans circulate. As a result, an "older" mass of water can actually sit on top of a "younger" mass. What's more, the way ocean water circulates is tied to climatic variations. A more realistic graph plotting carbon-14 content against depth would thus be rather wavy, with steeper curves indicating a faster rate of new water flushing in, and flatter curves corresponding to relatively unperturbed water.

The researchers can get this information by cutting up the individual corals and measuring their carbon-14 content. During the animals' 100-year life spans, they take in minerals from the water and use the minerals to build their skeletons. The calcium carbonate fossil we see, then, is a skeleton of an animal that may have just died or may have lived thousands of years ago. But in any case, the skeleton is a 100-year record of how much carbon-14 was washing over the creature's body during its lifetime.

An individual coral can tell a story of the water it lived in because the amount of variation in different parts of the growing skeleton is an indication of the kind of water that was present. If a coral sample shows a big increase in carbon-14 about midway through life, then one can assume that a mass of younger water suddenly bathed the coral. On the other hand, if a huge decrease of carbon-14 is observed, then an older water mass must have suddenly moved in.

A coral with no change in the amount of carbon-14 observed in its skeleton means that things were pretty steady during its 100-year lifetime, but the story may be different for a coral at a different depth, or one that lived at a different time.

In sum, the corals tell how the waters were circulating, which in turn is profoundly linked to climatic change, Adkins explains.

"The last 10,000 years have been relatively warm and stable-perhaps because of the overturning of the deep ocean," he says. "The deep ocean has nearly all the carbon, nearly all the heat, and nearly all the mass of the climate system, so how these giant masses of water have sloshed back and forth is thought to be tied to the period of the glacial cycles."

Details of glaciation can be studied in other ways, but getting a history of water currents is much trickier, Adkins adds. If the ocean currents themselves are implicated in climatic change, then knowing precisely how the rules work would be a great advance in our knowledge of the planet.

"These guys provide us with a powerful new way of looking into Earth's climate," he says. "They give us a new way to investigate how the rate of ocean overturning has changed in the past."

Robinson says that the corals in the current collection all come from the North Atlantic. Future plans call for an expedition to the area southeast of the southern tip of South America to collect corals. The addition of this second collection would give a more comprehensive picture of the global history of ocean overturning, she says.

In addition to Robinson and Adkins, the other authors of the paper are Lloyd Keigwin of the Woods Hole Oceanographic Institution; John Southon of the University of California at Irvine; Diego Fernandez and Shin-Ling Wang of Caltech; and Dan Scheirer of the U.S. Geological Survey office at Menlo Park.

The Science Express article will be published in a future issue of the journal Science.

Robert Tindol

Geologists Uncover New Evidence About the Rise of Oxygen

PASADENA, Calif.—Scientists believe that oxygen first showed up in the atmosphere about 2.7 billion years ago. They think it was put there by one-celled organisms called cyanobacteria, which had recently become the first living things on Earth to make oxygen from water and sunlight.

The rock record provides a good bit of evidence that this is so. But one of these rocks has just gotten a great deal more slippery, so to speak.

In an article appearing in the Geological Society of America's journal Geology, investigators from the California Institute of Technology, the University of Tübingen in Germany, and the University of Alberta describe their new findings about the origin of the mineral deposits known as banded-iron formations, or "BIFs." A rather attractive mineral that is often cut and polished for paperweights and other decorative items, a BIF typically has alternating bands of iron oxide and silica. How the iron got into the BIFs to begin with is thought to be a key to knowing when molecular oxygen first was produced on Earth.

The researchers show that purple bacteria—primitive organisms that have thrived on Earth without producing oxygen since before cyanobacteria first evolved—could also have laid down the iron oxide deposits that make up BIFs. Further, the research shows that the newer cyanobacteria, which suddenly evolved the ability to make oxygen through photosynthesis, could have even been floating around when the purple bacteria were making the iron oxides in the BIFs.

"The question is what made the BIFs," says Dianne Newman, who is associate professor of geobiology and environmental science and engineering at Caltech and an investigator with the Howard Hughes Medical Institute. "BIFs are thought to record the history of the rise of oxygen on Earth, but this may not be true for all of them."

The classical view of how the BIFs were made is that cyanobacteria began putting oxygen in the atmosphere about 2.7 billion years ago. At the same time, hydrothermal sources beneath the ocean floors caused ferrous iron (that is, "nonrusted" iron) to rise in the water. This iron then reacted with the new oxygen in the atmosphere, which caused the iron to change into ferric iron. In other words, the iron literally "rusted" at the surface of the ocean waters, and then ultimately settled on the ocean floor as sediments of hematite (Fe2O3) and magnetite (Fe3O4).

The problem with this scenario is that, about 10 years ago, scientists in Germany discovered a way in which the more ancient purple bacteria could oxidize iron without oxygen. Instead, these anaerobic bacteria could have used a photosynthetic process in which light and carbon dioxide are used to turn the ferrous iron into ferric iron, throwing the mechanism of BIF formation into question.
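This anoxygenic pathway, often called photoferrotrophy, is commonly summarized in the general literature by an overall stoichiometry along the following lines (a sketch drawn from that literature, not an equation taken from the paper itself):

```latex
4\,\mathrm{Fe^{2+}} + \mathrm{CO_2} + 11\,\mathrm{H_2O}
\;\xrightarrow{\;h\nu\;}\;
[\mathrm{CH_2O}] + 4\,\mathrm{Fe(OH)_3} + 8\,\mathrm{H^+}
```

Here light energy drives the fixation of carbon dioxide into biomass ([CH2O]) while ferrous iron is oxidized to insoluble ferric hydroxide, which can settle out as the precursor of iron-oxide sediments; no molecular oxygen appears on either side of the reaction.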

Newman's postdoctoral researcher Andreas Kappler (now an assistant professor at the University of Tübingen) expanded on this discovery by doing some lab experiments to measure the rate at which purple bacteria could form ferric iron under light conditions relevant for different depths within the ocean.

Kappler's results showed that iron could indeed have been oxidized by these bacteria, in amounts matching what would have been necessary to form one of the Precambrian iron deposits in Australia.

Another of the paper's Caltech authors, Claudia Pasquero, determined the thickness of the purple bacterial layer that would have been needed for complete iron oxidation. Her results showed that the thickness of the bacterial layer could have been on the order of 17 meters, below wave base, which compares favorably to what is seen today in stratified water bodies such as the Black Sea.

Also, the results show that, in principle, the purple bacteria could have oxidized all the iron seen in the BIFs, even if the cyanobacteria had been present in overlying waters.

However, Newman says that the rock record contains various other kinds of evidence that oxygen was indeed absent in the atmosphere earlier than 2.7 billion years ago. Therefore, the goal of better understanding the history of the rise of oxygen could come down to finding out if there are subtle differences between BIFs that could have been produced by cyanobacteria and/or purple bacteria. And to do this, it's best to look at the biology of the organisms.

"The hope is that we'll be able to find out whether some organic compound is absolutely necessary for anaerobic anoxygenic photosynthesis to occur," Newman says. "If we can know how they work in detail, then maybe we'll be fortunate enough to find one molecule really necessary."

A good candidate is an organic molecule with high geological preservation potential that would have existed in the purple bacteria three billion years ago and still exists today. If the Newman team could find such a molecule that is definitely involved in the changing of iron to iron oxide, and is not present in cyanobacteria, then some of the enigmas of oxygen on the ancient earth would be solved.

"The goals are to get at the types of biomolecules essential for different types of photosynthesis-hopefully, one that is preservable," Newman says.

"I guess one interesting thing from our findings is that you can get rust without oxygen, but this is also about the history of metabolic evolution, and the ability to use ancient rock to investigate the history of life."

Better understanding microbial metabolism could also be of use in NASA's ambitious goal of looking for life on other worlds. The question of which organisms made the BIFs on Earth, therefore, could be useful for astrobiologists who may someday find evidence in rock records elsewhere.

Robert Tindol

Cracks or Cryovolcanoes? Surface Geology Creates Clouds on Titan

PASADENA, Calif.-Like the little engine that could, geologic activity on the surface of Saturn's moon Titan-maybe outgassing cracks and perhaps icy cryovolcanoes-is belching puffs of methane gas into the atmosphere of the moon, creating clouds.

This is the conclusion of planetary astronomer Henry G. Roe, a postdoctoral researcher, and Michael E. Brown, professor of planetary astronomy at the California Institute of Technology. Roe, Brown, and their colleagues at Caltech and the Gemini Observatory in Hawaii based their analysis on new images of distinctive clouds that sporadically appear in the middle latitudes of the moon's southern hemisphere. The research will appear in the October 21 issue of the journal Science.

The clouds provide the first explanation for a long-standing Titan mystery: From where does the atmosphere's copious methane gas keep coming? That methane is continuously destroyed by the sun's ultraviolet rays, in a process called photolysis. This photolysis forms the thick blanket of haze enveloping the moon, and should have removed all of Titan's atmospheric methane billions of years ago.

Clearly, something is replenishing the gas-and that something, say Roe and his colleagues, is geologic activity on the surface. "This is the first strong evidence for currently active methane release from the surface," Roe says.

Adds Brown: "For a long time we've wondered why there is methane in the atmosphere of Titan at all, and the answer is that it spews out of the surface. And what is tremendously exciting is that we can see it, from Earth; we see these big clouds coming from above these methane vents, or methane volcanoes. Everyone had thought that must have been the answer, but until now, no one had found the spewing gun."

Roe, Brown, and their colleagues made the discovery using images obtained during the past two years by adaptive optics systems on the 10-meter telescope at the W. M. Keck Observatory on Mauna Kea in Hawaii and the neighboring 8-meter telescope at the Gemini North Observatory. Adaptive optics is a technique that removes the blurring of atmospheric turbulence, creating images as sharp as would be obtained from space-based telescopes.

"These results came about from a collaborative effort between two very large telescopes with adaptive optics capability, Gemini and Keck," says astronomer Chadwick A. Trujillo of the Gemini Observatory, a co-author of the paper. "At both telescopes, the science data were collected from only about a half an hour of images taken over many nights. Only this unusual 'quick look' scheduling could have produced these unique results. At most telescopes, the whole night is given to a single observer, which could not have produced this science."

The two telescopes observed Titan on 82 nights. On 15 nights, the images revealed distinctive bright clouds-two dozen in all-at midlatitudes in the southern hemisphere. The clouds usually popped up quickly, and generally had disappeared by the next day. "We have several observations where on one night, we don't see a cloud, the next night we do, and the following night it is gone," Roe says.

Some of the clouds stretched as much as 2,000 km across the 5,150 km diameter moon. "An equivalent cloud on Earth would cover from the east coast to the west coast of the United States," Roe says. Although the precise altitude of the clouds is not known, they fall somewhere between 10 km and 35 km above the surface, within Titan's troposphere (most cloud activity on Earth also occurs within the troposphere).

Notably, all of the clouds were located within a relatively narrow band at around 40 degrees south latitude, and most were clustered tightly near 350 degrees west longitude. Both their sporadic appearance and their specific geographic location led the researchers to conclude that the clouds were not arising from the regular convective overturn of the atmosphere due to its heating by the sun (which produces the cloud cover across the moon's southern pole) but, rather, that some process on the surface was creating the clouds.

"If these clouds were due only to the global wind pattern, what we call general circulation, there's no reason the clouds should be linked to a single longitude. They'd be found in a band around the entire moon," Roe says.

Another possible explanation for the clouds' patchy formation is variation in the albedo, or brightness, of the surface. Darker surfaces absorb more sunlight than lighter ones. The air above those warmer spots would be heated, then rise and form convective clouds, much like thunderstorms on a summer's day on Earth. Roe and his colleagues, however, found no differences in the brightness of the surface at 40 degrees south latitude. Clouds can also form over mountains when prevailing winds force air upward, but in that case the clouds should always appear in the identical locations. "We see the clouds regularly appear in the same geographic region, but not always in the exact same location," says Roe.

The other way to make a cloud on Titan is to raise the humidity by directly injecting methane into the atmosphere, and that, the scientists say, is the most likely explanation here.

Exactly how the methane is being injected is still unknown. It may seep out of transient cracks on the surface, or bubble out during the eruption of icy cryovolcanoes.

Although no such features have yet been observed on the moon, Roe and his colleagues believe they may be common. "We think there are numerous sources all over the surface, of varying size, but most below the size that we could see with our instruments," he says.

One large feature near 350 degrees west longitude is probably creating the clump of clouds that forms in that region, while also humidifying the band at 40 degrees latitude, Roe says, "so you end up creating areas where the humidity is elevated by injected methane, making it easier for another, smaller source to also generate clouds. They are like weather fronts that move through. So we are seeing weather, on another planet, with something other than water. With methane. That's cool. It's better than science fiction."

Images are available upon request. For advance copies of the embargoed paper, contact the AAAS Office of Public Programs, (202) 326-6440 or scipak@aaas.org. ###

Contact: Dr. Henry G. Roe (626) 395-8708 hroe@gps.caltech.edu Kathy Svitil Caltech Media Relations (626) 395-8022 ksvitil@caltech.edu

Visit the Caltech Media Relations Web site at: http://pr.caltech.edu/media


NASA Grant Will Fund New Research on Mars with the Spirit and Opportunity Rovers

PASADENA, Calif.—When it comes to longevity, the Spirit and Opportunity rovers on Mars are giving some real competition to the pink bunny from those battery advertisements. In a couple of months the two rovers will celebrate their second anniversary on the red planet, even though their original missions were slated to last just 90 days.

With no end to the rover missions in sight, NASA has selected a planetary scientist at the California Institute of Technology to see if he and his team can learn new things about the ground the rovers are currently rolling on. With any luck, the researchers will uncover further evidence about water or water vapor once present on the planet's surface.

Oded Aharonson, assistant professor of planetary science at Caltech, was chosen as part of the Mars Exploration Rover Participating Scientist Program. Aharonson and seven other investigators have been selected from 35 applicants. According to NASA, the eight successful proposals were chosen on the basis of merit, relevance, and cost-effectiveness. Aharonson and the seven other finalists will become official members of the Mars Exploration Rovers science team, according to Michael Meyer, lead scientist for the Mars Exploration Program.

"Spirit and Opportunity have exceeded all expectations for their longevity on Mars, and both rovers are in good position to continue providing even more great science," said Meyer. "Because of this, we want to add to the rover team that collectively chooses how to use the rover's science instruments each day."

Aharonson's proposal is formally titled "Soil Structure and Stratification as Indicators of Aqueous Transport at the MER Landing Sites." In nontechnical talk, that means the researchers will be using the rovers to look at Martian dirt and rocks to see if liquid water has ever altered them.

The search for evidence of running water on Mars has been a "Holy Grail" for the entire exploratory program. Although the details of how life originally evolved are still largely conjectural, experts think that liquid water is required for the sort of chemistry thought to be conducive to the emergence of life as we know it.

Although there is no liquid water on the Martian surface at present, Opportunity has found geological evidence that water formerly flowed there. Thus, Aharonson will be looking for the telltale signatures of ancient as well as more recent aqueous transport and alteration.

"My experiments would normally take a couple of weeks, but it's not clear exactly how much time we'll devote to them," Aharonson said. "If we find something interesting, it could be much longer. But we might also cut the time shorter if, for example, we come upon an interesting rock we want to look at more closely."

Aharonson will work with a new Caltech faculty member, John Grotzinger, who comes from MIT as the Fletcher Jones Professor of Geology and is already a member of the rovers' science team. In addition, Caltech postdoctoral researcher Deanne Rogers will be involved in the research.

Spirit and Opportunity have been exploring sites on opposite sides of Mars since January 2004. They have found geological evidence of ancient environmental conditions that were wet and possibly habitable. They completed their primary missions three months later and are currently in the third extension of these missions. NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, manages the Mars Exploration Rover project for NASA's Science Mission Directorate.


Robert Tindol

Interdisciplinary Scientists Propose Paradigm Shift in Robotic Space Exploration

PASADENA, Calif.—Just ask any geologist. If you're studying the history of a planet and the life forms that may have lived on it, the really good places to look are rugged terrains like canyons and other areas where water, igneous activity, wind, and seismic rumblings have left their respective marks. Flat is not so good.

But when it comes to exploring other worlds, like Mars, the strategy for ground-based reconnaissance thus far has been to land in relatively smooth places so the spacecraft won't slam into something vertical as it touches down or as it rolls to a stop in its protective airbags. In the cases of the Mars landings—and all soft landings on other planets and moons, for that matter—flat is definitely good.

To address this disconnect, a team of interdisciplinary scientists from the California Institute of Technology, the University of Arizona, and the U.S. Geological Survey has unveiled a proposal to make core changes in the robotic exploration of the solar system. In addition to spaceborne orbiters, the "new paradigm" would involve sending orbiter-guided blimps (or other airborne agents) carrying instruments such as optical and thermal cameras, ground-penetrating radar, and gas and humidity sensors to chosen areas of a planet, as well as using herds of small, robotic, ground-based explorers.

The ground explorers would communicate with the airborne and/or spaceborne agents, which would run innovative software to identify, characterize, and integrate various types of spatial and temporal information for in-transit comparative analysis, hypothesis formulation, and target selection. This would lead to a "tier-scalable perspective," akin to the approach used by field geologists to solve a complicated geological puzzle.

Writing in an upcoming issue of the journal Planetary and Space Science, the researchers propose "a fundamentally new scientific mission concept for remote planetary surface and subsurface reconnaissance." The new approach would be cost-effective, in that it can include greater redundancy and thus greater assurance of mission success, while allowing largely unconstrained, science-driven missions to uncover transient events (for example, evidence of liquid water) and possible signs of life on other worlds.

"We're not trying to take anything away from the successful landings on Mars, Venus, and Titan, nor the orbital-based successes to most of the planetary bodies of the solar system," says Wolfgang Fink, a physicist who is serving a multiyear appointment as a visiting associate at Caltech. "But we think our tier-scalable mission concept will afford greater opportunity and freedom to identify and home in on geological and potential astrobiological 'sweet spots.'"

The new paradigm is spearheaded by Fink and by James Dohm, a planetary geologist in the Department of Hydrology and Water Resources at the University of Arizona. The team effort includes Mark Tarbell, who is Fink's associate in Caltech's Visual and Autonomous Exploration Systems Research Lab; Trent Hare of the U.S. Geological Survey office in Flagstaff; and Victor Baker, also of the University of Arizona.

"The paradigm-changing mission concept is by no means accidental," Dohm explains. "Our interdisciplinary team of scientists has evolved the concept through the profound realization of the requirement to link the various disciplines to optimally go after prime targets such as those environments that have high potential to contain life or far-reaching geological, hydrological, and climatological records."

Fink, for his part, is an expert in imaging systems, autonomous control, and science analysis systems for space missions. Dohm is a planetary and terrestrial field geologist, who, based on his experience, has a keen sense of how and where to study a terrain, be it earthly or otherworldly.

Dohm, who has performed geological investigations of Mars from local to global scales for nearly twenty years, says the study of the geology of other planets has been fruitful yet frustrating. "You're not able to verify the remote-based information in person and uncover additional information that would lead to an improved understanding of the geologic, water, climate, and possible biologic history of Mars.

"Ideally, you'd want to look at remote-based geological information while you walked with a rock hammer in hand along the margin that separates a lava flow from putative marine deposits, exploring possible water seeps and moisture embankments within the expansive canyon system of Valles Marineris that would extend from Los Angeles to New York, characterizing the sites of potential ancient and present hydrothermal activity, climbing over the ancient mountain ranges, gathering diverse rock types for lab analysis, and so on.

"We think we've devised a way to perform the geologic approach on other planets in more or less the way geologists do here on Earth."

Even though orbiting spacecraft have successfully collected significant data through instrument suites, working hypotheses are yet to be confirmed. In the case of Mars, for example, it is unknown whether the mountain ranges contain rock types other than volcanic, or whether sites of suspected hydrothermal activity are indeed hydrothermal environments, or whether the most habitable sites actually contain signs of life. These questions may be addressed through the "new paradigm."

The interdisciplinary collaboration provides the wherewithal for thinking out of the box because the researchers are, well, out of the box. "We're looking at a new way to cover lots of distance, both horizontally and vertically, and new, automated, ways to put the gathered information together and analyze it-perhaps before it even comes back to Earth," Fink says.

Just how innovative would the missions be? The tier-scalable paradigm would vary according to the conditions of the planet or moon to be studied, and, significantly, to the specific scientific goals. "I realize that several missions in the past have been lost during orbital insertion, but we think that the worst perils for a robotic mission are in getting the instruments to the ground successfully," Fink says. "So our new paradigm involves missions that are not crippled if a single rover is lost."

In the case of Mars, a typical mission would deploy maneuverable airborne agents, such as blimps, equipped with existing multilayered information (geologic, topographic, geomorphic, geophysical, hydrologic, elemental, spectral, etc.); the agents would acquire and ingest new information while in transit at various altitudes.

While floating and performing smart reconnaissance (that is, in-transit analysis of both the existing and newly acquired spatial and temporal information in order to formulate working hypotheses), the airborne agents would migrate toward sweet spots, all the while communicating with the orbiter or orbiters. Once the sweet spots are identified, the airborne agents would be in position to deploy or help guide orbiter-based deployment of ground-based agents for further analysis and sampling.

"Knowing where you are in the field is extremely critical to the geologic reconnaissance approach," Dohm says.

"Thanks to the airborne perspective and control, this would be less of a concern within our tier-scalable mission concept, as opposed to, for example, the case of an autonomous long-range rover on Mars that is dependent on visible landmarks to account for its current location," Fink adds.

The robotic ground-based agents would be simpler and smaller than the rovers currently being sent to Mars. Though not necessarily any more robust than the current generation, the agents would be able to take on rocky and steep terrains simply because there would be several of them, with varying degrees of intelligence, and they would therefore be somewhat expendable. If a single rover turned over on a steep terrain, for example, the mission would not be lost.

The stationary sensors would be even more expendable. The information they collect could be transmitted back to the airborne units and the orbiter or orbiters for comparative analysis of the spatial and temporal data, guiding redeployment if the units are mobile, or additional deployment, such as placing a drill rig in a prime sweet spot to sample possible near-surface groundwater or even living organisms.

And these are just the ideas for planets such as Mars. The researchers also have multi-tier mission plans for other planets and moons, where conditions could be much harsher.

In the case of Titan, a moon of Saturn with an atmosphere one and a half times as dense as Earth's, autonomously controlled airships would be ideal for exploration. Titan is thus perfectly suited for deployment of a three-tiered system consisting of orbiters, blimps, and both mobile and immobile ground-agents, especially in light of a communication time lag even longer than that for Mars.

On Io, Jupiter's extremely volcanically active and essentially airless moon, by contrast, orbiter-guided deployment of mobile ground-agents and immobile sensors would be a productive way of performing ground-based reconnaissance, capturing and studying active volcanism beyond Earth. In this case, the three tiers of spaceborne, airborne, and ground level would be reduced to two: spaceborne and ground level.

The advantages of redundancy and data compilation would still provide huge operational advantages on Io: if some of the ground-agents are lost to volcanic hazards, for example, those that remain can still identify, map out, and transmit back significant information, thus rendering the overall mission a success. The tier-scalable concept is equally applicable to investigating active processes on Earth that may put scientists in harm's way, including volcanism, flooding, and other hazardous natural and human-induced environmental conditions.

The researchers are already in the test-bed stage of trying out advanced hardware and software designs. But Fink says that much of the technology is already available, and even that which is not currently available (the software, primarily) is quite attainable. Further design, testing, and "ground-truthing" are required in diverse environments on Earth. Fink and Dohm envision field camps where the international community of scientists, engineers, mission architects, and others can design and test optimal tier-scalable reconnaissance systems for the various planetary bodies and scientific objectives.

"After all, the question is what do you do with your given payload," Fink says. "Do you land one big rover, or would you rather deploy airborne agents and multiple smaller ground-agents that are commanded centrally from above, approximating a geologist performing tier-scalable reconnaissance?

"We think the 'tunable' range of autonomy with our tier-scalable mission concept will be of interest to NASA and the other agencies worldwide that are looking at the exploration of the Solar System."

The title of the Planetary and Space Science article is "Next-generation robotic planetary reconnaissance missions: A paradigm shift."

Robert Tindol

Tenth Planet Has a Moon

PASADENA, Calif. --The newly discovered 10th planet, 2003 UB313, is looking more and more like one of the solar system's major players. It has the heft of a real planet (latest estimates put it at about 20 percent larger than Pluto), a catchy code name (Xena, after the TV warrior princess), and a Guinness Book-ish record of its own (at about 97 astronomical units-or 9 billion miles from the sun-it is the solar system's farthest detected object). And, astronomers from the California Institute of Technology and their colleagues have now discovered, it has a moon.

The moon, 100 times fainter than Xena and orbiting the planet once every couple of weeks, was spotted on September 10, 2005, with the 10-meter Keck II telescope at the W.M. Keck Observatory in Hawaii by Michael E. Brown, professor of planetary astronomy, and his colleagues at Caltech, the Keck Observatory, Yale University, and the Gemini Observatory in Hawaii. The research was partly funded by NASA. A paper about the discovery was submitted on October 3 to Astrophysical Journal Letters.

"Since the day we discovered Xena, the big question has been whether or not it has a moon," says Brown. "Having a moon is just inherently cool-and it is something that most self-respecting planets have, so it is good to see that this one does too."

Brown estimates that the moon, nicknamed "Gabrielle"-after the fictional Xena's fictional sidekick-is at least one-tenth the size of Xena, which is thought to be about 2700 km in diameter (Pluto is 2274 km), and may be around 250 km across.

To know Gabrielle's size more precisely, the researchers need to know the moon's composition, which has not yet been determined. Most objects in the Kuiper Belt, the massive swath of miniplanets that stretches from beyond Neptune out into the distant fringes of the solar system, are about half rock and half water ice. Since a half-rock, half-ice surface reflects a fairly predictable amount of sunlight, a general estimate of the size of an object with that composition can be made. Very icy objects, however, reflect a lot more light, and so appear brighter than similarly sized rocky objects-meaning their sizes can easily be overestimated.
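The size-from-brightness reasoning above can be sketched numerically. A standard relation for small solar-system bodies gives a diameter from absolute magnitude H and an assumed geometric albedo p: D ≈ 1329/√p × 10^(−H/5) km. The magnitude and albedo values below are illustrative assumptions, not measurements of Gabrielle.

```python
import math

def diameter_km(abs_magnitude: float, albedo: float) -> float:
    """Estimate a body's diameter from its absolute magnitude H and an
    assumed geometric albedo p, via D = 1329 / sqrt(p) * 10**(-H / 5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-abs_magnitude / 5.0)

# Illustrative only: the same observed brightness (same H) implies a much
# smaller body if the surface is bright ice than if it is dark rock.
H = 5.0  # hypothetical absolute magnitude
print(diameter_km(H, albedo=0.1))  # dark, rock-like surface: ~420 km
print(diameter_km(H, albedo=0.7))  # bright, icy surface: ~159 km
```

The same flux spread over less-reflective material requires a larger surface, which is why composition must be pinned down before size can be.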

Further observations of the moon with NASA's Hubble Space Telescope, planned for November and December, will allow Brown and his colleagues to pin down Gabrielle's exact orbit around Xena. With that data, they will be able to calculate Xena's mass, using a formula first devised some 300 years ago by Isaac Newton.

"A combination of the distance of the moon from the planet and the speed it goes around the planet tells you very precisely what the mass of the planet is," explains Brown. "If the planet is very massive, the moon will go around very fast; if it is less massive, the moon will travel more slowly. It is the only way we could ever measure the mass of Xena-because it has a moon."

The researchers discovered Gabrielle using Keck II's recently commissioned Laser Guide Star Adaptive Optics system. Adaptive optics is a technique that removes the blurring of atmospheric turbulence, creating images as sharp as would be obtained from space-based telescopes. The new laser guide star system allows researchers to create an artificial "star" by bouncing a laser beam off a layer of the atmosphere about 75 miles above the ground. Bright stars located near the object of interest are used as the reference point for the adaptive optics corrections. Since no bright stars are naturally found near Xena, adaptive optics imaging would have been impossible without the laser system.

"With Laser Guide Star Adaptive Optics, observers not only get more resolution, but the light from distant objects is concentrated over a much smaller area of the sky, making faint detections possible," says Marcos van Dam, adaptive optics scientist at the W.M. Keck Observatory, and second author on the new paper.

The new system also allowed Brown and his colleagues to observe, in January, a small moon around 2003 EL61, code-named "Santa," another large new Kuiper Belt object. No moon was spotted around 2005 FY9-or "Easterbunny"-the third of the three big Kuiper Belt objects recently discovered by Brown and his colleagues using the 48-inch Samuel Oschin Telescope at Palomar Observatory. But the presence of moons around three of the Kuiper Belt's four largest objects-Xena, Santa, and Pluto-challenges conventional ideas about how worlds in this region of the solar system acquire satellites.

Previously, researchers believed that Kuiper Belt objects obtained moons through a process called gravitational capture, in which two formerly separate objects moved too close to one another and became entrapped in each other's gravitational embrace. This was thought to be true of the Kuiper Belt's small denizens-but not, however, of Pluto. Pluto's massive, closely orbiting moon, Charon, broke off the planet billions of years ago, after Pluto was smashed by another Kuiper Belt object. Xena's and Santa's moons appear best explained by a similar origin.

"Pluto once seemed a unique oddball at the fringe of the solar system," Brown says. "But we now see that Xena, Pluto, and the others are part of a diverse family of large objects with similar characteristics, histories, and even moons, which together will teach us much more about the solar system than any single oddball ever would."

Brown's research is partly funded by NASA.

For more information on the discovery and on Xena, visit www.gps.caltech.edu/~mbrown/planetlila


Contact: Kathy Svitil (626) 395-8022 ksvitil@caltech.edu

Visit the Caltech Media Relations Web site at: http://pr.caltech.edu/media



Caltech and Cisco Team to Advance Development of FAST Network

PASADENA, Calif.- Most people-even broadband users-are now familiar with the relatively slow speed of downloading large files off the Internet. Imagine, then, the frustration of scientists working with files a million times larger than the average user ever encounters.

This was the difficulty physicists posed to Steven Low, associate professor of computer science and electrical engineering at the California Institute of Technology, three years ago.

Low, an expert in the field of advanced networking, and his team devised a scheme for moving immense amounts of data across high-speed networks at breakneck speed. Called FAST TCP, for Fast Active queue management Scalable Transmission Control Protocol, the new algorithm has been used by physicists to shatter data transfer speed records for the last two years.

In order to push this work further, the Low group needs an experimental facility to test its ideas. Available test beds are either production wide-area networks connecting physics laboratories around the world-such as CERN in Geneva, the Stanford Linear Accelerator Center, and Fermilab near Chicago-or emulated networks in a laboratory.

The first option offers a realistic environment for developing network protocols, but is almost impossible to reconfigure and monitor closely for experimental purposes; the second is the exact opposite.

The Low group proposed to build the "WAN in Lab," a wide-area network with all equipment contained in a laboratory, so that the network would be under the complete control of the experimenters. This unique facility might be described as a "wind tunnel" for networking. Like FAST TCP, the project is a collaboration with physicist Harvey Newman and mathematician John Doyle of Caltech.

Thanks to an equipment donation with a list value of $1.1 million from Cisco's Academic Research and Technology Initiatives (ARTI) group, this new lab is quickly becoming a reality. Last-minute work is being done, and the team expects the new facility to be ready this fall.

The ARTI group is part of Cisco's Chief Development Office and is focused on engaging with the research and education community in fostering innovation, research and development opportunities for advanced networking infrastructure in research and education networks and related projects around the world.

The relationship between Caltech's FAST TCP project and Cisco's ARTI group began in the spring of 2002, when Low received a research grant from Cisco's University Research Program (URP). The relationship continued with additional URP funding, which eventually led to this most recent equipment donation for the WAN in Lab project. Throughout the development process for the WAN in Lab, Cisco engineers worked directly with Caltech researchers in designing and implementing the WAN in Lab infrastructure.

"This is an exciting time for us," says Low. "Seeing how scientists actually use our protocol will help us refine our approach. The WAN in Lab is a unique experimental infrastructure critical for exploring many new ideas."

According to Bob Aiken, director of the ARTI group, FAST TCP is designed to address the difficult challenge of meeting the needs of the next generation of networking and computational science researchers. Multidisciplinary research is increasing on a global basis, and the networking requirements of these data-intensive applications require global-scale grids and networks.

"FAST TCP and WAN in Lab, in conjunction with existing research networks with which they connect and peer, such as National Lambda Rail, allow for the measurement of real optical networks and protocols in a tightly controlled environment with true global WAN distances," said Aiken. "Cisco is pleased to have been able to work with Steven Low and Caltech on these projects over the last several years, and look forward to an ongoing research relationship."

The remarkable thing about FAST TCP is that it uses the existing Internet. The secret is in software at the sending point that parses the data into network-compatible packets that avoid typical Internet congestion as they weave their way to their ultimate destination.
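The packet-level description above is a simplification; what FAST TCP actually changes is the sender's congestion-window update, using queueing delay (measured round-trip time versus the minimum observed round-trip time) rather than packet loss as its congestion signal. A minimal sketch of a delay-based update in the spirit of the published FAST algorithm (parameter values here are illustrative, not the deployed defaults):

```python
def fast_window_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
    """One delay-based window update in the spirit of FAST TCP:
    w <- min(2w, (1 - gamma) * w + gamma * ((base_rtt / rtt) * w + alpha)).
    When queueing delay grows (rtt well above base_rtt), the window stops
    growing; at equilibrium roughly alpha packets sit queued in the network."""
    return min(2 * w, (1 - gamma) * w + gamma * ((base_rtt / rtt) * w + alpha))

# Illustrative run: 100 ms propagation delay plus 20 ms of queueing delay.
w = 10.0
for _ in range(200):
    w = fast_window_update(w, base_rtt=0.100, rtt=0.120)
print(round(w))  # settles near alpha * rtt / (rtt - base_rtt) = 1200 packets
```

Because the signal is delay rather than loss, the window converges smoothly to a large value instead of oscillating, which is what makes sustained multi-gigabit transfers over ordinary Internet paths feasible.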

In a demonstration at the Supercomputing Bandwidth Challenge last November, FAST TCP set a new world record for sustained data transfer of 101 gigabits per second from Pittsburgh to various research facilities around the world. This phenomenal rate is equivalent to transmitting the entire contents of the Library of Congress in 15 minutes.
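The Library of Congress comparison can be sanity-checked with back-of-the-envelope arithmetic (the commonly quoted figure of roughly 10 terabytes for the Library's digitized print collection is an assumption here, not a number from the release):

```python
rate_bits_per_s = 101e9  # sustained record: 101 gigabits per second
seconds = 15 * 60        # 15 minutes

bytes_moved = rate_bits_per_s * seconds / 8
terabytes = bytes_moved / 1e12
print(round(terabytes, 1))  # ~11.4 TB moved in 15 minutes
```

At that rate, roughly 11 terabytes move in a quarter hour, which is indeed in line with common estimates of the Library's print holdings.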

Future work in the new WAN in Lab funded by the Cisco equipment donation, the National Science Foundation, the Army Research Office, and Corning will focus on making FAST TCP more robust. The WAN in Lab will also be useful for testing new network theories developed at Caltech and other universities.

Harvey Newman, Caltech physics professor and board chair of the U.S. CMS (the collaborative Compact Muon Solenoid experiment being conducted by U.S. scientists at CERN), expects joint research to lead to even greater performance.

"This new facility will allow scientists and network engineers to collaborate on designing networks that will revolutionize data-intensive grid computing and other forms of scientific computing over the next decade," said Newman. "This will be an enabling force in the next round of scientific discoveries expected when the Large Hadron Collider begins operation in 2007."

The creation of an interdisciplinary research infrastructure based on information is a major focus of the Information Science and Technology (IST) initiative at Caltech. The $1.1 million Cisco equipment donation is part of a $100 million effort to make IST a reality.

### Contact: Jill Perry (626) 395-3226 jperry@caltech.edu Visit the Caltech Media Relations website at: http://pr.caltech.edu/media