Old Caltech Telescope Yields New Science

PASADENA, Calif. - Meet Sarah Horst, throwback. The planetary science major, a senior at the California Institute of Technology, spent six months engaged in a bit of old-time telescope observing. The work led to some breakthrough research about Saturn's moon Titan, and indirectly led to funding for a new telescope at Caltech's Palomar Observatory.

Horst, 21, was looking for a part-time job in the summer of her sophomore year, and was hired by Mike Brown, an associate professor of planetary astronomy. Brown and graduate student Antonin Bouchez knew there had been previous evidence of "weather" on Titan in the form of clouds. But that evidence was elusive. "Someone would look one year and think they saw a cloud, then look the next year and not see a cloud," explains Brown. "What we were after was a way to look at Titan, night after night after night."

The problem, of course, is that all of the large telescopes like Keck are incredibly busy, booked by astronomers from around the world who use the precious time for their own line of research. So Brown and Bouchez knew that obtaining large amounts of time for a single project like this was not going to happen.

The solution: Use an old teaching telescope--the hoary 14-inch Celestron telescope atop Caltech's Robinson Lab--to do cutting-edge science that couldn't be done even at the world's largest telescopes in Hawaii.

Though the Robinson telescope is weak and the light pollution from Pasadena strong, conditions that prevent imaging the clouds themselves, the light reflected by those clouds could still be measured (the more clouds, the more light reflected). All that was needed was someone who could come night after night and take multiple images.

Enter Horst, the self-described "lowly undergraduate." For months, Horst spent her evenings in Robinson. "I did the setup, which involved a wheel that contained four light filters," she explains. Each filter would capture a different wavelength of light. Software switched the filters; all she had to do, says Horst, was to orient and focus the telescope.

Now, modern-day astronomers have it relatively easy when using their telescope time. Sure, they're up all night, but they sit on a comfortable chair in a warm room, hot coffee close at hand, and do their observing through a computer monitor that's connected to a telescope.

Not Horst. She did it the old way, in discomfort. "A lot of times in December or January I'd go in late at night, and it would be freezing," says Horst, who runs the 800-meter for the Caltech track team. "I'd wrap myself up in blankets." Horst spent hours in the dark, since the old dome itself had to be dark. "I couldn't even study," she says, "although sometimes I tried to read by the light of the moon."

A software program written by Bouchez plotted the light intensity from each image on a graph. When a particular image looked promising, Bouchez contacted Brown. As a frequent user of the Keck Observatory, which is powerful enough to take an image of the actual clouds, Brown was able to call colleagues who were using the Keck that night and quickly convince them that something exciting was going on. "It only took about ten minutes to get a quick image of Titan," says Brown. "The funny part was having to explain to them that we knew there were clouds because we had seen the evidence in our 14-inch telescope in the middle of the L.A. basin."

The result was "Direct Detection of Variable Tropospheric Clouds Near Titan's South Pole," which appeared in the December 19 issue of the journal Nature. It included this acknowledgement: "We thank . . . S. Horst for many nights of monitoring Titan in the cold."

The paper has helped Brown obtain the funding to build a custom 24-inch telescope. It will be placed in its own building atop Palomar Mountain, on the grounds of Caltech's existing observatory. The telescope will also be robotic; Brown will control it from Pasadena via a computer program he has written.

He'll use it for further observation of Titan and for other imaging as well, such as fast-moving comets. "Most astronomy is big," notes Brown, "big scopes looking at big, unchanging things, like galaxies. I like to look at changing things, which led to this telescope."

What really made this project unique, though, according to Brown, is the Robinson scope. "Sarah was able to do something with this little telescope in Pasadena that no one in the world, on any of their larger professional telescopes on high, dark mountaintops, had been able to do," he says. "Sometimes a good idea and stubbornness are better than the largest telescope in town."

For Horst, while the work wasn't intellectually challenging--"a trained monkey could have done it," she says with a laugh--it was, nonetheless, "a cool project. Everything here is so theoretical and tedious, and so classroom oriented. So in that way it was a nice experience and reminded me what real science was about."


Atmospheric researchers present new findings on the natural hydrogen cycle

Two months after a pivotal study on the potential impact of a future hydrogen economy on the environment, further evidence is emerging on what would happen to new quantities of hydrogen released into the atmosphere through human activity.

In an article appearing in the August 21 issue of the journal Nature, a group of researchers from the California Institute of Technology and other institutions reports results of a study of the atmospheric chemical reactions that produce and destroy molecular hydrogen in the stratosphere. Based on these results, the report concludes that most of the hydrogen eliminated from the atmosphere goes into the ground, and therefore that the scientific community will need to turn its focus toward soil destruction of hydrogen in order to accurately predict whether human emissions will accumulate in the air.

The researchers reached this conclusion through careful measurement of the abundance of a rare isotope of hydrogen known as deuterium. It has long been known that atmospheric molecular hydrogen is anomalously rich in deuterium, but it was unclear why. The only reasonable explanation seemed to be that atmospheric hydrogen is mostly destroyed by chemical reactions in the air, and that those reactions are relatively slow for deuterium-rich hydrogen, so it accumulates like salt in an evaporating pan of water.

If correct, this would mean that oxidizing atmospheric trace gases control the natural hydrogen cycle and that soils are relatively unimportant. The Caltech group discovered that one of the main natural sources of atmospheric hydrogen--the breakdown of methane--is actually responsible for the atmosphere's enrichment in deuterium. This result implies that reactions with atmospheric oxidants are relatively unimportant to the hydrogen cycle, and that uptake by soils is really in the driver's seat.

This issue is important because of the potential for a future hydrogen economy to leak hydrogen into the air--a scenario explored in the earlier study published in Science. Such leaks seem likely, and if they occur, the leaked hydrogen must either be destroyed by natural processes or accumulate in the atmosphere. If the latter, this hydrogen would inevitably find its way into the stratosphere and participate in chemical reactions that damage the ozone layer. The key to predicting how this chain of events will unfold is knowing what natural processes destroy hydrogen, and to what extent they might counteract increases in human emissions.

Hydrogen is a highly reactive element, but the question of when and where it reacts, and under what circumstances, is difficult to know precisely. This question is simplified in the stratosphere, where it's easier to single out and understand specific reactions. According to John Eiler, an assistant professor of geochemistry at the California Institute of Technology and an author of both the new paper and the June paper in Science, the new data come from air samples collected in the stratosphere with one of the high-flying ER-2 planes operated by the NASA Dryden Flight Research Center in the Mojave Desert.

The ER-2, a reconfigured U-2 spy plane, is part of NASA's Airborne Research Program and is crucial to atmospheric chemists interested in directly collecting stratospheric samples for air-quality research. Air samples collected by the ER-2 in various locales show that there is an extreme enrichment of deuterium in stratospheric hydrogen.

"We wanted to look at hydrogen in the stratosphere because it's easy to study the production of hydrogen from methane separate from other influences," Eiler explains. "It may seem odd to go to the stratosphere to understand what's happening in the ground, but this was the best way to get a global perspective on the importance of soils to the hydrogen cycle."

With precise information on the deuterium content of hydrogen formed from methane, the researchers were able to calculate that the soil uptake of hydrogen is as high as 80 percent. It is suspected that this hydrogen is used by soil-living microbes to carry on their biological functions, although the details of this process are poorly understood and have been the subject of only a few previous studies.

It seems likely that the hydrogen taken up by soils is relatively free of environmental consequences, but the question still remains how much more hydrogen the soil can consume. If future use of hydrogen in transportation results in a significant amount of leakage, then soil uptake must increase dramatically or it will be inadequate to cleanse the released hydrogen from the atmosphere, Eiler says.

"An analogy would be the discovery that trees and other plants get rid of some of the carbon dioxide that cars emit, but by no means all of it," he says. "So the question as we look toward a future hydrogen economy is whether the microbes will be able to eat the hydrogen fast enough."

The research was funded in part by the National Science Foundation. Bruce Doddridge, program director in the NSF's division of atmospheric science, said, "This carefully conducted research investigating the natural chemistry of sources and sinks affecting the abundance of molecular hydrogen in the troposphere results in the most accurate information to date, and appears to account for the tropospheric deuterium excess previously observed.

"A more accurate molecular hydrogen budget may have important implications as global fuel technology shifts its focus from fossil fuels to other sources," Doddridge added.

The lead author of the paper is Thom Rahn, a former postdoctoral scholar of Eiler's who is now affiliated with Los Alamos National Laboratory. The other authors are Paul Wennberg, a professor of atmospheric chemistry and environmental engineering science at Caltech; Kristie A. Boering and Michael McCarthy, both of UC Berkeley; Stanley Tyler of UC Irvine; and Sue Schauffler of the National Center for Atmospheric Research in Boulder, Colorado.

In addition to the NSF, other supporters of the research were the Davidow Fund and General Motors Corp., the David and Lucile Packard Foundation, the NASA Upper Atmosphere Research Program, and the National Center for Atmospheric Research.

Robert Tindol

Gravity Variations Predict Earthquake Behavior

PASADENA, Calif. — In trying to predict where earthquakes will occur, few people would think to look at Earth's gravity field. What does the force that causes objects to fall to the ground and the moon to orbit around the earth have to do with the unpredictable ground trembling of an earthquake?

Now, researchers at the California Institute of Technology have found that within subduction zones, the regions where one of the earth's plates slips below another, areas where the attraction due to gravity is relatively high are less likely to experience large earthquakes than areas where the gravitational force is relatively low.

The study, by Caltech graduate student Teh-Ru Alex Song and Associate Professor of Geophysics Mark Simons, will appear in the August 1 issue of the journal Science.

Until now, says Simons, researchers studying earthquake behavior generally took one of four approaches: 1) analyzing seismograms generated by earthquakes, 2) studying frictional properties of various types of rock in the laboratory or in the field, 3) measuring the slow accumulation of strain between earthquakes with survey techniques, and 4) building large-scale dynamic models of earthquakes and tectonics.

Instead of using one of these approaches, Song and Simons considered variations in the gravity field as a predictor of seismic behavior.

A gravity anomaly occurs when gravity is stronger or weaker than the regional average. For example, a mountain or an especially dense rock would tend to increase the nearby gravity field, creating a positive anomaly. Likewise, a valley would tend to create a negative anomaly.
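The anomaly is simply the local deviation from a regional baseline, which can be illustrated in a few lines. The numbers below are invented for illustration and are not survey data:

```python
# Gravity anomaly: a local measurement minus the regional average.
# Values are illustrative, in m/s^2.

measurements = [9.7801, 9.7805, 9.7796, 9.7810, 9.7792]

regional_mean = sum(measurements) / len(measurements)
anomalies = [g - regional_mean for g in measurements]

for g, a in zip(measurements, anomalies):
    sign = "positive" if a > 0 else "negative"
    print(f"g = {g:.4f} m/s^2 -> anomaly {a:+.4f} ({sign})")
```

Real surveys report anomalies in milliGals (1 mGal = 10^-5 m/s^2) after correcting for latitude and elevation; the subtraction of a regional mean shown here is only the core idea.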

Song and Simons examined existing data from satellite-derived observations of the gravity field in subduction zones. Comparing variations in gravity along the trenches with earthquake data from two different catalogs going back 100 years, the team found that, within a given subduction zone, areas with negative gravity anomalies correlated with increased large earthquake activity. Areas with relatively high gravity anomalies experienced fewer large earthquakes.

In addition, most of the energy released in earthquakes was in areas of low gravity. The team looked at subduction zone earthquakes with magnitude greater than 7.5 since 1976. They found that of the total energy released in those earthquakes, 44 percent came from regions with the most strongly negative gravity anomalies, though these regions made up only 14 percent of the total area.
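The comparison above amounts to binning fault area by gravity anomaly and then comparing each bin's share of released energy with its share of area. This toy sketch uses invented bin values chosen only to mimic the reported 44-percent/14-percent pattern; it is not the paper's catalog data:

```python
# Toy illustration of the energy-vs-area comparison.
# Each bin: (anomaly class, fraction of total area, energy in arbitrary units).
# Numbers are invented to mimic the reported pattern.

bins = [
    ("strongly negative", 0.14, 44.0),
    ("mildly negative",   0.36, 36.0),
    ("positive",          0.50, 20.0),
]

total_energy = sum(e for _, _, e in bins)
for name, area_frac, energy in bins:
    energy_frac = energy / total_energy
    print(f"{name:17s}: {area_frac:.0%} of area, {energy_frac:.0%} of energy")
```

A bin whose energy share far exceeds its area share, as in the first row, is what signals the correlation the team reports.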

Song and Simons also compared the location of large earthquakes with the topography of the subduction zones, finding that areas of low topography (such as basins) also corresponded well to areas with low gravity and high seismic activity.

So why would gravity and topography be related to seismic activity?

One possible link is via the frictional behavior of the fault. When two plates rub up against each other, friction between the plates makes it harder for them to slide. If the friction is great enough, the plates will stick. Over long periods of time, as the stuck plates push against each other, they may deform, creating spatial variations in topography and gravity.

In addition to deforming the plates, friction causes stress to build up. When too much stress builds up, the plates will suddenly jump, releasing the strain in the sometimes violent shaking of an earthquake.

If there were no friction between the plates, they would just slide right by each other smoothly, without bending or building up the strain that eventually results in earthquakes.

So in subduction zones, areas under high stress are likely to have greater gravity and topography anomalies, and are also more likely to have earthquakes.

Though this account provides a basic explanation for a rather complicated and unintuitive phenomenon, it is a simplified view, and Song and Simons would like to do more work to refine the details of the relation between the gravity field and large earthquakes.

The gravity anomalies the team considered take a long time to build up, and change very little over timescales of at least 1 million years. Short-term events such as earthquakes do change the gravity field as the earth's plates suddenly move, but those variations are small compared with the long-term anomalies, which are on the order of 4 x 10^-4 m/s^2.

Because topography and gravity variations persist over periods of time much longer than the typical time between earthquakes, 100 to 1,000 years, large earthquakes should be consistently absent from areas with large positive gravity anomalies, say Song and Simons.

"This study makes a strong connection between long-term tectonic behavior and short-term seismic activity," says Simons, "and thereby provides a class of new observations for understanding earthquake dynamics."

Though no one can tell when or where the next major earthquake will occur, Global Positioning System measurements can show where strain is accumulating. Simons hopes to use such measurements to test the prediction that areas with high gravity will have low strain, and vice versa. The team points out that although large earthquakes occur where gravity and topography are low, there are low-gravity areas in subduction zones with no seismic activity. Furthermore, the research concentrates on subduction zones, and so makes no predictions about other types of faults.

Nonetheless, within a subduction zone known to be earthquake-prone, Simons believes earthquakes are more likely to occur in low-gravity zones. High gravity areas do tend to have few earthquakes. So while the research does not offer a way to predict where earthquakes will happen, it can predict where they won't happen, says Simons.

MEDIA CONTACT: Ernie Tretkoff (626) 395-8733 tretkoff@caltech.edu

Visit the Caltech media relations web site: http://pr.caltech.edu/media


Hydrogen economy might impact Earth's stratosphere, study shows

According to conventional wisdom, hydrogen-fueled cars are environmentally friendly because they emit only water vapor -- a naturally abundant atmospheric gas. But leakage of the hydrogen gas that can fuel such cars could cause problems for the upper atmosphere, new research shows.

In an article appearing this week in the journal Science, researchers from the California Institute of Technology report that the leaked hydrogen gas that would inevitably result from a hydrogen economy, if it accumulates, could indirectly cause as much as a 10-percent decrease in atmospheric ozone. The researchers are physics research scientist Tracey Tromp, Assistant Professor of Geochemistry John Eiler, planetary science professor Yuk Yung, planetary science research scientist Run-Lie Shia, and Jet Propulsion Laboratory scientist Mark Allen.

If hydrogen were to replace fossil fuel entirely, the researchers estimate that 60 to 120 trillion grams of hydrogen would be released each year into the atmosphere, assuming a 10-to-20-percent loss rate due to leakage. This is four to eight times as much hydrogen as is currently released into the atmosphere by human activity, and would double or triple hydrogen inputs to the atmosphere from all sources, natural or human.
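The arithmetic behind these figures is straightforward. The roughly 600 Tg/yr total hydrogen throughput and the roughly 15 Tg/yr current human release used below are my back-calculations from the article's stated ratios, not numbers quoted in the study:

```python
# Back-of-envelope for the leakage estimate.
# One trillion grams = 1 teragram (Tg). The article's 60-120 Tg/yr at a
# 10-20% loss rate implies about 600 Tg of hydrogen used per year if fossil
# fuel were replaced entirely (my inference, not a quoted figure).

annual_hydrogen_use_tg = 600.0
for leak_rate in (0.10, 0.20):
    leaked = annual_hydrogen_use_tg * leak_rate
    print(f"{leak_rate:.0%} leakage -> {leaked:.0f} Tg/yr released")

# "Four to eight times" current human emissions implies ~15 Tg/yr today.
current_human_release_tg = 15.0
print(f"60-120 Tg/yr is {60 / current_human_release_tg:.0f}-"
      f"{120 / current_human_release_tg:.0f}x current human emissions")
```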

Because molecular hydrogen freely moves up and mixes with stratospheric air, the result would be the creation of additional water at high altitudes and, consequently, an increased moistening of the stratosphere. This in turn would result in cooling of the lower stratosphere and disturbance of ozone chemistry, which depends on a chain of chemical reactions involving hydrochloric acid and chlorine nitrate on water ice.

The estimates of potential damage to stratospheric ozone levels are based on an atmospheric modeling program that tests the various scenarios that might result, depending on how much hydrogen ends up in the stratosphere from all sources, both natural and anthropogenic.

Ideally, a hydrogen fuel-cell vehicle has no environmental impact. Energy is produced by combining hydrogen with oxygen pulled from the atmosphere, and the tailpipe emission is water. The hydrogen fuel could come from a number of sources (Iceland recently started pulling it out of the ground). Nuclear power could be used to generate the electricity needed to split water, and in principle, the electricity needed could also be derived from renewable sources such as solar or wind power.

By comparison, the internal combustion engine uses fossil fuels and produces many pollutants, including soot, noxious nitrogen and sulfur gases, and the "greenhouse gas" carbon dioxide. While a hydrogen fuel-cell economy would almost certainly improve urban air quality, it has the potential for unexpected consequences due to the inevitable leakage of hydrogen from cars, from hydrogen production facilities, and from the transportation of the fuel.

Uncertainty remains about the effects on the atmosphere because scientists still have a limited understanding of the hydrogen cycle. At present, it seems likely such emissions could accumulate in the air. Such a build-up would have several consequences, chief of which would be a moistening and cooling of the upper atmosphere and, indirectly, destruction of ozone.

In this respect, hydrogen would be similar to the chlorofluorocarbons (once the standard substance used for air conditioning and refrigeration), which were intended to be contained within their devices, but which in practice leaked into the atmosphere and attacked the stratospheric ozone layer.

The authors of the Science article say that the current situation is unique in that society has the opportunity to understand the potential environmental impact well ahead of the growth of a hydrogen economy. This contrasts with the cases of atmospheric carbon dioxide, methyl bromide, CFCs, and lead, all of which were released into the environment by humans long before their consequences were understood.

"We have an unprecedented opportunity this time to understand what we're getting into before we even switch to the new technology," says Tromp, the lead author. "It won't be like the case with the internal-combustion engine, when we started learning the effects of carbon dioxide decades later."

The question of whether or not hydrogen is bad for the environment hinges on whether the planet has the ability to consume excess anthropogenic hydrogen, explains Eiler. "This man-made hydrogen will either be absorbed in the soil -- a process that is still poorly understood but likely free of environmental consequences -- or react with other compounds in the atmosphere.

"The balance of these two processes will be key to the outcome," says Eiler. "If soils dominate, a hydrogen economy might have little effect on the environment. But if the atmosphere is the big player, the stratospheric cooling and destruction of ozone modeled in this Science paper are more likely to occur.

"Determining which of these two processes dominates should be a solvable problem," states Eiler, whose research group is currently exploring the natural budget of hydrogen using new isotopic techniques.

"Understanding the effects of hydrogen on the environment now should help direct the technologies that will be the basis of a hydrogen economy," Tromp adds. "If hydrogen emissions present an environmental hazard, then recognizing that hazard now can help guide investments in technologies to favor designs that minimize leakage.

"On the other hand, if hydrogen is shown to be environmentally friendly in every respect, then designers could pursue the most cost-effective technologies and potentially save billions in needless safeguards."

"Either way, it's good for society that we have an emission scenario at this stage," says Eiler. "In past cases -- with chlorofluorocarbons, nitrogen oxides, methane, methyl bromide, carbon dioxide, and carbon monoxide -- we always found out that there were problems long after they were in common use. But this time, we have a unique opportunity to study the anthropogenic implications of a new technology before it's even a problem."

If hydrogen indeed turns out to be bad for the ozone layer, should the transition to hydrogen-fueled cars be abandoned? Not necessarily, Tromp and Eiler claim.

"If it's the best way to provide a new energy source for our needs, then we can, and probably should, do it," Tromp says.

Eiler adds, "If we had had perfect foreknowledge of the effects of carbon dioxide a hundred years ago, would we have abandoned the internal combustion engine? Probably not. But we might have begun the process of controlling CO2 emissions earlier."

Contact: Robert Tindol (626) 395-3631


Caltech planetary scientist has "modest proposal" for sending probe to Earth's core

PASADENA, Calif. - Dave Stevenson has spent his career working on "swing-by" missions to the other planets. Now he has a modest proposal he'd like to swing by some government agency with a few billion dollars in available funding.

According to Stevenson's calculations, it should be possible to send a probe all the way to Earth's core by combining several proven technologies with a few well-grounded scientific assumptions about the workings of the planet. The probe would sink straight to the core in an envelope of molten iron, sending temperature readings, compositional information, and other data along the way.

"We've spent more than $10 billion in unmanned missions to the planets," says Stevenson, who is the Van Osdol Professor of Planetary Science at the California Institute of Technology. "But we've only been down about 10 kilometers into our own planet."

The benefits to science would be significant, Stevenson says, because so little has been directly observed about the inner workings of the planet. Scientists do not know, for example, the exact composition or even the temperature of the core, and what they do know is based on inferences about seismic data accumulated during earthquakes.

Stevenson says his proposal should be attractive to the scientific community because it is of the same scale, price-wise, as planetary exploration. To date, NASA has flown unmanned missions past all the planets except Pluto (if indeed Pluto is a planet at all), has made a few highly successful soft landings on Mars, has probed the clouds of Jupiter, is getting ready to probe the atmosphere of Titan, and has sent four spacecraft into interstellar space. Sending something into the earth, Stevenson believes, will have comparable payoffs in the quest for knowledge.

"When we fly to other worlds, we are often surprised by what we find, and I think the same will be the case if we go down."

Stevenson's plan calls for a crack to be opened in the earth, perhaps with some sort of explosion, probably a nuclear bomb. According to his figures, the crack will need to be several hundred meters in length and depth, and about 30 centimeters wide, to accommodate a volume of about 100 thousand to several million tons of molten iron.

The instant the crack opens, the entire volume of iron will be dropped in, completely filling the open space. Through the sheer force of its weight, the iron will create a continuing crack that will open all the way to the planet's core 3,000 kilometers below. Anything on a smaller scale may not work; anything larger will be even more expensive, so Stevenson thinks a crack of those dimensions is about right.

"Once you set that condition up, the crack is self-perpetuating," he explains; "it's fundamentally different from drilling, where it gets harder and harder, and eventually futile, the farther you go down."

The iron will continue to fall due to gravity because it is about twice the density of the surrounding material. Riding along in the mass of liquid iron will be one or more probes made of a material robust enough to withstand the heat and pressure. The probe will perhaps be the size of a grapefruit but definitely small enough to ride easily inside the 30-centimeter crack without getting wedged.

Inside the probe will be instrumentation for data collection, which will be relayed through low-intensity mechanical waves of some sort, probably deformations of the ball itself, sending out a sort of "Morse code" of data. Because radio waves cannot propagate through Earth, this is the only way to get the data transferred.

The probe will likely operate with about 10 watts of power, and it may even be possible to replenish energy and dispense with an on-board battery by harnessing mechanical energy from the force of the fall, just as electricity can be generated from falling water.

Such a low power rating will not make it possible to generate very strong shock waves for data transmission, but strong waves may not be necessary. In fact, Stevenson further suggests that the Laser Interferometer Gravitational-Wave Observatory (LIGO) might be recalibrated in its downtime to track the falling ball.

Based on the rate the molten iron would fall due to gravity, the ball would move downward into Earth at roughly human running pace (about 10 miles per hour), meaning that the entire mission would last a few weeks.
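As a rough check on that timescale, one can assume a constant descent at the quoted pace over the full 3,000 km; since the actual fall rate would vary with depth, this is only an order-of-magnitude sketch:

```python
# Order-of-magnitude descent time for the core probe, from the article's numbers.

depth_km = 3000.0                  # distance to the core quoted in the article
speed_mph = 10.0                   # "roughly human running pace"
speed_kmh = speed_mph * 1.609344   # ~16.1 km/h

hours = depth_km / speed_kmh
days = hours / 24.0
print(f"{hours:.0f} hours, i.e. about {days:.0f} days")
```

At a constant 10 mph the trip works out to roughly a week; a fall rate that slows with depth would stretch this toward the few-week figure quoted above.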

All this may sound to some like science fiction, but Stevenson says each of the principles involved is based on sound knowledge of crack propagation, fluid dynamics, mechanical-wave propagation, and "stress states." If these processes didn't already operate in nature, we would have no volcanoes, no poorly performing bathroom plumbing, and little to fear from a pebble striking a windshield.

"The biggest question is how to initially open the crack," says Stevenson. "Also, there's the technological challenge of having a probe that actually does what it's supposed to do."

Stevenson says he took part of the title of his paper, which is appearing in this week's issue of the journal Nature, from Jonathan Swift's famous essay "A Modest Proposal," both to have a bit of fun and to issue a serious scientific proposal. The Swift essay suggests that Ireland's terrible economic circumstances could be solved by people eating their own children, thereby allowing England to continue pillaging the country's resources for its own one-sided benefit.

"My proposal is not as outrageous as suggesting one should eat his own children, but still combines a serious proposal with some levity," Stevenson says. "Ninety-five percent of the scientists who read the article may simply laugh and enjoy the read, but if the other five percent seriously consider the goal of probing Earth's core, then I'll be happy."

"The biggest question should not be the cost, but whether we should pursue the goal of exploring Earth's interior," he says. "That said, I'd suggest we do it if we can keep the cost under $10 billion."

Contact: Robert Tindol (626) 395-3631


Three Caltech Faculty Named to American Academy of Arts and Sciences

PASADENA, Calif. — The American Academy of Arts and Sciences has elected three California Institute of Technology faculty members as academy fellows. They are Fred C. Anson, Elizabeth Gilloon Professor of Chemistry, Emeritus; Joseph L. Kirschvink, professor of geobiology; and Colin F. Camerer, Rea A. and Lela G. Axline Professor of Business Economics.

The 2003 class of 187 fellows and 29 foreign honorary members includes four college presidents, three Nobel laureates, and four Pulitzer Prize winners.

Among this year's new fellows and foreign honorary members are Kofi Annan, Secretary-General of the United Nations; journalist Walter Cronkite; philanthropist William H. Gates, Sr., co-chair of the Bill and Melinda Gates Foundation; novelist Michael Cunningham; recording industry pioneer Ray Dolby; artist Cindy Sherman; and Nobel Prize-winning physicist Donald Glaser.

"It gives me great pleasure to welcome these outstanding and influential individuals to the nation's oldest and most illustrious learned society. Election to the American Academy is an honor that acknowledges the best of all scholarly fields and professions. Newly elected fellows are selected through a highly competitive process that recognizes those who have made preeminent contributions to their disciplines," said academy president Patricia Meyer Spacks.

Anson has carried out pioneering work on the electrochemistry of polymers, on the catalysis of electrode reactions, and on electrochemical reactions that involve ultrathin coating of molecules on electrode surfaces.

Kirschvink, who has been honored by students for his excellence in teaching, studies how biological evolution has influenced, and has been influenced by, major events on the surface of the earth. His most significant contributions include the "snowball" earth theory—the theory that the entire Earth may have actually frozen over several times in its history, possibly stimulating evolution. Another original concept concerns the Cambrian evolutionary explosion that he believes may have been precipitated in part by the earth's rotational axis having moved to the equator in a geologically short interval of time.

Camerer's research in experimental and behavioral economics integrates psychology with economics to explore their impact on decision science and game theory. His research uses economics experiments and field studies to understand how people behave when making decisions. Such research is helpful in predicting economic trends and in understanding social policy: poverty, war, cross-cultural interactions, and most other social issues are affected by decision psychology.

The total number of Caltech faculty named to the academy is now 82.

The academy was founded in 1780 by John Adams, James Bowdoin, John Hancock, and other scholar-patriots "to cultivate every art and science which may tend to advance the interest, honor, dignity, and happiness of a free, independent, and virtuous people." The academy has elected as fellows and foreign honorary members the finest minds and most influential leaders from each generation, including George Washington and Ben Franklin in the eighteenth century, Daniel Webster and Ralph Waldo Emerson in the nineteenth, and Albert Einstein and Winston Churchill in the twentieth. The current membership includes more than 150 Nobel laureates and 50 Pulitzer Prize winners. Drawing on the wide-ranging expertise of its membership, the academy conducts thoughtful, innovative, non-partisan studies on international security, social policy, education, and the humanities.

A full list of new members is available on the Academy website at http://www.amacad.org/news/new2003.htm.

The academy will welcome this year's new fellows and foreign honorary members at the annual induction ceremony at the academy's headquarters in Cambridge, Mass., in October.

MEDIA CONTACT: Jill Perry, Media Relations Director (626) 395-3226 jperry@caltech.edu

Visit the Caltech media relations web site: http://pr.caltech.edu/media


Six Caltech Professors Awarded Sloan Research Fellowships

PASADENA, Calif.— Six Caltech professors recently received Alfred P. Sloan Research Fellowships for 2003.

The Caltech recipients in the field of chemistry are Paul David Asimow, assistant professor of geology and geochemistry, and Linda C. Hsieh-Wilson, Jonas C. Peters, and Brian M. Stoltz, all assistant professors of chemistry. In mathematics, a Sloan Fellowship was awarded to Danny Calegari, associate professor of mathematics, and in neuroscience, to Athanassios G. Siapas, assistant professor of computation and neural systems.

Each Sloan Fellow receives a grant of $40,000 for a two-year period. The grants of unrestricted funds are awarded to young researchers in the fields of physics, chemistry, computer science, mathematics, neuroscience, computational and evolutionary molecular biology, and economics. They give young scientists the freedom to pursue diverse lines of inquiry and to establish their own independent research projects at a pivotal stage in their careers. The Sloan Fellows are selected on the basis of "their exceptional promise to contribute to the advancement of knowledge."

From over 500 nominees, a total of 117 young scientists and economists from 50 different colleges and universities in the United States and Canada, including Caltech's six, were selected to receive a Sloan Research Fellowship.

Twenty-eight former Sloan Fellows have received Nobel prizes.

"It is a terrific honor to receive this award and to be a part of such a tremendous tradition of excellence within the Sloan Foundation," said Stoltz. Asimow commented that he will use his Sloan Fellowship to "support further investigation into the presence of trace concentrations of water in the deep earth... I'm pleased because funds that are unattached to any particular grant are enormously useful for seeding new and high-risk projects that are not quite ready to turn into proposals." On his research, Peters said, "The Sloan award will provide invaluable seed money for work we've initiated in the past few months regarding nitrogen reduction using molecular iron systems."

The Alfred P. Sloan Research Fellowship program was established in 1955 by Alfred P. Sloan, Jr., who was the chief executive officer of General Motors for 23 years. Its objective is to encourage research by young scholars at a time in their careers when other support may be difficult to obtain. It is the oldest program of the Alfred P. Sloan Foundation and one of the oldest fellowship programs in the country.

Contact: Deborah Williams-Hedges (626) 395-3227 debwms@caltech.edu

Visit the Caltech Media Relations Web site at: http://pr.caltech.edu/media



For Immediate Release February 28, 2003

Antarctic landmarks named after Caltech experts on glacier ice flow

There aren't too many living individuals who can go to the mall and buy a globe with their name printed on it, but the California Institute of Technology just added two.

Barclay Kamb and Hermann Engelhardt, longtime researchers on the workings of the Antarctic ice streams, have been honored by the American Advisory Committee on Antarctic Names (ACAN) with the renaming of two features near the gigantic Ross Ice Shelf, a Texas-sized mass of floating ice. Hereafter, the feature informally called "ice stream C" will bear the formal name Kamb Ice Stream, and "ice ridge BC" will be formally named the Engelhardt Ice Ridge.

Kamb is the Rawn, Jr., Professor of Geology and Geophysics, Emeritus, at Caltech and is still active in attempting to understand the rapid flow of the Antarctic ice streams and its potential effects on the health of the great ice sheet that covers 98 percent of the Antarctic continent. If the ice sheet were to float rapidly outward into the circum-Antarctic Ocean and melt, the addition of the huge volume of meltwater to the oceans would raise the sea level and have a drastic impact on coastal cities throughout the world.

Engelhardt, a senior research associate in geophysics, emeritus, has collaborated with Kamb for years in the research. They have undertaken a number of expeditions to Antarctica to collect ice-stream data by drilling boreholes down through the ice to the bottom and sending down instruments such as temperature sensors, pressure gauges, ice corers, sediment corers, and borehole video. Previously, they had used these techniques to study surging ("galloping") glaciers in Alaska.

Actually, the news for the Caltech Division of Geological and Planetary Sciences is even better, because two of Kamb's former students were also honored with an Antarctic naming. Ice ridge CD has been formally named the Raymond Ice Ridge after Charlie Raymond, and ice stream F has been named the Echelmeyer Ice Stream after Keith Echelmeyer. Raymond, who earned his doctorate in 1969, is now on the University of Washington faculty; Echelmeyer, who finished his Ph.D. in 1983, is a faculty member at the University of Alaska at Fairbanks.

In announcing the namings on behalf of ACAN, glaciology professor Terry Hughes of the University of Maine joked, "It looks like Caltech made almost a clean sweep of the ice streams."

The ice streams in Antarctica move through the ice sheet somewhat as an ocean current, such as the Gulf Stream, moves through the ocean. Most of the ice sheet flows a few meters a year, but in those places where ice streams form, the ice flows roughly a hundred times faster, approximately one meter per day. The ice streams are usually about 30 to 50 kilometers wide, 300 to 500 kilometers long, and 1 to 2 kilometers deep.
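A quick arithmetic check shows the quoted speeds are self-consistent; here "a few meters a year" is taken, as an assumption, to be roughly 3 meters per year:

```python
sheet_speed_m_per_yr = 3        # "a few meters a year" (assumed value)
stream_speed_m_per_day = 1      # "approximately one meter per day"

# Convert the ice-stream speed to a yearly rate and compare.
stream_speed_m_per_yr = stream_speed_m_per_day * 365
ratio = stream_speed_m_per_yr / sheet_speed_m_per_yr

print(round(ratio))  # roughly 120, i.e. "a hundred times faster"
```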

Why do they move so fast? "That's what we're trying to find out," says Kamb.

After 10 years of study, the researchers have demonstrated that the temperature at the base of the ice streams is at the melting point, whereas it is below freezing at the base of the ice sheet outside the ice streams. The ice streams' basal melting condition allows water pressure to build up under the ice, which tends to lift the ice mass above and to weaken the layer of glacial sediment (a clayey gravel called "till"), about one to two meters thick, that underlies the ice streams.

Both of these effects of pressure are capable of increasing flow of the ice streams, which are propelled downslope by gravity, with the soft, weak, till layer acting as a sort of basal "lubricant." The researchers believe that an increase in basal water pressure should result in a marked increase in ice-stream flow, but so far it has not been possible to observe and measure this expected effect in the actual ice streams.

It is believed that friction at the lateral shear margins and at bedrock humps under the ice (also called "sticky spots") prevents the velocity from getting out of control.

"The question is what will happen to the ice streams in the future," says Kamb. "Will they affect the flow of the ice sheet enough to contribute appreciably to future sea-level rise? That is the big issue for the future behavior of the Antarctic ice sheet."

To study the ice streams, Kamb and Engelhardt have made about a dozen National Science Foundation–funded expeditions during the Antarctic summer, in the period from late October to late January. Working in teams of about 13 or 14 people, including Caltech graduate students and support staff from the McMurdo base, the group drills a number of vertical boreholes, six inches in diameter, down to the bed at a depth of about 1,000 meters. Some of the holes are used to take core samples, while others are used to lower equipment such as video cameras to study the character and distribution of till in the basal ice.

The instrumental work has to be completed within about three or four hours after borehole completion, because by that time, the borehole freeze-up process has already progressed to such an extent that the ice "grabs" the equipment still in the hole.

Neither Kamb nor Engelhardt anticipates going again to Antarctica for this particular project, though their studies continue. In fact, some of the equipment purposely left in the bore holes is still sending data.


Contact: Robert Tindol (626) 395-3631



The Martian polar caps are almost entirely water ice, Caltech research shows

For future Martian astronauts, finding a plentiful water supply may be as simple as grabbing an ice pick and getting to work. California Institute of Technology planetary scientists studying new satellite imagery think that the Martian polar ice caps are made almost entirely of water ice—with just a smattering of frozen carbon dioxide, or "dry ice," at the surface.

Reporting in the February 14 issue of the journal Science, Caltech planetary science professor Andy Ingersoll and his graduate student, Shane Byrne, present evidence that the decades-old model of the polar caps being made of dry ice is in error. The model dates back to 1966, when the first Mars spacecraft determined that the Martian atmosphere was largely carbon dioxide.

Scientists at the time argued that the ice caps themselves were solid dry ice and that the caps regulate the atmospheric pressure by evaporation and condensation. Later observations by the Viking spacecraft showed that the north polar cap contained water ice underneath its dry ice covering, but experts continued to believe that the south polar cap was made of dry ice.

However, recent high-resolution and thermal images from the Mars Global Surveyor and Mars Odyssey, respectively, show that the old model could not be accurate. The high-resolution images show flat-floored, circular pits eight meters deep and 200 to 1,000 meters in diameter at the south polar cap, and an outward growth rate of about one to three meters per year. Further, new infrared measurements from the newly arrived Mars Odyssey show that the lower material heats up, as water ice is expected to do in the Martian summer, and that the polar cap is too warm to be dry ice.

Based on this evidence, Byrne (the lead author) and Ingersoll conclude that the pitted layer is dry ice, but the material below, which makes up the floors of the pits and the bulk of the polar cap, is water ice.

This shows that the south polar cap is actually similar to the north polar cap, which was determined, on the basis of Viking data, to lose its one-meter covering of dry ice each summer, exposing the water ice underneath. The new results show that the difference between the two poles is that the south pole's dry-ice cover is somewhat thicker—about eight meters—and does not disappear entirely during the summertime.

Although the results show that future astronauts may not be obliged to haul their own water to the Red Planet, the news is paradoxically negative for the visionary plans often voiced for "terraforming" Mars in the distant future, Ingersoll says.

"Mars has all these flood and river channels, so one theory is that the planet was once warm and wet," Ingersoll says, explaining that a large amount of carbon dioxide in the atmosphere is thought to be the logical way to have a "greenhouse effect" that captures enough solar energy for liquid water to exist.

"If you wanted to make Mars warm and wet again, you'd need carbon dioxide, but there isn't nearly enough if the polar caps are made of water," Ingersoll adds. "Of course, terraforming Mars is wild stuff and is way in the future; but even then, there's the question of whether you'd have more than a tiny fraction of the carbon dioxide you'd need."

This is because the total mass of dry ice is only a few percent of the atmosphere's mass and thus is a poor regulator of atmospheric pressure, since it gets "used up" during warmer climates. For example, when Mars's spin axis is tipped closer to its orbit plane, which is analogous to a warm interglacial period on Earth, the dry ice evaporates entirely, but the atmospheric pressure remains almost unchanged.

The findings present a new scientific mystery to those who thought they had a good idea of how the atmospheres of the inner planets compared to each other. Planetary scientists have assumed that Earth, Venus, and Mars are similar in the total carbon dioxide content, with Earth having most of its carbon dioxide locked up in marine carbonates and Venus's carbon dioxide being in the atmosphere and causing the runaway greenhouse effect. By contrast, the eight-meter layer on the south polar ice cap on Mars means the planet has only a small fraction of the carbon dioxide found on Earth and Venus.

The new findings further pose the question of how Mars could have been warm and wet to begin with. Working backward, one would assume that there was once a sufficient amount of carbon dioxide in the atmosphere to trap enough solar energy to warm the planet, but there's simply not enough carbon dioxide for this to clearly have been the case.

"There could be other explanations," Byrne says. "It could be that Mars was a cold, wet planet; or it could be that the subterranean plumbing would allow for liquid water to be sealed off underneath the surface."

In one such scenario, perhaps the water flowed underneath a layer of ice and formed the channels and other erosion features. Then, perhaps, the ice sublimated away, to be eventually redeposited at the poles.

At any rate, Ingersoll and Byrne say that finding the missing carbon dioxide, or accounting for its absence, is now a major goal of Mars research.

Contact: Robert Tindol (626) 395-3631



Clouds discovered on Saturn's moon Titan

Teams of astronomers at the California Institute of Technology and at the University of California, Berkeley, have discovered methane clouds near the south pole of Titan, resolving a fierce debate about whether clouds exist amid the haze of the moon's atmosphere.

The new observations were made using the W. M. Keck II 10-meter and the Gemini North 8-meter telescopes atop Hawaii's Mauna Kea volcano in December 2001. Both telescopes are outfitted with adaptive optics that provide unprecedented detail of features not seen even by the Voyager spacecraft during its flyby of Saturn and Titan.

The results are being published by the Caltech team in the December 19 issue of Nature and by the UC Berkeley and NASA Ames team in the December 20 issue of the Astrophysical Journal.

Titan is Saturn's largest moon, larger than the planet Mercury, and is the only moon in our solar system with a thick atmosphere. Like Earth's atmosphere, the atmosphere on Titan is mostly nitrogen. Unlike Earth, Titan is inhospitable to life due to the lack of atmospheric oxygen and its extremely cold surface temperatures (-183 degrees Celsius, or -297 degrees Fahrenheit). Along with nitrogen, Titan's atmosphere contains a significant amount of methane.

Earlier spectroscopic observations hinted at the existence of clouds on Titan, but gave no clue as to their location. These early data were hotly debated, since Voyager spacecraft measurements of Titan appeared to show a calm and cloud-free atmosphere. Furthermore, previous images of Titan had failed to reveal clouds, finding only unchanging surface markings and very gradual seasonal changes in the haziness of the atmosphere.

Improvements in the resolution and sensitivity achievable with ground-based telescopes led to the present discovery. The observations used adaptive optics, in which a flexible mirror rapidly compensates for the distortions caused by turbulence in Earth's atmosphere; these distortions are what cause the well-known twinkling of the stars. With adaptive optics, details as small as 300 kilometers across can be distinguished at Titan's enormous distance of 1.3 billion kilometers, equivalent to reading an automobile license plate from 100 kilometers away.
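The license-plate comparison can be checked with the small-angle formula; the 300-kilometer and 1.3-billion-kilometer figures are from the article, while the ~5-centimeter plate-character size is an assumption for illustration:

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600  # about 206,265 arcseconds per radian

def angular_size_arcsec(size_km, distance_km):
    # Small-angle approximation: angle (radians) = size / distance.
    return size_km / distance_km * ARCSEC_PER_RAD

# A 300 km feature on Titan at 1.3 billion km (figures from the article).
titan_feature = angular_size_arcsec(300, 1.3e9)

# A ~5 cm license-plate character viewed from 100 km (assumed size).
plate_char = angular_size_arcsec(0.05e-3, 100)

print(round(titan_feature, 3), round(plate_char, 3))
```

Both angles come out in the neighborhood of a twentieth to a tenth of an arcsecond, so the two feats of resolution are indeed comparable.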

The images presented by the two teams clearly show bright clouds near Titan's south pole.

"We see the intensity of the clouds varying over as little as a few hours," said post-doctoral fellow Henry Roe, lead author for the UC Berkeley group. "The clouds are constantly changing, although some persist for as long as a few days."

Titan experiences seasons much like Earth's, though its year is 30 times longer because of Saturn's much greater distance from the sun. Titan is currently in the midst of southern summer, and the south pole has been in continuous sunlight for over six Earth years. The researchers believe this may explain the location of the large clouds.

"These clouds appear to be similar to summer thunderstorms on Earth, but formed of methane rather than water. This is the first time we have found such a close analogy to the Earth's atmospheric water cycle in the solar system," says Antonin Bouchez, one of the Caltech researchers.

In addition to the clouds above Titan's south pole, the Keck images, like previous data, reveal the bright continent-sized feature that may be a large icy highland on Titan's surface, surrounded by linked dark regions that are possibly ethane seas or tar-covered lowlands.

"These are the most spectacular images of Titan's surface which we've seen to date," says Michael Brown, associate professor of planetary astronomy and lead author of the Caltech paper. "They are so detailed that we can almost begin to speculate about Titan's geology, if only we knew for certain what the bright and dark regions represented."

In 2004, Titan will be visited by NASA's Cassini spacecraft, which will look for clouds on Titan during its multiyear mission around Saturn. "Changes in the spatial distribution of these clouds over the next Titan season will help pin down their detailed formation process," says Imke de Pater, professor of astronomy at UC Berkeley. The Cassini mission includes a probe named Huygens that will descend by parachute into Titan's atmosphere and land on the surface near the edge of the bright continent.

The team conducting the Gemini observations consists of Roe and de Pater from UC Berkeley, Bruce A. Macintosh of Lawrence Livermore National Laboratory, and Christopher P. McKay of the NASA Ames Research Center. The team reporting results from the Keck telescope consists of Brown and Bouchez of Caltech and Caitlin A. Griffith of the University of Arizona.

The Gemini observatory is operated by the Association of Universities for Research in Astronomy under a cooperative agreement with the National Science Foundation, involving NOAO/AURA/NSF as the U.S. partner. The W.M. Keck Observatory is operated by the California Association for Research in Astronomy, a scientific partnership between the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. This research has been funded in part by grants from NSF and NASA.

Contact: Robert Tindol (626) 395-3631


