Caltech mineralogy professor George Rossman wins Feynman Prize for teaching excellence

PASADENA, Calif.—Ever wonder what becomes of the type of kid who keeps a rock collection?

For some of the brighter ones, at least, the answer is that they still collect rocks and minerals under the tutelage of award-winning California Institute of Technology professor George Rossman. As the member of the geology faculty most directly involved in mineralogy research, Rossman teaches a popular class at Caltech on the subject, and many of his former students are now prominent mineralogists themselves.

Rossman has been named this year's recipient of Caltech's most prestigious teaching honor, the Feynman Prize. The award, given to an outstanding faculty member each year, recognizes "exceptional ability, creativity, and innovation in both laboratory and classroom instruction." Named in honor of legendary Caltech physics professor and Nobel Laureate Richard Feynman, the prize is made possible by the generosity of an endowment from Ione and Robert E. Paradise, along with additional contributions from Mr. and Mrs. William H. Hurt.

Rossman won the award based on significant input from current and former students. Among the highlights of those comments is that he "is probably the best, clearest, and most exciting teacher I have ever had," that he "is such a great lecturer that he can make the class and each mineral very funny," and that he "is probably the best professor at Caltech."

For his part, Rossman rather modestly says that minerals are inherently interesting subject matter for the classroom. "Students relate to tangible, visible items," he says, and the specimens sitting on the floor behind his desk easily make his point. One item, for example, is a rather large conglomeration of rock and minerals called pegmatite. Found in San Diego County, the rock contains minerals such as mica, tourmaline, and quartz.

"For me, the minerals are a beautiful entry into the science, because the beautiful colors and shapes are always due to underlying scientific principles," he says. "Nature has the ability to bring together a large number of the elements of the periodic table, and combine them under different pressure and temperature conditions for some really spectacular results."

The practical results are more widespread than one might assume. Synthetic minerals are found in a number of high-tech electronic devices these days, with applications that include quartz oscillators and emerald and ruby lasers. The field of mineralogy also overlaps with a variety of disciplines, including chemistry, solid-state physics, materials science, industrial technology, environmental science, biology, and planetary science.

In fact, the young science student who hopes to study other worlds some day--or perhaps even go to some of them--might do well to study geology and mineralogy. "We presume that the physical principles we learn on Earth are applicable to Mars and other planets," Rossman says. "In fact, they should be applicable in other solar systems."

Rossman's research interests involve the study of how electromagnetic radiation interacts with minerals. His lab's work concentrates on the visible and infrared, but past research has involved pretty much every other region of the electromagnetic spectrum.

"Our goals include understanding at a very basic level the nature of the interaction--in other words, how we can use photons to study minerals," he says. "We've developed a variety of analytical protocols, and I suppose one of our most recent successes has been in learning that the hydrogen content of nominally anhydrous minerals constitutes an important global reservoir that is capable of holding much of the world's water."

As is typically the case in the Caltech labs, students find more than enough opportunity to immerse themselves in the research. Rossman's lab includes staff scientists and postdoctoral researchers, but graduate students and undergraduates are welcome members of the team. One of Rossman's recent graduate students, Elizabeth Johnson, is now a researcher at the Smithsonian Institution in Washington, D.C. An undergraduate now coming on board is a sophomore.

Writer: 
Robert Tindol

Planetary scientists find planetoid in Kuiper Belt; could be biggest yet discovered

PASADENA, Calif.—Planetary scientists at the California Institute of Technology and Yale University on Tuesday night discovered a new planetoid in the outer fringes of the solar system.

The planetoid, currently known only as 2004 DW, could be even larger than Quaoar--the current record holder in the area known as the Kuiper Belt--and is some 4.4 billion miles from Earth.

According to the discoverers--Caltech associate professor of planetary astronomy Mike Brown and his colleagues Chad Trujillo (now at the Gemini North observatory in Hawaii) and David Rabinowitz of Yale University--the planetoid was found as part of the same search program that discovered Quaoar in late 2002. The astronomers use the 48-inch Samuel Oschin Telescope at Palomar Observatory and the recently installed QUEST CCD camera, built by a consortium including Yale and Indiana University, to systematically study different regions of the sky each night.

Unlike Quaoar, the new planetoid hasn't yet been pinpointed on old photographic plates or other images. Because its orbit is therefore not well understood yet, it cannot be given an official name.

"So far we only have a one-day orbit," said Brown, explaining that the data covers only a tiny fraction of the orbit the object follows in its more than 300-year trip around the sun. "From that we know only how far away it is and how its orbit is tilted relative to the planets."

The tilt that Brown has measured is an astonishingly large 20 degrees, larger even than that of Pluto, which has an orbital inclination of 17 degrees and is an anomaly among the otherwise planar planets.

The size of 2004 DW is not yet certain; Brown estimates a size of about 1,400 kilometers, based on a comparison of the planetoid's luminosity with that of Quaoar. Because the distance of the object can already be calculated, its luminosity should be a good indicator of its size relative to Quaoar, provided the two objects have the same albedo, or reflectivity.

Quaoar is known to have an albedo of about 10 percent, which is slightly higher than the reflectivity of our own moon. Thus, if the new object is similar, the 1,400-kilometer estimate should hold. If its albedo is lower, then it could actually be somewhat larger; or if higher, smaller.
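The scaling behind that estimate can be sketched in a few lines. Reflected flux goes as albedo times cross-sectional area divided by the fourth power of distance (sunlight travels out and the reflection travels back), so relative brightness pins down relative size once an albedo is assumed. The numbers below are illustrative, not the team's measured values:

```python
import math

def estimated_diameter(ref_diameter_km, flux_ratio, dist_ratio, albedo_ratio):
    """Scale a reference body's diameter from relative brightness.

    Reflected flux scales as albedo * diameter**2 / distance**4, so
    diameter scales as sqrt(flux_ratio * dist_ratio**4 / albedo_ratio),
    all ratios taken relative to the reference body.
    """
    return ref_diameter_km * math.sqrt(flux_ratio * dist_ratio**4 / albedo_ratio)

# Illustrative: an object 1.4x as bright as Quaoar (diameter ~1,250 km)
# at the same distance and with the same albedo:
print(round(estimated_diameter(1250, 1.4, 1.0, 1.0)))  # 1479

# Halving the assumed albedo makes the inferred object larger:
print(round(estimated_diameter(1250, 1.4, 1.0, 0.5)))  # 2092
```

The second call shows the point made above: the same brightness with a lower albedo implies a larger body, and vice versa.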

According to Brown, scientists know little about the albedos of objects this large this far away, so the true size is quite uncertain. Researchers could best make size measurements with the Hubble Space Telescope or the newer Spitzer Space Telescope. The continued discovery of massive planetoids on the outer fringe of the solar system is further evidence that objects even farther and even larger are lurking out there. "It's now only a matter of time before something is going to be discovered out there that will change our entire view of the outer solar system," Brown says.

The team is working hard to uncover new information about the planetoid, which they will release as it becomes available, Brown adds. Other telescopes will also be used to better characterize the planetoid's features.

Further information is at the following Web site: http://www.gps.caltech.edu/~chad/2004dw

Writer: 
Robert Tindol

Caltech geophysicists gain new insights on Earth's core–mantle boundary

Earth's core–mantle boundary is a place none of us will ever go, but researchers using a special high-velocity cannon have produced results showing there may be molten rock at this interface, at a depth of about 1,800 miles. Further, this molten rock may have rested peacefully at the core–mantle boundary for eons.

In a presentation at the fall meeting of the American Geophysical Union (AGU) today, California Institute of Technology geophysics professor Tom Ahrens reports new measurements of the density and temperature of magnesium silicate--the stuff found in Earth's interior--when it is subjected to the conditions that exist at the planet's core–mantle boundary.

The Caltech team did their work in the institute's shock wave laboratory, where an 80-foot light-gas gun is specially prepared to fire one-ounce tantalum-faced plastic bullets at mineral samples at speeds up to about 25,000 feet per second--roughly ten times faster than a bullet fired from a conventional rifle. The 30-ton apparatus uses compressed hydrogen as a propellant, and the resulting impact replicates the 1.35 million atmospheres of pressure and the 8,500 degrees Fahrenheit temperature that exist at the core–mantle boundary.

The measurements were conducted using natural, transparent, semiprecious gem crystals of enstatite from Sri Lanka, as well as synthetic glass of the same composition. Upon compression, these materials transform to a 30-percent-denser structure called perovskite, which also dominates Earth's lower mantle at depths from 415 miles down to the core–mantle boundary.

According to Ahrens, the results "have significant implications for understanding the core–mantle boundary region in the Earth's interior, the interface between rocky mantle and metallic core." The report represents the work of Ahrens and assistant professor of geology and geochemistry Paul Asimow, along with graduate students Joseph Akins and Shengnian Luo.

The researchers demonstrated by two independent experimental methods that the major mineral of Earth's lower mantle, magnesium silicate in the perovskite structure, melts at the pressure of the core–mantle boundary to produce a liquid whose density is greater than or equal to that of the mineral itself. This implies that a layer of partially molten mantle would be gravitationally stable over geologic time at the boundary, where seismologists have discovered anomalous features best explained by the presence of partial melt.

Two types of experiments were conducted: pressure-density experiments and shock temperature measurements. In the pressure-density experiments, the velocity of the projectile prior to impact and the velocity of the shock wave passing through the target after impact are measured using high-speed optical and x-ray photography. These measurements allow calculation of the pressure and density of the shocked target material. In shock temperature measurements, thermal emission from the shocked sample at visible and near-infrared wavelengths is monitored with a six-channel pyrometer, and the brightness and spectral shape are converted to temperature.
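The conversion from the two measured velocities to pressure and density follows the standard Rankine-Hugoniot conservation relations for a single shock. A minimal sketch, with an assumed initial density for enstatite and illustrative (not measured) velocities:

```python
RHO0 = 3200.0  # kg/m^3, approximate initial density of enstatite (assumed)

def hugoniot_state(shock_speed, particle_speed, rho0=RHO0):
    """Rankine-Hugoniot relations for a single planar shock.

    Given the measured shock-wave speed Us and particle speed up
    (both in m/s), momentum conservation gives the shock pressure
    P = rho0 * Us * up, and mass conservation gives the shocked
    density rho = rho0 * Us / (Us - up).
    """
    pressure = rho0 * shock_speed * particle_speed           # Pa
    density = rho0 * shock_speed / (shock_speed - particle_speed)
    return pressure, density

# Illustrative speeds of 11 km/s (shock) and 4 km/s (particle):
p, rho = hugoniot_state(11_000.0, 4_000.0)
print(f"{p / 101_325:.2e} atm, {rho:.0f} kg/m^3")
# ~1.39 million atmospheres, with the sample ~57% denser than it started
```

This is why only two velocity measurements per shot suffice to place a point on the pressure-density curve described above.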

In both types of experiments, the shock wave takes about one ten-millionth of a second to pass through the dime-sized sample, and the velocity and optical emission measurements must resolve this extremely short duration event.

The pressure-density experiments yielded a surprising result. When the glass starting material is subjected to increasingly strong shocks, densities are at first consistent with the perovskite structure, and then a transition is made to a melt phase at a pressure of 1.1 million atmospheres. As expected for most materials under ordinary conditions, the melt phase is less dense than the solid. Shock compression of the crystal starting material, however, follows a lower temperature path, and the transition from perovskite shock states to molten shock states does not occur until a pressure of 1.7 million atmospheres is reached. At this pressure, the liquid appears to be 3 to 4 percent denser than the mineral. Like water and ice at ordinary pressure and 32 °F, under these high-pressure conditions the perovskite solid would float and the liquid would sink.

Just as the negative volume change on the melting of water ice is associated with a negative slope of the melting curve in pressure-temperature space (which is why ice-skating works--the pressure of the skate blade transforms ice to water at a temperature below the ordinary freezing point), this result implies that the melting curve of perovskite should display a maximum temperature somewhere between 1.1 and 1.7 million atmospheres, and a negative slope at 1.7 million atmospheres. This implication of the pressure-density results was tested using shock temperature measurements. In a separate series of experiments on the same starting materials, analysis of the emitted light constrained the melting temperature at 1.1 million atmospheres to about 9,900 °F. However, at the higher pressure of 1.7 million atmospheres, the melting point is 8,500 °F. This confirms that somewhere above 1.1 million atmospheres, the melting temperature begins to decrease with increasing pressure and the melting curve has a negative slope.
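The link between the sign of the volume change on melting and the slope of the melting curve is the Clausius-Clapeyron relation, dT/dP = T ΔV / ΔH. A minimal sketch, with purely illustrative values rather than measured ones:

```python
def clausius_clapeyron_slope(t_melt, dv_molar, dh_fusion):
    """Slope of a melting curve: dT/dP = T * dV / dH (Clausius-Clapeyron).

    dv_molar is the molar volume change on melting (V_liquid - V_solid,
    m^3/mol) and dh_fusion the latent heat of fusion (J/mol). For most
    materials both are positive, giving a positive slope; a melt denser
    than its solid makes dV, and hence the slope, negative.
    """
    return t_melt * dv_molar / dh_fusion

# A melt denser than the solid (dV < 0), as inferred for perovskite
# above 1.1 million atmospheres, gives a negative slope:
print(clausius_clapeyron_slope(5000.0, -0.5e-6, 1.0e5) < 0)  # True

# Ordinary melting (dV > 0), as for the glass at lower pressure:
print(clausius_clapeyron_slope(2000.0, 1.0e-6, 1.0e5) > 0)   # True
```

The same relation run in reverse is how the measured drop from 9,900 °F to 8,500 °F confirms that the melt is the denser phase in that pressure range.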

Taking the results of both the pressure-density and shock temperature experiments together confirms that the molten material may be neutrally or slightly negatively buoyant at the pressure of the base of the mantle, which is 1.35 million atmospheres. Molten perovskite would, however, still be much less dense than the molten iron alloy of the core. If the mantle were to melt near the core–mantle boundary, the liquid silicate could be gravitationally stable in place or could drain downwards and pond immediately above the core–mantle boundary.

The work was motivated by the 1995 discovery of ultralow velocity zones at the base of Earth's mantle by Donald Helmberger, who is the Smits Family Professor of Geophysics and Planetary Science at Caltech, and Edward Garnero, who was then a Caltech graduate student and is now a professor at Arizona State University. These ultralow velocity zones (notably underneath the mid-Pacific region) appear to be 1-to-30-mile-thick layers of very low-seismic-velocity rock just above the interface between Earth's rocky mantle and the liquid core, at a depth of 1,800 miles.

Helmberger and Garnero showed that, in this zone, seismic shear waves suffer a 30 percent decrease in velocity, whereas compressional wave speeds decrease by only 10 percent. This behavior is widely attributed to the presence of some molten material. Initially, many researchers assumed that this partially molten zone might represent atypical mantle compositions, such as a concentration of iron-bearing silicates or oxides with a lower melting point than ordinary mantle--about 7,200 °F at this pressure.

The new results, however, indicate that the melting temperature of normal mantle composition is low enough to explain melting in the ultralow velocity zones, and that this melt could coexist with residual magnesium silicate perovskite solids. Thus the new Caltech results indicate that no special composition is required to induce an ultralow velocity zone just above the core–mantle boundary or to allow it to remain there without draining away. The patchiness of the ultralow velocity zones suggests that Earth's lowermost mantle temperatures can be just hotter than, or just cooler than, the temperature that is required to initiate melting of normal mantle at a depth of 1,800 miles.

Writer: 
Robert Tindol

Atmospheric scientists still acquire samples the old-fashioned way--by flying up and getting them

PASADENA, Calif.—Just as Ishmael always returned to the high seas for whales after spending time on land, an atmospheric researcher always returns to the air for new data.

All scientific disciplines depend on the direct collection of data on natural phenomena to one extent or another. But atmospheric scientists still find it especially important to do some empirical data-gathering, and the best way to get what they need is by taking up a plane and more or less opening a window.

At the California Institute of Technology, where atmospheric science is a major interest involving researchers in several disciplines, the collection of data is considered important enough to justify the maintenance of a specially equipped plane dedicated to the purpose. In addition to the low-altitude plane, several Caltech researchers who need higher-altitude data are also heavy users of the jet aircraft maintained by NASA for its Airborne Science Program--a longstanding but relatively unsung initiative with aircraft based at the Dryden Flight Research Center in California's Mojave Desert.

"The best thing about using aircraft instead of balloons is that you are assured of getting your instruments back in working order," says Paul Wennberg, professor of atmospheric chemistry and environmental engineering science. Wennberg, whose work has been often cited in policy debates about the human impact on the ozone layer, often relies on the NASA suborbital platforms (i.e., various piloted and drone aircraft operating at mid to high altitudes) to collect his data.

Wennberg's experiments typically ride on the high-flying ER-2, which is a revamped reconnaissance U-2. The plane has room for the pilot only, which means that the experimental equipment has to be hands-free and independent of constant technical attention. Recently, Wennberg's group has made measurements from a reconfigured DC-8 that has room for some 30 passengers, depending on the scientific payload, but the operating ceiling is some tens of thousands of feet lower than that of the ER-2.

"The airplane program has been the king for NASA in terms of discoveries," Wennberg says. "Atmospheric science, and certainly atmospheric chemistry, is still very much an observational field. The discoveries we've made have not been by modeling, but by consistent surprise when we've taken up instruments and collected measurements."

In his field of atmospheric chemistry, Wennberg says the three foundations are laboratory work, synthesis and modeling, and observational data--the last still being the most important.

"You might have hoped we'd be at the place where we could go to the field as a confirmation of what we did back in the lab or with computer programs, but that's not true. We go to the field and see things we don't understand."

Wennberg sometimes worries about the public perception of the value of the Airborne Science Program because the launching of a conventional jet aircraft is by no means as glamorous or romantic as the blasting off of a rocket from Cape Canaveral. By contrast, his own data-collection would appear to most as bread-and-butter work involving a few tried-and-true jet airplanes.

"If you hear that the program uses 'old technology,' this refers to the planes themselves and not the instruments, which are state-of-the-art," he says. "The platforms may be old, but it's really a vacuous argument to say that the program is in any way old.

"I would argue that the NASA program is a very cost-effective way to go just about anywhere on Earth and get data."

Chris Miller, who is a mission manager for the Airborne Science Program at the Dryden Flight Research Center, can attest to the range and abilities of the DC-8 by merely pointing to his control station behind the pilot's cabin. On his wall are mounted literally dozens of travel stick-ons from places around the world where the DC-8 passengers have done research. Included are mementos from Hong Kong, Singapore, New Zealand, Australia, Japan, Thailand, and Greenland, to name a few.

"In addition to atmospheric chemistry, we also collect data for Earth imaging, oceanography, agriculture, disaster preparedness, and archaeology," says Miller. "There can be anywhere from two or three to 15 experiments on a plane, and each experiment can be one rack of equipment to half a dozen."

Wennberg and colleagues Fred Eisele of the National Center for Atmospheric Research and Rick Flagan, who is McCollum Professor of Chemical Engineering, have developed special instrumentation to ride on the ER-2. One of their new instruments is a selected-ion chemical-ionization mass spectrometer, which is used to study the composition of atmospheric aerosols and the mechanisms that lead to their production.

Caltech's Nohl Professor and professor of chemical engineering, John Seinfeld, conducts an aircraft program that is a bit more down-to-earth, at least in the literal sense.

Seinfeld is considered perhaps the world's leading authority on atmospheric particles, or so-called aerosols--that is, all the stuff in the air, such as sulfur compounds and various other pollutants, not classifiable as a gas. Seinfeld and his associates study these particles' size, composition, optical properties, effects on solar radiation and cloud formation, and ultimately their effect on Earth's climate.

"Professor Rick Flagan and I have been involved for a number of years in an aircraft program largely funded by the Office of Naval Research, and established jointly with the Naval Postgraduate School in Monterey. The joint program was given the acronym CIRPAS," says Seinfeld, explaining that CIRPAS, the Center for Interdisciplinary Remotely Piloted Aircraft Studies, acknowledges the Navy's interest in making certain types of environmental research amenable for drone aircraft like the Predator.

"The Twin Otter is our principal aircraft, and it's very rugged and dependable," he adds. "It's the size of a small commuter aircraft, and it's mind-boggling how much instrumentation we can pack in this relatively small aircraft."

Caltech scientists used the plane in July to study the effects of particles on the marine strata off the California coast, and the plane has also been to the Canary Islands, Japan, Key West, Florida, and other places. In fact, the Twin Otter can essentially be taken anywhere in the world.

One hot area of research these days, pardon the term, is the interaction of particulate pollution with radiation from the sun. This is important for climate research, because, if one looks down from a high-flying jet on a smoggy day, it becomes clear that a lot of sunlight is bouncing back and never reaching the ground. Changing atmospheric conditions therefore affect Earth's heat balance.

"If you change properties of clouds, then you change the climatic conditions on Earth," Seinfeld says. "Clouds are a major component in the planet's energy balance."

Unlike the ER-2, in which instrumentation must be contained in a small space, the Twin Otter can accommodate instruments such as mass spectrometers for direct onboard logging and analysis of data. The data are streamed to the ground in real time, which means that the scientists can sit in the hangar and watch the data come in. Seinfeld himself is one of those on the ground, leaving the two scientist seats in the plane to those whose instruments may require in-flight attention.

"We typically fly below 10,000 feet because the plane is not pressurized. Most of the phenomena we want to study occur below this altitude," he says.

John Eiler, associate professor of geochemistry, is another user of the NASA Airborne Science Program, particularly the air samples returned by the ER-2. Eiler is especially interested these days in the global hydrogen budget, and in how a hydrogen-fueled transportation infrastructure could someday impact the environment.

Eiler and Caltech professor of planetary science Yuk Yung, along with lead author Tracey Tromp and several others, issued a paper on the hydrogen economy in June that quickly became one of the most controversial Caltech research projects in recent memory. Using mathematical modeling, the group showed that the inevitable leakage of hydrogen in a hydrogen-fueled economy could impact the ozone layer.

More recently Eiler and another group of collaborators, using samples returned by the ER-2 and subjected to mass spectrometry, have reported further details on how hydrogen could impact the environment. Specifically, they capitalized on the ER-2's high-altitude capabilities to collect air samples in the only region of Earth where it's simple and straightforward to infer the precise cascade of reactions involving hydrogen and methane.

Though it seems contradictory, the Eiler team's conclusion from stratospheric research was that the hydrogen-eating microbes in soils can take care of at least some of the hydrogen leaked by human activity.

"This study was made possible by data collection," Eiler says. "So it's still the case in atmospheric chemistry that there's no substitute for going up and getting samples."

Writer: 
RT

Old Caltech Telescope Yields New Science

PASADENA, Calif.—Meet Sarah Horst, throwback. The planetary science major, a senior at the California Institute of Technology, spent six months engaged in a bit of old-time telescope observing. The work led to some breakthrough research about Saturn's moon Titan, and indirectly led to funding for a new telescope at Caltech's Palomar Observatory.

Horst, 21, was looking for a part-time job in the summer of her sophomore year, and was hired by Mike Brown, an associate professor of planetary astronomy. Brown and graduate student Antonin Bouchez knew there had been previous evidence of "weather" on Titan in the form of clouds. But that evidence was elusive. "Someone would look one year and think they saw a cloud, then look the next year and not see a cloud," explains Brown. "What we were after was a way to look at Titan, night after night after night."

The problem, of course, is that all of the large telescopes like Keck are incredibly busy, booked by astronomers from around the world who use the precious time for their own line of research. So Brown and Bouchez knew that obtaining large amounts of time for a single project like this was not going to happen.

The solution: use an old teaching telescope--the hoary 14-inch Celestron telescope located on top of Caltech's Robinson Lab--to do cutting-edge science that couldn't be done at the largest telescopes in the world, in Hawaii.

Though the Robinson telescope is weak and the light pollution from Pasadena strong--which prevents imaging the clouds themselves--the light reflecting from clouds could still be measured (the more clouds, the more light that's reflected). All that was needed was someone who could come night after night and take multiple images.

Enter Horst, the self-described "lowly undergraduate." For months, Horst spent her evenings in Robinson. "I did the setup, which involved a wheel that contained four light filters," she explains. Each filter would capture a different wavelength of light. Software switched the filters; all she had to do, says Horst, was to orient and focus the telescope.

Now, modern-day astronomers have it relatively easy when using their telescope time. Sure they're up all night, but they sit on a comfortable chair in a warm room, hot coffee close at hand, and do their observing through a computer monitor that's connected to a telescope.

Not Horst. She did it the old way, in discomfort. "A lot of times in December or January I'd go in late at night, and it would be freezing," says Horst, who runs the 800-meter for the Caltech track team. "I'd wrap myself up in blankets." Horst spent hours in the dark, since the old dome itself had to be dark. "I couldn't even study," she says, "although sometimes I tried to read by the light of the moon."

A software program written by Bouchez plotted the light intensity from each image on a graph. When a particular image looked promising, Bouchez contacted Brown. As a frequent user of the Keck Observatory, which is powerful enough to take an image of the actual clouds, Brown was able to call colleagues who were using the Keck that night and quickly convince them that something exciting was going on. "It only took about ten minutes to get a quick image of Titan," says Brown. "The funny part was having to explain to them that we knew there were clouds because we had seen the evidence in our 14-inch telescope in the middle of the L.A. basin."
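The monitoring logic that Bouchez's software implemented can be caricatured as a simple brightness-excess test: flag any night whose integrated brightness sits well above the quiet-time baseline. This is a hypothetical sketch, not the actual pipeline:

```python
import statistics

def flag_cloud_nights(brightness, n_sigma=2.0):
    """Flag nights whose measured brightness exceeds the baseline.

    A cloudier Titan reflects more sunlight, so a night whose
    brightness sits n_sigma standard deviations above the median of
    the monitoring run is a candidate cloud event worth follow-up
    on a large telescope. (Simplified, illustrative logic only.)
    """
    base = statistics.median(brightness)
    sigma = statistics.pstdev(brightness)
    return [i for i, b in enumerate(brightness) if b > base + n_sigma * sigma]

# Six quiet nights and one bright one (made-up relative fluxes):
nights = [1.00, 1.01, 0.99, 1.02, 1.00, 1.25, 1.01]
print(flag_cloud_nights(nights))  # [5]
```

The payoff of this kind of nightly screening is exactly the scenario described above: a flagged night triggers a quick confirming look with a far more powerful instrument.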

The result was "Direct Detection of Variable Tropospheric Clouds Near Titan's South Pole," which appeared in the December 19 issue of the journal Nature. It included this acknowledgement: "We thank . . . S. Horst for many nights of monitoring Titan in the cold."

The paper has helped Brown obtain the funding to build a new custom-built 24-inch telescope. It will be placed in its own building atop Palomar Mountain, on the grounds of Caltech's existing observatory. The telescope will also be robotic; Brown will control it from Pasadena via a computer program he has written.

He'll use it for further observation of Titan and for other imaging, as well, such as fast-moving comets. "Most astronomy is big," notes Brown; "big scopes looking at big, unchanging things, like galaxies. I like to look at changing things, which led to this telescope."

What really made this project unique, though, according to Brown, is the Robinson scope. "Sarah was able to do something with this little telescope in Pasadena that no one in the world, on any of their larger professional telescopes on high, dark mountaintops, had been able to do," he says. "Sometimes a good idea and stubbornness are better than the largest telescope in town."

For Horst, while the work wasn't intellectually challenging--"a trained monkey could have done it," she says with a laugh--it was, nonetheless, "a cool project. Everything here is so theoretical and tedious, and so classroom oriented. So in that way it was a nice experience and reminded me what real science was about."

Writer: 
MW

Atmospheric researchers present new findings on the natural hydrogen cycle

Two months after a pivotal study on the potential impact of a future hydrogen economy on the environment, further evidence is emerging on what would happen to new quantities of hydrogen released into the atmosphere through human activity.

In an article appearing in the August 21 issue of the journal Nature, a group of researchers from the California Institute of Technology and other institutions reports results of a study of the atmospheric chemical reactions that produce and destroy molecular hydrogen in the stratosphere. Based on these results, the report concludes that most of the hydrogen eliminated from the atmosphere goes into the ground, and therefore that the scientific community will need to turn its focus toward soil destruction of hydrogen in order to accurately predict whether human emissions will accumulate in the air.

The researchers reached this conclusion through careful measurement of the abundance of a rare isotope of hydrogen known as deuterium. It has long been known that atmospheric molecular hydrogen is anomalously rich in deuterium, but it was unclear why. The only reasonable explanation seemed to be that atmospheric hydrogen is mostly destroyed by chemical reactions in the air, and that those reactions are relatively slow for deuterium-rich hydrogen, so it accumulates like salt in an evaporating pan of water.

If correct, this would mean that oxidizing atmospheric trace gases control the natural hydrogen cycle and that soils are relatively unimportant. The Caltech group discovered that one of the main natural sources of atmospheric hydrogen--the breakdown of methane--is actually responsible for the atmosphere's enrichment in deuterium. This result implies that reactions with atmospheric oxidants are relatively unimportant to the hydrogen cycle, and that uptake by soils is really in the driver's seat.

This issue is important because of the potential for a future hydrogen economy to leak hydrogen into the air--a scenario explored in the earlier study published in Science. Such leaks of hydrogen seem likely at present, and if they occur must either be mitigated by some natural processes that destroy hydrogen, or else the leaked hydrogen will accumulate in the atmosphere. If the latter, this hydrogen would inevitably find its way into the stratosphere and participate in chemical reactions that damage the ozone layer. The key to predicting how this chain of events will unfold is knowing what natural processes destroy hydrogen, and to what extent they might counteract increases in human emissions.

Hydrogen is a highly reactive element, but the question of when and where it reacts, and under what circumstances, is difficult to know precisely. This question is simplified in the stratosphere, where it's easier to single out and understand specific reactions. According to John Eiler, an assistant professor of geochemistry at the California Institute of Technology and an author of both the new paper and the June paper in Science, the new data come from air samples gathered in the stratosphere by one of the high-flying ER-2 planes operated by the NASA Dryden Flight Research Center in the Mojave Desert.

The ER-2, a reconfigured U-2 spy plane, is part of NASA's Airborne Research Program and is crucial to atmospheric chemists interested in directly collecting stratospheric samples for air-quality research. The air samples collected by the ER-2 in various locales show an extreme enrichment of deuterium in stratospheric hydrogen.

"We wanted to look at hydrogen in the stratosphere because it's easy to study the production of hydrogen from methane separate from other influences," Eiler explains. "It may seem odd to go to the stratosphere to understand what's happening in the ground, but this was the best way to get a global perspective on the importance of soils to the hydrogen cycle."

With precise information on the deuterium content of hydrogen formed from methane, the researchers were able to calculate that the soil uptake of hydrogen is as high as 80 percent. It is suspected that this hydrogen is used by soil-living microbes to carry on their biological functions, although the details of this process are poorly understood and have been the subject of only a few previous studies.

It seems likely that the hydrogen taken up by soils is relatively free of environmental consequences, but the question still remains how much more hydrogen the soil can consume. If future use of hydrogen in transportation results in a significant amount of leakage, then soil uptake must increase dramatically or it will be inadequate to cleanse the released hydrogen from the atmosphere, Eiler says.

"An analogy would be the discovery that trees and other plants get rid of some of the carbon dioxide that cars emit, but by no means all of it," he says. "So the question as we look toward a future hydrogen economy is whether the microbes will be able to eat the hydrogen fast enough."

The research was funded in part by the National Science Foundation. Bruce Doddridge, program director in the NSF's division of atmospheric science, said, "This carefully conducted research investigating the natural chemistry of sources and sinks affecting the abundance of molecular hydrogen in the troposphere results in the most accurate information to date, and appears to account for the tropospheric deuterium excess previously observed.

"A more accurate molecular hydrogen budget may have important implications as global fuel technology shifts its focus from fossil fuels to other sources," Doddridge added.

The lead author of the paper is Thom Rahn, a former postdoctoral scholar of Eiler's who is now affiliated with Los Alamos National Laboratory. The other authors are Paul Wennberg, a professor of atmospheric chemistry and environmental engineering science at Caltech; Kristie A. Boering and Michael McCarthy, both of UC Berkeley; Stanley Tyler of UC Irvine; and Sue Schauffler of the National Center for Atmospheric Research in Boulder, Colorado.

In addition to the NSF, other supporters of the research were the Davidow Fund and General Motors Corp., the David and Lucile Packard Foundation, the NASA Upper Atmosphere Research Program, and the National Center for Atmospheric Research.

Writer: 
Robert Tindol

Gravity Variations Predict Earthquake Behavior

PASADENA, Calif. — In trying to predict where earthquakes will occur, few people would think to look at Earth's gravity field. What does the force that causes objects to fall to the ground and the moon to orbit around the earth have to do with the unpredictable ground trembling of an earthquake?

Now, researchers at the California Institute of Technology have found that within subduction zones, the regions where one of the earth's plates slips below another, areas where the attraction due to gravity is relatively high are less likely to experience large earthquakes than areas where the gravitational force is relatively low.

The study, by Caltech graduate student Teh-Ru Alex Song and Associate Professor of Geophysics Mark Simons, will appear in the August 1 issue of the journal Science.

Until now, says Simons, researchers studying earthquake behavior generally took one of four approaches: 1) analyzing seismograms generated by earthquakes, 2) studying frictional properties of various types of rock in the laboratory or in the field, 3) measuring the slow accumulation of strain between earthquakes with survey techniques, and 4) building large-scale dynamic models of earthquakes and tectonics.

Instead of using one of these approaches, Song and Simons considered variations in the gravity field as a predictor of seismic behavior.

A gravity anomaly occurs when gravity is stronger or weaker than the regional average. For example, a mountain or an especially dense rock would tend to increase the nearby gravity field, creating a positive anomaly. Likewise, a valley would tend to create a negative anomaly.

Song and Simons examined existing data from satellite-derived observations of the gravity field in subduction zones. Comparing variations in gravity along the trenches with earthquake data from two different catalogs going back 100 years, the team found that, within a given subduction zone, areas with negative gravity anomalies correlated with increased large earthquake activity. Areas with relatively high gravity anomalies experienced fewer large earthquakes.

In addition, most of the energy released in earthquakes was in areas of low gravity. The team looked at subduction zone earthquakes with magnitude greater than 7.5 since 1976. They found that of the total energy released in those earthquakes, 44 percent came from regions with the most strongly negative gravity anomalies, though these regions made up only 14 percent of the total area.
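
Those two percentages imply how strongly the energy release is concentrated in the low-gravity regions; the factor itself is not stated in the article, but it follows directly from the quoted figures. A minimal sketch:

```python
# Concentration of seismic energy release, using the article's figures:
# 44 percent of the energy came from only 14 percent of the area.
energy_fraction = 0.44  # share of seismic energy released
area_fraction = 0.14    # share of total subduction-zone area

concentration = energy_fraction / area_fraction
print(f"Energy release per unit area is ~{concentration:.1f}x the average")
# prints: Energy release per unit area is ~3.1x the average
```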

Song and Simons also compared the location of large earthquakes with the topography of the subduction zones, finding that areas of low topography (such as basins) also corresponded well to areas with low gravity and high seismic activity.

So why would gravity and topography be related to seismic activity?

One possible link is via the frictional behavior of the fault. When two plates rub up against each other, friction between the plates makes it harder for them to slide. If the friction is great enough, the plates will stick. Over long periods of time, as the stuck plates push against each other, they may deform, creating spatial variations in topography and gravity.

In addition to deforming the plates, friction causes stress to build up. When too much stress builds up, the plates will suddenly jump, releasing the strain in the sometimes violent shaking of an earthquake.

If there were no friction between the plates, they would just slide right by each other smoothly, without bending or building up the strain that eventually results in earthquakes.

So in subduction zones, areas under high stress are likely to have greater gravity and topography anomalies, and are also more likely to have earthquakes.

Though this account provides a basic explanation for a rather complicated and unintuitive phenomenon, it is a simplified view, and Song and Simons would like to do more work to refine the details of the relation between the gravity field and large earthquakes.

The gravity anomalies the team considered take a long time to build up, and change very little over timescales up to at least 1 million years. Short-term events such as earthquakes do change the gravity field as the earth's plates suddenly move, but those variations are small compared with the long-term anomalies, which are on the order of 4 x 10^-4 m/s^2.
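
For scale, that anomaly magnitude can be compared with mean surface gravity; the 9.81 m/s^2 value is a standard constant rather than a figure from the study, so this is only a rough illustration:

```python
# How large is a typical long-term gravity anomaly relative to surface
# gravity? The anomaly size is the article's figure; g is a standard value.
G_SURFACE = 9.81  # m/s^2, mean gravitational acceleration at Earth's surface
ANOMALY = 4e-4    # m/s^2, order of magnitude of the long-term anomalies

fraction = ANOMALY / G_SURFACE
print(f"An anomaly of this size is ~{fraction:.1e} of surface gravity")
```

In other words, the anomalies in question are a few parts in a hundred thousand of the total field, which is why sensitive satellite measurements are needed to map them.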

Because topography and gravity variations persist over periods of time much longer than the typical time between earthquakes, 100 to 1,000 years, large earthquakes should be consistently absent from areas with large positive gravity anomalies, say Song and Simons.

"This study makes a strong connection between long-term tectonic behavior and short-term seismic activity," says Simons, "and thereby provides a class of new observations for understanding earthquake dynamics."

Though no one can tell when or where the next major earthquake will occur, Global Positioning System measurements can show where strain is accumulating. Simons hopes to use such measurements to test the prediction that areas with high gravity will have low strain, and vice versa. The team points out that although large earthquakes occur where gravity and topography are low, there are low-gravity areas in subduction zones with no seismic activity. Furthermore, the research concentrates on subduction zones, and so makes no predictions about other types of faults.

Nonetheless, within a subduction zone known to be earthquake-prone, Simons believes earthquakes are more likely to occur in low-gravity zones. High gravity areas do tend to have few earthquakes. So while the research does not offer a way to predict where earthquakes will happen, it can predict where they won't happen, says Simons.

MEDIA CONTACT: Ernie Tretkoff (626) 395-8733 tretkoff@caltech.edu

Visit the Caltech media relations web site: http://pr.caltech.edu/media

Writer: 
ET

Hydrogen economy might impact Earth's stratosphere, study shows

According to conventional wisdom, hydrogen-fueled cars are environmentally friendly because they emit only water vapor -- a naturally abundant atmospheric gas. But leakage of the hydrogen gas that can fuel such cars could cause problems for the upper atmosphere, new research shows.

In an article appearing this week in the journal Science, researchers from the California Institute of Technology report that the leaked hydrogen gas that would inevitably result from a hydrogen economy, if it accumulates, could indirectly cause as much as a 10-percent decrease in atmospheric ozone. The researchers are physics research scientist Tracey Tromp, Assistant Professor of Geochemistry John Eiler, planetary science professor Yuk Yung, planetary science research scientist Run-Lie Shia, and Jet Propulsion Laboratory scientist Mark Allen.

If hydrogen were to replace fossil fuel entirely, the researchers estimate that 60 to 120 trillion grams of hydrogen would be released each year into the atmosphere, assuming a 10-to-20-percent loss rate due to leakage. This is four to eight times as much hydrogen as is currently released into the atmosphere by human activity, and would result in doubling or tripling of inputs to the atmosphere from all sources, natural or human.
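
The arithmetic behind that range can be sketched as follows; the ~600-trillion-gram total consumption figure is an assumption inferred from the quoted numbers, not a value stated in the article:

```python
# Back-of-the-envelope check of the leakage estimate. A 10-20 percent loss
# rate yielding 60-120 Tg/yr of leaked hydrogen implies total consumption
# on the order of 600 Tg/yr (an inferred, hypothetical figure).
TOTAL_H2_USE_TG = 600.0  # assumed annual hydrogen consumption, trillion grams

def leaked(total_tg: float, loss_rate: float) -> float:
    """Hydrogen escaping to the atmosphere at a given fractional loss rate."""
    return total_tg * loss_rate

low = leaked(TOTAL_H2_USE_TG, 0.10)   # ~60 Tg/yr
high = leaked(TOTAL_H2_USE_TG, 0.20)  # ~120 Tg/yr
print(f"Leakage range: {low:.0f}-{high:.0f} Tg/yr")
```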

Because molecular hydrogen freely moves up and mixes with stratospheric air, the result would be the creation of additional water at high altitudes and, consequently, an increased moistening of the stratosphere. This in turn would result in cooling of the lower stratosphere and disturbance of ozone chemistry, which depends on a chain of chemical reactions involving hydrochloric acid and chlorine nitrate on water ice.

The estimates of potential damage to stratospheric ozone levels are based on an atmospheric modeling program that tests the various scenarios that might result, depending on how much hydrogen ends up in the stratosphere from all sources, both natural and anthropogenic.

Ideally, a hydrogen fuel-cell vehicle has no environmental impact. Energy is produced by combining hydrogen with oxygen pulled from the atmosphere, and the tailpipe emission is water. The hydrogen fuel could come from a number of sources (Iceland recently started pulling it out of the ground). Nuclear power could be used to generate the electricity needed to split water, and in principle that electricity could also be derived from renewable sources such as solar or wind power.

By comparison, the internal combustion engine uses fossil fuels and produces many pollutants, including soot, noxious nitrogen and sulfur gases, and the "greenhouse gas" carbon dioxide. While a hydrogen fuel-cell economy would almost certainly improve urban air quality, it has potential unintended consequences due to the inevitable leakage of hydrogen from cars, from hydrogen production facilities, and from the transportation of the fuel.

Uncertainty remains about the effects on the atmosphere because scientists still have a limited understanding of the hydrogen cycle. At present, it seems likely such emissions could accumulate in the air. Such a build-up would have several consequences, chief of which would be a moistening and cooling of the upper atmosphere and, indirectly, destruction of ozone.

In this respect, hydrogen would be similar to the chlorofluorocarbons (once the standard substance used for air conditioning and refrigeration), which were intended to be contained within their devices, but which in practice leaked into the atmosphere and attacked the stratospheric ozone layer.

The authors of the Science article say that the current situation is unique in that society has the opportunity to understand the potential environmental impact well ahead of the growth of a hydrogen economy. This contrasts with the cases of atmospheric carbon dioxide, methyl bromide, CFCs, and lead, all of which were released into the environment by humans long before their consequences were understood.

"We have an unprecedented opportunity this time to understand what we're getting into before we even switch to the new technology," says Tromp, the lead author. "It won't be like the case with the internal-combustion engine, when we started learning the effects of carbon dioxide decades later."

The question of whether or not hydrogen is bad for the environment hinges on whether the planet has the ability to consume excess anthropogenic hydrogen, explains Eiler. "This man-made hydrogen will either be absorbed in the soil -- a process that is still poorly understood but likely free of environmental consequences -- or react with other compounds in the atmosphere.

"The balance of these two processes will be key to the outcome," says Eiler. "If soils dominate, a hydrogen economy might have little effect on the environment. But if the atmosphere is the big player, the stratospheric cooling and destruction of ozone modeled in this Science paper are more likely to occur.

"Determining which of these two processes dominates should be a solvable problem," states Eiler, whose research group is currently exploring the natural budget of hydrogen using new isotopic techniques.

"Understanding the effects of hydrogen on the environment now should help direct the technologies that will be the basis of a hydrogen economy," Tromp adds. "If hydrogen emissions present an environmental hazard, then recognizing that hazard now can help guide investments in technologies to favor designs that minimize leakage.

"On the other hand, if hydrogen is shown to be environmentally friendly in every respect, then designers could pursue the most cost-effective technologies and potentially save billions in needless safeguards."

"Either way, it's good for society that we have an emission scenario at this stage," says Eiler. "In past cases -- with chlorofluorocarbons, nitrogen oxides, methane, methyl bromide, carbon dioxide, and carbon monoxide -- we always found out that there were problems long after they were in common use. But this time, we have a unique opportunity to study the anthropogenic implications of a new technology before it's even a problem."

If hydrogen indeed turns out to be bad for the ozone layer, should the transition to hydrogen-fueled cars be abandoned? Not necessarily, Tromp and Eiler claim.

"If it's the best way to provide a new energy source for our needs, then we can, and probably should, do it," Tromp says.

Eiler adds, "If we had had perfect foreknowledge of the effects of carbon dioxide a hundred years ago, would we have abandoned the internal combustion engine? Probably not. But we might have begun the process of controlling CO2 emissions earlier."

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Caltech planetary scientist has "modest proposal" for sending probe to Earth's core

PASADENA, Calif. - Dave Stevenson has spent his career working on "swing-by" missions to the other planets. Now he has a modest proposal he'd like to swing by some government agency with a few billion dollars in available funding.

According to Stevenson's calculations, it should be possible to send a probe all the way to Earth's core by combining several proven technologies with a few well-grounded scientific assumptions about the workings of the planet. The probe would sink straight to the core in an envelope of molten iron, sending temperature readings, compositional information, and other data along the way.

"We've spent more than $10 billion in unmanned missions to the planets," says Stevenson, who is the Van Osdol Professor of Planetary Science at the California Institute of Technology. "But we've only been down about 10 kilometers into our own planet."

The benefits to science would be significant, Stevenson says, because so little has been directly observed about the inner workings of the planet. Scientists do not know, for example, the exact composition or even the temperature of the core, and what they do know is based on inferences about seismic data accumulated during earthquakes.

Stevenson says his proposal should be attractive to the scientific community because it is of the same scale, price-wise, as planetary exploration. To date, NASA has flown unmanned missions past all the planets except Pluto (if indeed Pluto is a planet at all), has made a few highly successful soft landings on Mars, has probed the clouds of Jupiter, is getting ready to probe the atmosphere of Titan, and has sent four spacecraft on trajectories that will carry them out of the solar system. Sending something into the earth, Stevenson believes, will have comparable payoffs in the quest for knowledge.

"When we fly to other worlds, we are often surprised by what we find, and I think the same will be the case if we go down."

Stevenson's plan calls for a crack to be opened in the earth, perhaps with some sort of explosion--probably a nuclear bomb. According to his figures, the crack will need to be several hundred meters in length and depth, and about 30 centimeters wide, to accommodate a volume of about 100 thousand to several million tons of molten iron.
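
As a rough sanity check, a crack of the quoted dimensions filled with molten iron does land in the quoted mass range. The density used here (~7,000 kg/m^3 for molten iron) is an assumed round value, not a figure from the article:

```python
# Sanity check: mass of molten iron filling a rectangular crack.
# The crack dimensions are the article's; the density is an assumed value.
DENSITY_IRON = 7000.0  # kg/m^3, approximate density of molten iron

def crack_iron_mass_tonnes(length_m: float, depth_m: float, width_m: float) -> float:
    """Mass of iron (metric tons) needed to fill a length x depth x width crack."""
    return length_m * depth_m * width_m * DENSITY_IRON / 1000.0

small = crack_iron_mass_tonnes(300.0, 300.0, 0.3)    # a few hundred thousand tons
large = crack_iron_mass_tonnes(1000.0, 1000.0, 0.3)  # a couple of million tons
print(f"{small:,.0f} to {large:,.0f} tons")
```

With "several hundred meters" taken as 300 m to 1 km, the mass works out to roughly 190,000 to 2.1 million tons, consistent with the range Stevenson quotes.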

The instant the crack opens, the entire volume of iron will be dropped in, completely filling the open space. Through the sheer force of its weight, the iron will create a continuing crack that will open all the way to the planet's core 3,000 kilometers below. Anything on a smaller scale may not work; anything larger will be even more expensive, so Stevenson thinks a crack of those dimensions is about right.

"Once you set that condition up, the crack is self-perpetuating," he explains; "it's fundamentally different from drilling, where it gets harder and harder-and eventually futile-the farther you go down."

The iron will continue to fall due to gravity because it is about twice the density of the surrounding material. Riding along in the mass of liquid iron will be one or more probes made of a material robust enough to withstand the heat and pressure. The probe will perhaps be the size of a grapefruit but definitely small enough to ride easily inside the 30-centimeter crack without getting wedged.

Inside the probe will be instrumentation for data collection, which will be relayed through low-intensity mechanical waves of some sort--probably through deformations of the ball itself to send out a sort of "Morse code" of data. Because radio waves cannot propagate through Earth, this is the only way to get the data transferred.

The probe will likely operate with about 10 watts of power, and it may even be possible to replenish energy and dispense with an on-board battery by harnessing mechanical energy from the force of the fall, just as electricity can be generated from falling water.

Such a low power rating will not make it possible to generate very strong shock waves for data transmission, but strong waves may not be necessary. In fact, Stevenson further suggests that the Laser Interferometer Gravitational-Wave Observatory (LIGO) might be recalibrated in its downtime to track the falling ball.

Based on the rate the molten iron would fall due to gravity, the ball would move downward into Earth at roughly human running pace (about 10 miles per hour), meaning that the entire mission would last a few weeks.
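
That timescale follows directly from the quoted speed and depth; a back-of-the-envelope sketch (at a steady 10 mph the trip takes on the order of a week, and slower average speeds stretch the estimate toward a few weeks):

```python
# Descent time for the iron-borne probe at the article's quoted figures.
DEPTH_KM = 3000.0       # distance to the core, as quoted
SPEED_KMH = 10 * 1.609  # ~10 mph converted to km/h

hours = DEPTH_KM / SPEED_KMH
days = hours / 24
print(f"~{days:.0f} days at a steady 10 mph")
```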

All this may sound to some like science fiction, but Stevenson says each of the principles involved is based on sound knowledge of crack propagation, fluid dynamics, mechanical-wave propagation, and "stress states." If these things didn't already work in nature, we would have no volcanoes, our bathroom plumbing would perform poorly, and we would have little to fear from a pebble shattering our windshields.

"The biggest question is how to initially open the crack," says Stevenson. "Also, there's the technological challenge of having a probe that actually does what it's supposed to do."

Stevenson says he borrowed part of his paper's title, "A Modest Proposal," from Jonathan Swift's famous essay of the same name, to have a bit of fun while still issuing a serious scientific proposal. The paper appears in this week's issue of the journal Nature. Swift's essay suggests that Ireland's terrible economic circumstances could be solved by people eating their own children, thereby allowing England to continue pillaging the country's resources for its own one-sided benefit.

"My proposal is not as outrageous as suggesting one should eat his own children, but still combines a serious proposal with some levity," Stevenson says. "Ninety-five percent of the scientists who read the article may laugh at an enjoyable read, but if the other five percent seriously consider the goal of probing Earth's core, then I'll be happy."

"The biggest question should not be the cost, but whether we should pursue the goal of exploring Earth's interior," he says. "That said, I'd suggest we do it if we can keep the cost under $10 billion."

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Three Caltech Faculty Named to American Academy of Arts and Sciences

PASADENA, Calif. — The American Academy of Arts and Sciences has elected three California Institute of Technology faculty members as academy fellows. They are Fred C. Anson, Elizabeth Gilloon Professor of Chemistry, Emeritus; Joseph L. Kirschvink, professor of geobiology; and Colin F. Camerer, Rea A. and Lela G. Axline Professor of Business Economics.

The 2003 class of 187 fellows and 29 foreign honorary members includes four college presidents, three Nobel laureates, and four Pulitzer Prize winners.

Among this year's new fellows and foreign honorary members are Kofi Annan, Secretary-General of the United Nations; journalist Walter Cronkite; philanthropist William H. Gates, Sr., co-chair of the Bill and Melinda Gates Foundation; novelist Michael Cunningham; recording industry pioneer Ray Dolby; artist Cindy Sherman; and Nobel Prize-winning physicist Donald Glaser.

"It gives me great pleasure to welcome these outstanding and influential individuals to the nation's oldest and most illustrious learned society. Election to the American Academy is an honor that acknowledges the best of all scholarly fields and professions. Newly elected fellows are selected through a highly competitive process that recognizes those who have made preeminent contributions to their disciplines," said academy president Patricia Meyer Spacks.

Anson has carried out pioneering work on the electrochemistry of polymers, on the catalysis of electrode reactions, and on electrochemical reactions that involve ultrathin coating of molecules on electrode surfaces.

Kirschvink, who has been honored by students for his excellence in teaching, studies how biological evolution has influenced, and has been influenced by, major events on the surface of the earth. His most significant contributions include the "snowball" earth theory—the theory that the entire Earth may have actually frozen over several times in its history, possibly stimulating evolution. Another original concept concerns the Cambrian evolutionary explosion that he believes may have been precipitated in part by the earth's rotational axis having moved to the equator in a geologically short interval of time.

Camerer's research in experimental and behavioral economics integrates psychology with economics to inform decision science and game theory. His research uses economics experiments and field studies to understand how people behave when making decisions. Such research is helpful in predicting economic trends and in understanding social policy. Poverty, war, cross-cultural interactions--most social issues are affected by decision psychology.

The total number of Caltech faculty named to the academy is now 82.

The academy was founded in 1780 by John Adams, James Bowdoin, John Hancock, and other scholar-patriots "to cultivate every art and science which may tend to advance the interest, honor, dignity, and happiness of a free, independent, and virtuous people." The academy has elected as fellows and foreign honorary members the finest minds and most influential leaders from each generation, including George Washington and Ben Franklin in the eighteenth century, Daniel Webster and Ralph Waldo Emerson in the nineteenth, and Albert Einstein and Winston Churchill in the twentieth. The current membership includes more than 150 Nobel laureates and 50 Pulitzer Prize winners. Drawing on the wide-ranging expertise of its membership, the academy conducts thoughtful, innovative, non-partisan studies on international security, social policy, education, and the humanities.

A full list of new members is available on the Academy website at http://www.amacad.org/news/new2003.htm.

The academy will welcome this year's new fellows and foreign honorary members at the annual induction ceremony at the academy's headquarters in Cambridge, Mass., in October.

MEDIA CONTACT: Jill Perry, Media Relations Director (626) 395-3226 jperry@caltech.edu


Writer: 
jp