Zombie Behaviors Are Part of Everyday Life, According to Neurobiologists

PASADENA, Calif.--When you're close to that woman you love this Valentine's Day, her fragrance may cause you to say to yourself, "Hmmm, Chanel No. 5," especially if you're the suave, sophisticated kind. Or if you're more of a missing link, you may even say to yourself, "Me want woman." In either case, you're exhibiting a zombie behavior, according to the two scientists who pioneered the scientific study of consciousness.

Longtime collaborators Christof Koch and Francis Crick (of DNA helix fame) think that "zombie agents"--that is, routine behaviors that we perform constantly without even thinking--are such a central facet of human consciousness that they deserve serious scientific attention. In a new book titled The Quest for Consciousness: A Neurobiological Approach, Koch writes that interest in the subject of zombies has nothing to do with fiction, much less the supernatural. Crick, who for the last 13 years has collaborated with Koch on the study of consciousness, wrote the foreword of the book.

The existence of zombie agents highlights the fact that much of what goes on in our heads escapes awareness. Only a subset of brain activity gives rise to conscious sensations, to conscious feelings. "What is the difference between neuronal activity associated with consciousness and activity that bypasses the conscious mind?" asks Koch, a professor at the California Institute of Technology and head of the Computation and Neural Systems program.

Zombie agents include everything from keeping the body balanced, to unconsciously estimating the steepness of a hill we are about to climb, to driving a car, riding a bike, and performing other routine yet complex actions. We humans couldn't function without zombie agents, whose key advantage is that reaction times are kept to a minimum. For example, if a pencil is rolling off the table, we are quite able to grab it in midair, and we do so by executing an extremely complicated set of mental operations. And zombie agents might also be involved, by way of smell, in how we choose our sexual partners.

"Zombie agents control your eyes, hands, feet, and posture, and rapidly transduce sensory input into stereotypical motor output," writes Koch. "They might even trigger aggressive or sexual behavior when getting a whiff of the right stuff.

"All, however, bypass consciousness," Koch adds. "This is the zombie in you."

Zombie actions are but one of a number of topics that Koch and Crick have investigated since they started working together on the question of the brain basis of consciousness. Much of the book concerns perceptual experiments in normal people, patients, monkeys, and mice that address the neuronal underpinnings of thoughts and actions.

As Crick points out in his foreword, consciousness is the major unsolved problem in biology. The Quest for Consciousness describes Koch and Crick's framework for coming to grips with the ancient mind-body problem. At the heart of their framework is discovering and characterizing the neuronal correlates of consciousness, the subtle, flickering patterns of brain activity that underlie each and every conscious experience.

The Quest for Consciousness: A Neurobiological Approach will be available in bookstores on February 27. For more information, see www.questforconsciousness.com. For review copies, contact Ben Roberts at Roberts & Company Publishers at (303) 221-3325, or send an e-mail to bwr@roberts-publishers.com.

Writer: 
Robert Tindol

Caltech Engineers Design a Revolutionary Radar Chip

PASADENA, Calif. -- Imagine driving down a twisty mountain road on a dark foggy night. Visibility is near-zero, yet you still can see clearly. Not through your windshield, but via an image on a screen in front of you.

Such a built-in radar system in our cars has long been in the domain of science fiction, as well as wishful thinking on the part of commuters. But such gadgets could become available in the very near future, thanks to the High Speed Integrated Circuits group at the California Institute of Technology.

The group is directed by Ali Hajimiri, an associate professor of electrical engineering. Hajimiri and his team have used revolutionary design techniques to build the world's first radar on a chip--specifically, they have implemented a novel antenna array system on a single, silicon chip.

Hajimiri notes, however, that calling it a "radar on a chip" is a bit misleading, because it's not just radar. Because he and his team essentially redesigned the chip from the ground up, the technology is versatile enough to be used for a wide range of applications.

The chip can, for example, serve as a wireless, high-frequency communications link, providing a low-cost replacement for the optical fibers that are currently used for ultrafast communications. Hajimiri's chip runs at 24 GHz (24 billion cycles per second), an extremely high speed that makes it possible to transfer data wirelessly at rates otherwise available only to the backbone of the Internet (the main network of connections that carry most of the traffic on the Internet).

Other possible uses:

* In cars, an array of these chips--one each in the front, the back, and each side--could provide a smart cruise control, one that wouldn't just keep the pedal to the metal, but would brake for a slowing vehicle ahead of you, avoid a car that's about to cut you off, or dodge an obstacle that suddenly appears in your path.

While there are other radar systems in development for cars, they consist of a large number of modules that use more exotic and expensive technologies than silicon. Hajimiri's chip could prove superior because of its fully integrated nature. That allows it to be manufactured at a substantially lower price, and makes the chip more robust in response to design variations and changes in the environment, such as heat and cold.

* The chip could serve as the brains inside a robot capable of vacuuming your house. While such appliances now exist, a vacuum using Hajimiri's chip as its brain would clean without constantly bumping into everything, have the sense to stay out of your way, and never suck up the family cat.

* A chip the size of a thumbnail could be placed on the roof of your house, replacing the bulky satellite dish, cable modem, or DSL line that now delivers your Internet connection. Your picture could be sharper, and your downloads lightning fast.

* A collection of these chips could form a network of sensors that would allow the military to monitor a sensitive area, eliminating the need for constant human patrolling and monitoring.

In short, says Hajimiri, the technology will be useful for numerous applications, limited only by an entrepreneur's imagination.

Perhaps the best thing of all is that these chips are cheap to manufacture, thanks to the use of silicon as the base material. "Traditional radar costs a couple of million dollars," says Hajimiri. "It's big and bulky, and has thousands of components. This integration in silicon allows us to make it smaller, cheaper, and much more widespread."

Silicon is the ubiquitous element used in numerous electronic devices, including the microprocessor inside our personal computers. It is the second most abundant element in the earth's crust (after oxygen), and components made of silicon are cheap to make and are widely manufactured. "In large volumes, it will only cost a few dollars to manufacture each of these radar chips," he says.

"The key is that we can integrate the whole system into one chip that can contain the entire high-frequency analog and high-speed signal processing at a low cost," says Hajimiri. "It's less powerful than the conventional radar used for aviation, but, since we've put it on a single, inexpensive chip, we can have a large number of them, so they can be ubiquitous."

Hajimiri's radar chip, with both a transmitter and a receiver (more accurately, a phased-array transceiver), works much like a conventional array of antennas. But unlike conventional radar, which involves the mechanical movement of hardware, this chip steers its beam electronically, pointing the signal in a given direction in space without any moving parts.

For communications systems, this ability to steer a beam will provide a clear signal and will clear up the airwaves. Cell phones, for example, radiate their signal omnidirectionally. That's what contributes to interference and clutter in the airwaves. "But with this technology you can focus the beams in the desired direction instead of radiating power all over the place and creating additional interference," says Hajimiri. "At the same time you're maintaining a much higher speed and quality of service."
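
To make the idea concrete, here is a minimal numerical sketch of phased-array beam steering in general--an illustration of the principle the article describes, not Hajimiri's actual circuit. The array size, element spacing, and angles below are arbitrary assumptions:

```python
import numpy as np

def steering_phases(n_elements, spacing_wavelengths, steer_angle_deg):
    """Per-element phase offsets (radians) that point a uniform linear
    array toward steer_angle_deg, measured from broadside."""
    theta = np.deg2rad(steer_angle_deg)
    n = np.arange(n_elements)
    return -2 * np.pi * spacing_wavelengths * n * np.sin(theta)

def array_factor(n_elements, spacing_wavelengths, steer_angle_deg, look_angle_deg):
    """Relative field strength radiated toward look_angle_deg (1.0 = maximum)."""
    look = np.deg2rad(look_angle_deg)
    n = np.arange(n_elements)
    geometric_phase = 2 * np.pi * spacing_wavelengths * n * np.sin(look)
    weights = np.exp(1j * steering_phases(n_elements, spacing_wavelengths, steer_angle_deg))
    return abs(np.sum(weights * np.exp(1j * geometric_phase))) / n_elements

# Illustrative only: an 8-element array with half-wavelength spacing,
# electronically steered 20 degrees off broadside.
for look_deg in (0, 10, 20, 30):
    print(look_deg, "deg:", round(array_factor(8, 0.5, 20, look_deg), 2))
# The response peaks at 20 degrees: the beam is pointed by adjusting
# phases alone, with no mechanical movement.
```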

Hajimiri's research interest is in designing integrated circuits for both wired and wireless high-speed communications systems. (An integrated circuit packs a complete electronic circuit--many interconnected components--onto a single chip.) Most silicon chips have a single circuit or signal path that a signal will follow; Hajimiri's innovation lies in multiple, parallel circuits on a chip that operate in harmony, dramatically increasing speed and overcoming the speed limitations inherent in silicon.

Hajimiri says there's already a lot of buzz about his chip, and he hasn't even presented a peer-reviewed paper yet. He'll do so next week at the International Solid-State Circuits Conference in San Francisco.

Note to editors: Color pictures of the tiny chip, juxtaposed against a penny, are available.

Media Contact: Mark Wheeler (626) 395-8733 wheel@caltech.edu

Visit the Caltech Media Relations website at http://pr.caltech.edu/media

Writer: 
MW

New Tool for Reading a Molecule's Blueprints

Just as astronomers image very large objects at great distances to understand what makes the universe tick, biologists and chemists need to image very small molecules to understand what makes living systems tick.

Now this quest will be enhanced by a $14,206,289 gift from the Gordon and Betty Moore Foundation to the California Institute of Technology, which will allow scientists at Caltech and Stanford University to collaborate on the building of a molecular observatory for structural molecular biology.

The observatory, to be built at Stanford, is a kind of ultrapowerful X-ray machine that will enable scientists from both institutions and around the world to "read" the blueprints of so-called macromolecules down at the level of atoms. Macromolecules, large molecules that include proteins and nucleic acids (DNA and RNA), carry out the fundamental cellular processes responsible for biological life. By understanding their makeup, scientists can glean how they interact with each other and their surroundings, and subsequently determine how they function. This knowledge, while of inherent importance to the study of biology, could also have significant practical applications, including the design of new drugs.

The foundation of this discovery process, says Doug Rees, a Caltech professor of chemistry, an investigator with the Howard Hughes Medical Institute, and one of the principal investigators of the project, is that "if you want to know how something works, you first need to know what it looks like.

"That's why we're excited about the molecular observatory," he says, "because it will allow us to push the boundary of structural biology to define the atomic-scale blueprints of macromolecules that are responsible for these critical cellular functions. This will include the technically demanding analyses of challenging biochemical targets, such as membrane proteins and large macromolecular assemblies, that can only be achieved using such a high-intensity, state of the art observatory."

The primary experimental approach for structural molecular biology is the use of X-ray beams, which can illuminate the three-dimensional structure of a molecule. It does this by blasting a beam of X rays through a crystallized sample of the molecule, then analyzing the pattern of the scattered beam. According to Keith Hodgson, a Stanford professor and director of the facility where the new observatory will be built, "synchrotrons are powerful tools for such work, because they generate extremely intense, focused X-ray radiation many millions of times brighter than available from a normal X-ray tube." Synchrotron radiation consists of the visible and invisible forms of light produced by electrons circulating in a storage ring at nearly the speed of light. Part of the spectrum of synchrotron radiation lies in the X-ray region; the radiation is used to investigate various forms of matter at the molecular and atomic scales, using approaches in part pioneered by Linus Pauling during his time as a faculty member at Caltech in the fifties and sixties.
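
For readers who want the geometry behind such measurements, the angles at which a crystal scatters X rays constructively are given by Bragg's law, n·λ = 2·d·sin(θ). The short sketch below is a generic illustration with made-up numbers, not parameters of the planned beam line:

```python
import math

def bragg_angle_deg(wavelength_angstrom, plane_spacing_angstrom, order=1):
    """Angle (degrees) at which X rays of the given wavelength diffract
    constructively from atomic planes with the given spacing: n*lam = 2*d*sin(theta)."""
    s = order * wavelength_angstrom / (2 * plane_spacing_angstrom)
    if s > 1:
        raise ValueError("no diffraction: n*lambda exceeds 2*d")
    return math.degrees(math.asin(s))

# Illustrative values only: a 1.0-angstrom X-ray beam and 3.4-angstrom plane spacing.
print(round(bragg_angle_deg(1.0, 3.4), 1), "degrees")  # about 8.5 degrees
```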

The new observatory, in technical terms called a beam line, will make use of the extremely bright X rays produced by a newly installed advanced electron accelerator located at the Stanford Synchrotron Radiation Laboratory (SSRL) on the Stanford Linear Accelerator Center (SLAC) site. The exceptional quality and brightness of the X-ray light from this new accelerator is perfectly suited to the study of complicated biological systems. The Foundation gift will be used by Caltech and the SSRL to design and construct a dedicated beam line at SSRL for structural molecular biology research. The X-ray source itself will be based upon a specialized device (called an in-vacuum undulator) that will produce the X rays used to illuminate the crystalline samples. Specially designed instruments will allow fully automated sample manipulation via a robotic system and integrated software controls. Internet-based tools will allow researchers at Caltech or remote locations to control the experiments and analyze data in real time. An on-campus center, to be built at Caltech, will facilitate access by faculty and students to the new beam line.

Knowing the molecular-scale blueprint of macromolecules will ultimately help answer such fundamental questions as "How are the chemical processes underlying life achieved and regulated in cells?" "How does a motor or pump that is a millionth of a centimeter in size work?" "How is information transmitted in living systems?"

"The construction of a high-intensity, state-of-the-art beam line at Stanford, along with an on-campus center here at Caltech to assist in these applications, will complement developments in cryo-electron microscopy that are underway on campus, also made possible through the support of the Gordon and Betty Moore Foundation," notes Caltech provost Steven Koonin.

The SSRL at Stanford is a national user facility operated by the U.S. Department of Energy's Office of Science. "I would like to thank the Gordon and Betty Moore Foundation for this generous gift [to Caltech]," said Dr. Raymond L. Orbach, director of the Office of Science, which oversees the SLAC and the SSRL. "This grant will advance the frontiers of biological science in very important and exciting ways. It also launches a dynamic collaboration between two great universities, Caltech and Stanford, at a Department of Energy research facility, thereby enhancing the investment of the federal government."

The Gordon and Betty Moore Foundation was established in November 2000, by Intel co-founder Gordon Moore and his wife Betty. The Foundation funds outcome-based projects that will measurably improve the quality of life by creating positive outcomes for future generations. Grantmaking is concentrated in initiatives that support the Foundation's principal areas of concern: environmental conservation, science, higher education, and the San Francisco Bay Area.

Writer: 
MW

Internet voting will require gradual, rational planning and experimentation, experts write

PASADENA, Calif.--Will Internet voting be a benefit to 21st-century democracy, or could it lead to additional election debacles like the one that occurred in 2000?

According to two experts on voting technology, the use of the Internet for voting can move forward in an orderly and effective way, but there should be experimentation and intelligent planning to ensure that it does so. Michael Alvarez, of the California Institute of Technology, and Thad E. Hall of the Century Foundation write in their new book that two upcoming experiments with Internet voting will provide unique data on how effective Internet voting can be in improving the election process.

On February 7, 2004, the Michigan Democratic Party will allow voters the option of voting over the Internet when casting their ballots in the party caucus. Then, for the presidential election on November 2, voters covered by the Uniformed and Overseas Citizens Absentee Voting Act who are registered in participating states will be able to vote over the Internet, thanks to the Federal Voting Assistance Program's Secure Electronic Registration and Voting Experiment (SERVE).

In their book Point, Click, and Vote: The Future of Internet Voting (Washington, D.C.: Brookings Institution Press, 2004), Alvarez and Hall outline a step-by-step approach to moving forward with Internet voting. Their approach focuses primarily on the need for experimentation. Hall notes, "The transition to the widespread use of Internet voting cannot, and should not, occur overnight. There must be a deliberate strategy--involving experimentation and research--that moves along a rational path to Internet voting."

Alvarez and Hall base their conclusions on four key points:

* There should be a series of well-planned, controlled experiments testing the feasibility of Internet voting, targeting either special populations of voters--such as military personnel, individuals living abroad, or people with disabilities--or special types of elections, such as low-turnout local elections.

* Internet security issues must be studied more effectively so that voters can have confidence in the integrity of online voting.

* Legal and regulatory changes must be studied to see what is needed to make Internet voting a reality, especially in the United States. Election law in America varies at the state, county, and local levels, and it is likely that laws in many states will have to be changed to make Internet voting possible.

* The digital divide must be narrowed, so that all voters will have a more equal opportunity to vote over the Internet. Competitive pricing and market forces will help to lower barriers to becoming a part of the online community.

As Alvarez notes, "There were Internet voting trials conducted in 2000, but no meaningful data were collected, making it impossible to know whether they were a success. The 2004 Internet voting trials provide an opportunity to collect the data necessary to understand how Internet voting impacts the electoral process."

Alvarez is a professor of political science at Caltech and is co-director of the Caltech/MIT Voting Technology Project. He was a lead author of Voting: What Is, What Could Be, which was published by the project after the 2000 elections. He has published several books on voting behavior and written numerous articles on the topic as well. In 2001, he testified before Congress about election reform and has appeared as an expert witness in election-related litigation. Alvarez has a Ph.D. in political science from Duke University.

Hall is a program officer with the Century Foundation. He served on the professional staff of the National Commission on Federal Election Reform, where he wrote an analysis of the administration of the 2001 Los Angeles mayoral election, "LA Story: The 2001 Mayoral Election," that was published by the Century Foundation. He has written about voting and election administration for both academic and popular audiences and has testified before Congress on the topic. His forthcoming book examining the policy process in Congress, Authorizing Policy, will be published later this year by the Ohio State University Press. He has a Ph.D. in political science and public policy from the University of Georgia.

Writer: 
Robert Tindol

Astronomers measure distance to star celebrated in ancient literature and legend

PASADENA, Calif.—The cluster of stars known as the Pleiades is one of the most recognizable objects in the night sky, and for millennia has been celebrated in literature and legend. Now, a group of astronomers has obtained a highly accurate distance to one of the stars of the Pleiades, known since antiquity as Atlas. The new results will be useful in the longstanding effort to improve the cosmic distance scale, as well as in research on the stellar life cycle.

In the January 22 issue of the journal Nature, astronomers from the California Institute of Technology and the Jet Propulsion Laboratory report the best-ever distance to the double star Atlas. The star, along with "wife" Pleione and their daughters, the "seven sisters," are the principal stars of the Pleiades visible to the unaided eye, although there are actually thousands of stars in the cluster. Atlas, according to the team's decade of careful interferometric measurements, is somewhere between 434 and 446 light-years from Earth.

The range of distance to the Pleiades cluster may seem somewhat imprecise, but in fact is accurate by astronomical standards. The traditional method of measuring distance is by noting the precise position of a star and then measuring its slight change in position when Earth itself has moved to the other side of the sun. This approach can also be used to find distance on Earth. If you carefully record the position of a tree an unknown distance away, move a specific distance to your side, and measure how far the tree has apparently "moved," it's possible to calculate the actual distance to the tree by using trigonometry.
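
The same trigonometry applies in both cases; only the baseline and the size of the angle change. A rough sketch, using illustrative numbers rather than the team's measurements:

```python
import math

# Tree analogy: step 10 meters sideways and the tree appears to shift by 2 degrees.
baseline_m = 10.0
shift_deg = 2.0
tree_distance_m = baseline_m / math.tan(math.radians(shift_deg))
print(round(tree_distance_m, 1), "meters")  # about 286 m

# Stellar case: with a 1-AU baseline, distance in parsecs = 1 / parallax in arcseconds.
# A star roughly 440 light-years away (about the distance reported here)
# corresponds to a parallax of only a few thousandths of an arcsecond.
LY_PER_PARSEC = 3.2616
distance_pc = 440 / LY_PER_PARSEC
parallax_arcsec = 1.0 / distance_pc
print(round(parallax_arcsec * 1000, 1), "milliarcseconds")  # about 7.4 mas
```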

However, this procedure gives only a rough estimate of the distance to even the nearest stars, because of the gigantic distances involved and the subtle changes in stellar position that must be measured. The team's new measurement also settles a controversy that arose when the European satellite Hipparcos reported a distance to the Pleiades so much smaller than the assumed value that the findings contradicted theoretical models of the life cycles of stars.

The contradiction stems from the relationship between luminosity and distance. A 100-watt light bulb one mile away looks exactly as bright as a 25-watt light bulb half a mile away. So to figure out the wattage of a distant light bulb, we have to know how far away it is. Similarly, to figure out the "wattage" (luminosity) of observed stars, we have to measure how far away they are. Theoretical models of the internal structure and nuclear reactions of stars of known mass also predict their luminosities. So the theory and measurements can be compared.
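
The light-bulb analogy is simply the inverse-square law: apparent brightness scales as luminosity divided by the square of the distance. A quick check of the numbers quoted above:

```python
def apparent_brightness(luminosity_watts, distance_miles):
    # Relative units only; the constant 4*pi factor cancels in the comparison.
    return luminosity_watts / distance_miles**2

print(apparent_brightness(100, 1.0))   # 100-watt bulb one mile away -> 100.0
print(apparent_brightness(25, 0.5))    # 25-watt bulb half a mile away -> 100.0
# Equal apparent brightness: without an independent distance measurement,
# the two cannot be told apart -- which is why distances pin down luminosities.
```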

However, the Hipparcos data implied a distance lower than that assumed from the theoretical models, suggesting either that the Hipparcos distance measurements themselves were off, or that there was something wrong with the models of the life cycles of stars. The new results show that the Hipparcos data were in error, and that the models of stellar evolution are indeed sound.

The new results come from careful observation of the orbit of Atlas and its companion--a binary relationship that wasn't conclusively demonstrated until 1974 and certainly was unknown to ancient watchers of the sky. Using data from the Mt. Wilson stellar interferometer (located next to the historic Mt. Wilson Observatory in the San Gabriel range) and the Palomar Testbed Interferometer at Caltech's Palomar Observatory in San Diego County, the team determined a precise orbit of the binary. Interferometry is an advanced technique that allows, among other things, for the "splitting" of two bodies that are so far away that they normally appear as a single blur, even in the biggest telescopes. Knowing the orbital period and combining it with orbital mechanics allowed the team to infer the physical separation of the two bodies and, with this information, to calculate the distance from the binary to Earth.

"For many months I had a hard time believing our distance estimate was 10 percent larger than that published by the Hipparcos team," said the lead author, Xiao Pei Pan of JPL. "Finally, after intensive rechecking, I became confident of our result."
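
The chain of reasoning can be caricatured in a few lines: the orbital period plus an estimate of the stars' masses gives the physical size of the orbit (Kepler's third law), interferometry gives its angular size, and the ratio of the two is the distance. The numbers below are placeholders chosen only to illustrate the arithmetic, not values from the Nature paper:

```python
# Hypothetical inputs, for illustration only.
period_years = 0.8                      # orbital period of the binary
total_mass_suns = 9.0                   # combined mass of the two stars
angular_semimajor_axis_arcsec = 0.013   # apparent size of the orbit from interferometry

# Kepler's third law in solar units: a^3 = M_total * P^2, with a in AU.
physical_semimajor_axis_au = (total_mass_suns * period_years**2) ** (1.0 / 3.0)

# 1 AU subtends 1 arcsecond at 1 parsec, so distance (pc) = a (AU) / a (arcsec).
distance_pc = physical_semimajor_axis_au / angular_semimajor_axis_arcsec
print(round(distance_pc * 3.2616), "light-years")
```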

Coauthor Shrinivas Kulkarni, MacArthur Professor of Astronomy and Planetary Science at Caltech, said, "Our distance estimate shows that all is well in the heavens. Stellar models used by astronomers are vindicated by our value."

"Interferometry is a young technique in astronomy and our result paves the way for wonderful returns from the Keck Interferometer and the anticipated Space Interferometry Mission that is expected to be launched in 2009," said coauthor Michael Shao of JPL. Shao is also the principal scientist for the Keck Interferometer and the Space Interferometry Mission.

The Palomar Testbed Interferometer was designed and built by a team of researchers from JPL led by Shao and JPL engineer Mark Colavita. Funded by NASA, the interferometer is located at the Palomar Observatory near the historic 200-inch Hale Telescope. The device served as an engineering testbed for the interferometer that now links the 10-meter Keck Telescopes atop Mauna Kea in Hawaii.

Writer: 
Robert Tindol

Caltech geophysicists gain new insights into Earth's core–mantle boundary

Earth's core–mantle boundary is a place none of us will ever go, but researchers using a special high-velocity cannon have produced results showing there may be molten rock at this interface, about 1,800 miles beneath the surface. Further, this molten rock may have rested peacefully at the core–mantle boundary for eons.

In a presentation at the fall meeting of the American Geophysical Union (AGU) today, California Institute of Technology geophysics professor Tom Ahrens reports new measurements of the density and temperature of magnesium silicate--the stuff found in Earth's interior--when it is subjected to the conditions that exist at the planet's core-mantle boundary.

The Caltech team did its work in the Institute's shock wave laboratory, where an 80-foot light-gas gun is specially prepared to fire one-ounce tantalum-faced plastic bullets at mineral samples at speeds up to 220,000 feet per second--about a hundred times faster than a bullet fired from a conventional rifle. The 30-ton apparatus uses compressed hydrogen as a propellant, and the resulting impact replicates the 1.35 million atmospheres of pressure and the 8,500 degrees Fahrenheit temperature that exist at the core–mantle boundary.

The measurements were conducted using natural, transparent, semiprecious gem crystals of enstatite from Sri Lanka, as well as synthetic glass of the same composition. Upon compression, these materials transform to a 30 percent denser structure called perovskite, which also dominates Earth's lower mantle at depths from 415 miles to the core–mantle boundary.

According to Ahrens, the results "have significant implications for understanding the core–mantle boundary region in the Earth's interior, the interface between rocky mantle and metallic core." The report represents the work of Ahrens and assistant professor of geology and geochemistry Paul Asimow, along with graduate students Joseph Akins and Shengnian Luo.

The researchers demonstrated by two independent experimental methods that the major mineral of Earth's lower mantle, magnesium silicate in the perovskite structure, melts at the pressure of the core–mantle boundary to produce a liquid whose density is greater than or equal to the mineral itself. This implies that a layer of partially molten mantle would be gravitationally stable over geologic times at the boundary, where seismologists have discovered anomalous features best explained by the presence of partial melt.

Two types of experiments were conducted: pressure-density experiments and shock temperature measurements. In the pressure-density experiments, the velocity of the projectile prior to impact and the velocity of the shock wave passing through the target after impact are measured using high-speed optical and x-ray photography. These measurements allow calculation of the pressure and density of the shocked target material. In shock temperature measurements, thermal emission from the shocked sample at visible and near-infrared wavelengths is monitored with a six-channel pyrometer, and the brightness and spectral shape are converted to temperature.
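
Converting those measured velocities into pressure and density relies on the standard Rankine-Hugoniot jump conditions (conservation of mass and momentum across the shock front). The sketch below shows the general relations with round placeholder numbers, not the team's data:

```python
ATM_PER_PASCAL = 1.0 / 101325.0

def hugoniot_state(rho0_kg_m3, shock_velocity_m_s, particle_velocity_m_s):
    """Pressure (Pa) and density (kg/m^3) behind a steady shock,
    from conservation of mass and momentum (Rankine-Hugoniot)."""
    pressure = rho0_kg_m3 * shock_velocity_m_s * particle_velocity_m_s
    density = rho0_kg_m3 * shock_velocity_m_s / (shock_velocity_m_s - particle_velocity_m_s)
    return pressure, density

# Placeholder values roughly in the range of such experiments:
# enstatite-like starting density 3,200 kg/m^3, shock velocity 11 km/s, particle velocity 5 km/s.
p, rho = hugoniot_state(3200.0, 11e3, 5e3)
print(round(p * ATM_PER_PASCAL / 1e6, 2), "million atmospheres")  # about 1.7
print(round(rho), "kg/m^3")  # roughly 5,900, far denser than the starting material
```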

In both types of experiments, the shock wave takes about one ten-millionth of a second to pass through the dime-sized sample, and the velocity and optical emission measurements must resolve this extremely short duration event.

The pressure-density experiments yielded a surprising result. When the glass starting material is subjected to increasingly strong shocks, densities are at first consistent with the perovskite structure, and then a transition is made to a melt phase at a pressure of 1.1 million atmospheres. As expected for most materials under ordinary conditions, the melt phase is less dense than the solid. Shock compression of the crystal starting material, however, follows a lower temperature path, and the transition from perovskite shock states to molten shock states does not occur until a pressure of 1.7 million atmospheres is reached. At this pressure, the liquid appears to be 3 to 4 percent denser than the mineral. Like water and ice at ordinary pressure and 32 °F, under these high-pressure conditions the perovskite solid would float and the liquid would sink.

Just as the negative volume change on the melting of water ice is associated with a negative slope of the melting curve in pressure-temperature space (which is why ice-skating works--the pressure of the skate blade transforms ice to water at a temperature below the ordinary freezing point), this result implies that the melting curve of perovskite should display a maximum temperature somewhere between 1.1 and 1.7 million atmospheres, and a negative slope at 1.7 million atmospheres.

This implication of the pressure-density results was tested using shock temperature measurements. In a separate series of experiments on the same starting materials, analysis of the emitted light constrained the melting temperature at 1.1 million atmospheres to about 9,900 °F. However, at the higher pressure of 1.7 million atmospheres, the melting point is 8,500 °F. This confirms that somewhere above 1.1 million atmospheres, the melting temperature begins to decrease with increasing pressure and the melting curve has a negative slope.
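
The link between a denser-than-solid melt and a downward-sloping melting curve is the Clausius-Clapeyron relation, stated here in general form:

```latex
\frac{dT_m}{dP} \;=\; \frac{\Delta V_{\text{melt}}}{\Delta S_{\text{melt}}},
\qquad \Delta S_{\text{melt}} > 0,\ \Delta V_{\text{melt}} < 0
\;\Longrightarrow\; \frac{dT_m}{dP} < 0 .
```

Because the entropy of melting is positive, a negative volume change on melting (liquid denser than solid) forces the slope of the melting curve to be negative, which is exactly the behavior inferred above 1.1 million atmospheres.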

Taking the results of both the pressure-density and shock temperature experiments together confirms that the molten material may be neutrally or slightly negatively buoyant at the pressure of the base of the mantle, which is 1.35 million atmospheres. Molten perovskite would, however, still be much less dense than the molten iron alloy of the core. If the mantle were to melt near the core–mantle boundary, the liquid silicate could be gravitationally stable in place or could drain downwards and pond immediately above the core–mantle boundary.

The work has been motivated by the 1995 discovery of ultralow velocity zones at the base of the Earth's mantle by Donald Helmberger, who is the Smits Family Professor of Geophysics and Planetary Science at Caltech, and Edward Garnero, who was then a Caltech graduate student and is now a professor at Arizona State University. These ultralow velocity zones (notably underneath the mid-Pacific region) appear to be 1-to-30-mile-thick layers of very low-seismic-velocity rock just above the interface between Earth's rocky mantle and the liquid core of the Earth, at a depth of 1,800 miles.

Helmberger and Garnero showed that, in this zone, seismic shear waves suffer a 30 percent decrease in velocity, whereas compressional wave speeds decrease by only 10 percent. This behavior is widely attributed to the presence of some molten material. Initially, many researchers assumed that this partially molten zone might represent atypical mantle compositions, such as a concentration of iron-bearing silicates or oxides with a lower melting point than ordinary mantle--about 7,200 °F at this pressure.

The new results, however, indicate that the melting temperature of normal mantle composition is low enough to explain melting in the ultralow velocity zones, and that this melt could coexist with residual magnesium silicate perovskite solids. Thus the new Caltech results indicate that no special composition is required to induce an ultralow velocity zone just above the core–mantle boundary or to allow it to remain there without draining away. The patchiness of the ultralow velocity zones suggests that Earth's lowermost mantle temperatures can be just hotter than, or just cooler than, the temperature that is required to initiate melting of normal mantle at a depth of 1,800 miles.

Writer: 
Robert Tindol

Caltech, SLAC, and LANL Set New Network Performance Marks

PHOENIX, Ariz.--Teams of physicists, computer scientists, and network engineers from Caltech, SLAC, LANL, CERN, Manchester, and Amsterdam joined forces at the Supercomputing 2003 (SC2003) Bandwidth Challenge and captured the Sustained Bandwidth Award for their demonstration of "Distributed particle physics analysis using ultra-high speed TCP on the Grid," with a record bandwidth mark of 23.2 gigabits per second (or 23.2 billion bits per second).

The demonstration served to preview future Grid systems on a global scale, where communities of hundreds to thousands of scientists around the world would be able to access, process, and analyze terabyte-sized data samples, drawn from data stores thousands of times larger. A new generation of Grid systems is being developed in the United States and Europe to meet these challenges, and to support the next generation of high-energy physics experiments that are now under construction at the CERN laboratory in Geneva.

The currently operating high-energy physics experiments at SLAC (Palo Alto, California), Fermilab (Batavia, Illinois), and BNL (Upton, New York) are facing qualitatively similar challenges.

During the Bandwidth Challenge, the teams used all three of the 10 gigabit/sec wide-area network links provided by Level 3 Communications and Nortel, connecting the SC2003 site to Los Angeles, and from there to the Abilene backbone of Internet2, the TeraGrid, and to Palo Alto using a link provided by CENIC and National LambdaRail. The bandwidth mark achieved was more than 500,000 times the speed of a typical modem connection (43 kilobits per second). The amount of TCP data transferred during the 48-minute-long demonstration was over 6.6 terabytes (or 6.6 trillion bytes). Typical single-stream host-to-host TCP data rates achieved were 3.5 to 5 gigabits per second, approaching the single-stream bandwidth records set last month by Caltech and CERN.
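
Those comparisons follow directly from the figures quoted above; a quick arithmetic check:

```python
record_bps = 23.2e9          # sustained bandwidth mark
modem_bps = 43e3             # typical modem connection
print(round(record_bps / modem_bps))          # roughly 540,000 times faster

bytes_transferred = 6.6e12   # 6.6 terabytes of TCP data
duration_s = 48 * 60         # 48-minute demonstration
avg_bps = bytes_transferred * 8 / duration_s
print(round(avg_bps / 1e9, 1), "Gbit/s average")  # about 18.3
```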

The data, generated from servers at the Caltech Center for Advanced Computing Research (CACR), SLAC, and LANL booths on the SC2003 showroom floor at Phoenix, a cluster at the StarLight facility in Chicago as well as the TeraGrid node at Caltech, was sent to sites in four countries (USA, Switzerland, Netherlands, and Japan) on three continents. Participating sites in the winning effort were the Caltech/DataTAG and Amsterdam/SURFnet PoPs at Chicago (hosted by StarLight), the Caltech PoP at Los Angeles (hosted by CENIC), the SLAC PoP at Palo Alto, the CERN and the DataTAG backbone in Geneva, the University of Amsterdam and SURFnet in Amsterdam, the AMPATH PoP at Florida International University in Miami, and the KEK Laboratory in Tokyo. Support was provided by DOE, NSF, PPARC, Cisco Systems, Level 3, Nortel, Hewlett-Packard, Intel, and Foundry Networks.

The team showed the ability to use both dedicated and shared IP backbones efficiently. Peak traffic on the Los Angeles-Phoenix circuit, dedicated to this experiment, reached almost 10 gigabits per second, utilizing more than 99 percent of the capacity. On the shared Abilene and TeraGrid circuits, the experiment was able to share the links fairly while using more than 85 percent of the available bandwidth. Snapshots of the maximum link utilizations during the demonstration showed 8.7 gigabits per second on the Abilene link and 9.6 gigabits per second on the TeraGrid link.

This performance could not have been achieved without new TCP implementations, because the widely deployed TCP Reno protocol performs poorly at gigabit-per-second speeds. The primary TCP algorithm used was the new FAST TCP stack developed at the Caltech Netlab. Additional streams were generated using HS-TCP, implemented at Manchester, and Scalable TCP.
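
Why standard TCP struggles at these speeds comes down to the bandwidth-delay product: an enormous number of packets must be kept in flight, and after a single loss TCP Reno halves its window and regrows it by only about one packet per round trip. The sketch below uses an assumed 10 gigabit/s link, a 100-millisecond round-trip time, and 1,500-byte packets purely for illustration; these are not measurements from the demonstration:

```python
link_bps = 10e9        # one 10 gigabit/s wide-area link (assumed)
rtt_s = 0.1            # ~100 ms transcontinental round-trip time (assumed)
packet_bits = 1500 * 8 # standard 1,500-byte packets

# Packets that must be in flight to fill the pipe (bandwidth-delay product).
window_packets = link_bps * rtt_s / packet_bits
print(round(window_packets), "packets in flight")          # roughly 83,000

# After one loss, Reno halves its window, then adds about 1 packet per round trip.
recovery_rtts = window_packets / 2
print(round(recovery_rtts * rtt_s / 60), "minutes to regain full speed")  # about 69
```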

Harvey Newman, professor of physics at Caltech, said: "This was a milestone in our development of wide-area networks and of global data-intensive systems for science. Within the past year we have learned how to use shared networks up to the 10 gigabit-per-second range effectively. In the next round we will combine these developments with the dynamic building of optical paths across countries and oceans. This paves the way for more flexible, efficient sharing of data by scientists in many countries, and could be a key factor enabling the next round of physics discoveries at the high-energy frontier. There are also profound implications for integrating information sharing and on-demand audiovisual collaboration in our daily lives, with a scale and quality previously unimaginable."

Les Cottrell, assistant director of SLAC's computer services, said: "This demonstrates that commonly available standard commercial hardware and software, from vendors like Cisco, can effectively and fairly use and fill up today's high-speed Internet backbones, and sustain TCP flows of many gigabits per second on both dedicated and shared intracountry and transcontinental networks. As 10 gigabit-per-second Ethernet equipment follows the price reduction curve experienced by earlier lower-speed standards, this will enable the next generation of high-speed networking and will catalyze new data-intensive applications in fields such as high-energy physics, astronomy, global weather, bioinformatics, seismology, medicine, disaster recovery, and media distribution."

Wu-chun (Wu) Feng, team leader of research and development in Advanced Network Technology in the Advanced Computing Laboratory at LANL, noted: "The SC2003 Bandwidth Challenge provided an ideal venue to demonstrate how a multi-institutional and multi-vendor team can quickly come together to achieve a feat that would otherwise be unimaginable today. Through the collaborative efforts of Caltech, SLAC, LANL, CERN, Manchester, and Amsterdam, we have once again pushed the envelope of high-performance networking. Moore's law move over!"

"Cisco was very pleased to help support the SC2003 show infrastructure, SCINET," said Bob Aiken, director of engineering for academic research and technology initiatives at Cisco. "In addition, we also had the opportunity to work directly with the high-energy physics (HEP) research community at SLAC and Caltech in the United States, SURFnet in the Netherlands, CERN in Geneva, and KEK in Japan, to once again establish a new record for advanced network infrastructure performance.

"In addition to supporting network research on the scaling of TCP, Cisco also provided a wide variety of solutions, including Cisco Systems ONS 15540, Cisco ONS 15808, Cisco Catalyst 6500 Series, Cisco 7600 Series, and Cisco 12400 Series at the HEP sites in order for them to attain their goal. The Cisco next-generation 10 GE line cards deployed at SC2003 were part of the interconnect between the HEP sites of Caltech, SLAC, CERN, KEK/Japan, SURFnet, StarLight, and the CENIC network."

"Level 3 was pleased to support the SC2003 conference again this year," said Paul Fernes, director of business development for Level 3. "We've provided network services for this event for the past three years because we view the conference as a leading indicator of the next generation of scientific applications that distinguished researchers from all over the world are working diligently to unleash. Level 3 will continue to serve the advanced networking needs of the research and academic community, as we believe that we have a technologically superior broadband infrastructure that can help enable new scientific applications that are poised to significantly contribute to societies around the globe."

Cees de Laat, associate professor at the University of Amsterdam and organizer of the Global Lambda Integrated Facility (GLIF) Forum, added: "This world-scale experiment combined leading researchers, advanced optical networks, and network research sites to achieve this outstanding result. We were able to glimpse a yet-to-be explored network paradigm, where both shared and dedicated paths are exploited to map the data flows of big science onto a hybrid network infrastructure in the most cost-effective way. We need to develop a new knowledge base to use wavelength-based networks and Grids effectively, and projects such as UltraLight, TransLight, NetherLight, and UKLight, in which the team members are involved, have a central role to play in reaching this goal."

###

About Caltech: With an outstanding faculty, including four Nobel laureates, and such off-campus facilities as the Jet Propulsion Laboratory, Palomar Observatory, and the W. M. Keck Observatory, the California Institute of Technology is one of the world's major research centers. The Institute also conducts instruction in science and engineering for a student body of approximately 900 undergraduates and 1,000 graduate students who maintain a high level of scholarship and intellectual achievement. Caltech's 124-acre campus is situated in Pasadena, California, a city of 135,000 at the foot of the San Gabriel Mountains, approximately 30 miles inland from the Pacific Ocean and 10 miles northeast of the Los Angeles Civic Center. Caltech is an independent, privately supported university, and is not affiliated with either the University of California system or the California State Polytechnic universities. http://www.caltech.edu

About SLAC: The Stanford Linear Accelerator Center (SLAC) is one of the world's leading research laboratories. Its mission is to design, construct, and operate state-of-the-art electron accelerators and related experimental facilities for use in high-energy physics and synchrotron radiation research. In the course of doing so, it has established the largest known database in the world, which grows at 1 terabyte per day. That, and its central role in the world of high-energy physics collaboration, places SLAC at the forefront of the international drive to optimize the worldwide, high-speed transfer of bulk data. http://www.slac.stanford.edu/

About LANL: Los Alamos National Laboratory is operated by the University of California for the National Nuclear Security Administration of the U.S. Department of Energy and works in partnership with NNSA's Sandia and Lawrence Livermore National Laboratories to support NNSA in its mission. Los Alamos enhances global security by ensuring the safety and reliability of the U.S. nuclear weapons stockpile, developing technical solutions to reduce the threat of weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and national security concerns. http://www.lanl.gov/

About Netlab: Netlab is the Networking Laboratory at Caltech led by Professor Steven Low, where FAST TCP has been developed. The group does research in the control and optimization of protocols and networks, and designs, analyzes, implements, and experiments with new algorithms and systems. http://netlab.caltech.edu/FAST/

About CERN: CERN, the European Organization for Nuclear Research, has its headquarters in Geneva. At present, its member states are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland, and the United Kingdom. Israel, Japan, the Russian Federation, the United States of America, Turkey, the European Commission, and UNESCO have observer status. For more information, see http://www.cern.ch.

About DataTAG: The European DataTAG is a project co-funded by the European Union, the U.S. Department of Energy through Caltech, and the National Science Foundation. It is led by CERN together with four other partners. The project brings together the following European leading research agencies: Italy's Istituto Nazionale di Fisica Nucleare (INFN), France's Institut National de Recherche en Informatique et en Automatique (INRIA), the U.K.'s Particle Physics and Astronomy Research Council (PPARC), and the Netherlands' University of Amsterdam (UvA). The DataTAG project is very closely associated with the European Union DataGrid project, the largest Grid project in Europe also led by CERN. For more information, see http://www.datatag.org.

About StarLight: StarLight is an advanced optical infrastructure and proving ground for network services optimized for high-performance applications. Operational since summer 2001, StarLight is a 1 GE and 10 GE switch/router facility for high-performance access to participating networks and also offers true optical switching for wavelengths. StarLight is being developed by the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC), the International Center for Advanced Internet Research (iCAIR) at Northwestern University, and the Mathematics and Computer Science Division at Argonne National Laboratory, in partnership with Canada's CANARIE and the Netherlands' SURFnet. STAR TAP and StarLight are made possible by major funding from the U.S. National Science Foundation to UIC. StarLight is a service mark of the Board of Trustees of the University of Illinois. See www.startap.net/starlight.

About the University of Manchester: The University of Manchester, located in the United Kingdom, was first granted a Royal Charter in April 1880 as the Victoria University and became the first of the U.K.'s great civic universities. As a full-range university it now has more than 70 departments involved in teaching and research, with more than 2,000 academic staff. There are more than 18,000 full-time students, including 2,500 international students from over 120 countries, studying for undergraduate and postgraduate level degrees. The University of Manchester has a proud tradition of innovation and excellence which continues today. Some of the key scientific developments of the century have taken place here: in Manchester, Rutherford conducted the research that led to the splitting of the atom, and the world's first stored-program electronic digital computer, built by Freddie Williams and Tom Kilburn, successfully executed its first program in June 1948. The departments of Physics, Computational Science, and Computer Science, and the Network Group, together with the E-Science North West Centre research facility, are very active in developing a wide range of e-science projects and Grid technologies. See www.man.ac.uk.

About National LambdaRail: National LambdaRail (NLR) is a major initiative of U.S. research universities and private sector technology companies to provide a national scale infrastructure for research and experimentation in networking technologies and applications. NLR puts the control, the power, and the promise of experimental network infrastructure in the hands of the nation's scientists and researchers. Visit http://www.nationallambdarail.org for more information.

About CENIC: CENIC is a not-for-profit corporation serving California Institute of Technology, California State University, Stanford University, University of California, University of Southern California, California Community Colleges, and the statewide K-12 school system. CENIC's mission is to facilitate and coordinate the development, deployment, and operation of a set of robust multi-tiered advanced network services for this research and education community. http://www.cenic.org

About the University of Amsterdam: The Advanced Internet Research group of the University of Amsterdam's Faculty of Science researches new architectures and protocols for the Internet. It actively participates in worldwide standardization organizations, including the Internet Engineering Task Force and the Global Grid Forum. The group conducts experiments with extremely high-speed network infrastructures. The Institute carries out groundbreaking research in the fields of security, authorization, authentication, and accounting for grid environments. The Institute is developing a virtual laboratory based on grid technology for e-science applications. For more information, see http://www.science.uva.nl/research/air.

Writer: 
Robert Tindol

The length of the gaze affects human preferences, new study shows

PASADENA, Calif.—Beauty may be in the eye of the beholder, but a new psychophysical study from the California Institute of Technology suggests that the length of the beholding is important, too.

In an article appearing in the December 2003 issue of the journal Nature Neuroscience, Caltech biology professor Shinsuke Shimojo and his colleagues report that human test subjects asked to choose between two faces will spend progressively more time gazing at the face they will eventually choose as the more attractive one. Also, test subjects will typically choose the face that has been preferentially shown for a longer time by the experimenter. In addition, the results show that the effect of gaze duration on preference also holds true for choices between abstract geometric figures.

The findings show that human preferences may be more fundamentally tied to "feedback" between the very act of gazing and the internal, cognitive prototype of attractiveness than was formerly assumed. Earlier work by other researchers has relied on the "attractiveness template," which assumes that an individual's ideal conception of beauty has somehow been imprinted on his or her brain due to early exposures to other people's faces, such as the mother.

In fact, Shimojo says, the new results come from experiments especially designed to minimize the influence of earlier biases and existing preferences. Even when images of faces have been computer-processed to eliminate possible biases due to ethnic origins and even such trivial factors as hairstyles, the results still show strongly that the gaze is subconsciously oriented toward the eventual choice. This holds true even more strongly when a test subject is asked to choose between two abstract geometric figures, suggesting that the slightly lower tendency to fix the gaze on the eventual choice of two faces is influenced by existing selection biases that cannot be totally controlled.

The findings in Nature Neuroscience comprise two experiments. The first was the choice of the more attractive face, in which all the test subjects were asked to rate the faces from 1 (very unattractive) to 7 (very attractive). The average rating for each face was then calculated so that faces in pairs could be matched in different ways.

In the "face-attractiveness-easy task" the faces were paired according to gender, race, and neutrality of facial expressions, but comprised a choice of a "very unattractive" face with a "very attractive" face. Five test subjects were then shown 19 face pairs and were asked to choose the face they preferred. A video camera recorded the movements of their eyes as they directed their attention from one face on the screen to the other.

The results showed that the test subjects' gaze started at chance (50 percent split between the two faces), but by the moment of decision they were spending more than 70 percent of their time gazing at the face they would choose.

Even more striking was the difference in gaze devoted to the "face-attractiveness-difficult task," in which 30 pairs of faces were matched according to the closeness in which they had been ranked for attractiveness. In this experiment, the test subjects spent up to 83 percent of their time gazing at the face they would choose immediately before their decision response, suggesting that the gaze is even more important when there is little difference in the features of stimuli themselves.

The test subjects were also asked to choose the least attractive face, as well as the rounder face, and the results also showed that the length of the gaze was an important indicator of the eventual choice. In addition, the subjects were asked to choose between abstract geometric shapes, and the length of gaze also correlated highly with the eventual choice.

The second experiment was "gaze manipulation," in which the faces were not shown simultaneously, but in sequences of varying duration on the two sides of the computer screen. In other words, one face was shown for a longer time (900 milliseconds) than the other (300 milliseconds); as a control, the faces were also shown to other subjects in the center of the screen in an alternating sequence.

The results show that the face shown for the longer time is chosen only at chance level (50 percent) after two repetitions of the sequence, but about 59 percent of the time after 12 repetitions. This suggests that the duration of the gaze can influence the choice. However, the manipulation did not work in the control condition without gaze shift, mentioned above, indicating that it is not mere exposure time, but rather active gaze shift, that made the difference.

In sum, the results indicate that active orienting by gaze shift is wired into the brain and that humans use it all the time, albeit subconsciously, Shimojo says. One example is our preference for good eye contact with people whom we are engaging in conversation.

"If I look directly into your eyes, then glance at your ears, you can immediately tell that I've broken eye contact, even if we're some distance apart," Shimojo explains. "This shows that there are subtle clues to what's in the mind."

In addition to Shimojo, the other authors are Claudiu Simion, a graduate student in biology at Caltech; Christian Scheier, a former postdoctoral researcher in Shimojo's lab; and Eiko Shimojo, of the School of Human Studies/Psychology at Bunkyo Gakuin University in Japan. Shinsuke Shimojo and Claudiu Simion contributed equally to the work. 

Writer: 
Robert Tindol

Gamma-Ray Bursts, X-Ray Flashes, and Supernovae Not As Different As They Appear

PASADENA, Calif.—For the past several decades, astrophysicists have been puzzling over the origin of powerful but seemingly different explosions that light up the cosmos several times a day. A new study this week demonstrates that all three flavors of these cosmic explosions--gamma-ray bursts, X-ray flashes, and certain supernovae of type Ic--are in fact connected by their common explosive energy, suggesting that a single type of phenomenon, the explosion of a massive star, is the culprit. The main difference between them is the "escape route" used by the energy as it flees from the dying star and its newly born black hole.

In the November 13 issue of the journal Nature, Caltech graduate student Edo Berger and an international group of colleagues report that cosmic explosions have pretty much the same total energy, but this energy is divided up differently between fast and slow jets in each explosion. This insight was made possible by radio observations, carried out at the National Radio Astronomy Observatory's Very Large Array (VLA), and Caltech's Owens Valley Radio Observatory, of a gamma-ray burst that was localized by NASA's High Energy Transient Explorer (HETE) satellite on March 29 of this year.

The burst, which at 2.6 billion light-years is the closest classical gamma-ray burst ever detected, allowed Berger and the other team members to obtain unprecedented detail about the jets shooting out from the dying star. The burst was in the constellation Leo.

"By monitoring all the escape routes, we realized that the gamma rays were just a small part of the story for this burst," Berger says, referring to the nested jet of the burst of March 29, which had a thin core of weak gamma rays surrounded by a slow and massive envelope that produced copious radio waves.

"This stumped me," Berger adds, "because gamma-ray bursts are supposed to produce mainly gamma rays, not radio waves!"

Gamma-ray bursts, first detected accidentally decades ago by military satellites watching for nuclear tests on Earth and in space, occur about once a day. Until now it was generally assumed that the explosions are so titanic that the accelerated particles rushing out in antipodal jets always give off prodigious amounts of gamma radiation, sometimes for hundreds of seconds. On the other hand, the more numerous supernovae of type Ic in our local part of the universe seem to be weaker explosions that produce only slow particles. X-ray flashes were thought to occupy the middle ground.

"The insight gained from the burst of March 29 prompted us to examine previously studied cosmic explosions," says Berger. "In all cases we found that the total energy of the explosion is the same. This means that cosmic explosions are beasts with different faces but the same body."

According to Shri Kulkarni, MacArthur Professor of Astronomy and Planetary Science at Caltech and Berger's thesis supervisor, these findings are significant because they suggest that many more explosions may go undetected. "By relying on gamma rays or X rays to tell us when an explosion is taking place, we may be exposing only the tip of the cosmic explosion iceberg."

The mystery we need to confront at this point, Kulkarni adds, is why the energy in some explosions chooses a different escape route than in others.

At any rate, adds Dale Frail, an astronomer at the VLA and coauthor of the Nature manuscript, astrophysicists will almost certainly make progress in the near future. In a few months NASA will launch a gamma-ray detecting satellite known as Swift, which is expected to localize about 100 gamma-ray bursts each year. Even more importantly, the new satellite will relay very accurate positions of the bursts within one or two minutes of initial detection.

The article appearing in Nature is titled "A Common Origin for Cosmic Explosions Inferred from Calorimetry of GRB 030329." In addition to Berger, the lead author, and Kulkarni and Frail, the other authors are Guy Pooley, of Cambridge University's Mullard Radio Astronomy Observatory; Vince McIntyre and Robin Wark, both of the Australia Telescope National Facility; Re'em Sari, associate professor of astrophysics and planetary science at Caltech; Derek Fox, a postdoctoral scholar in astronomy at Caltech; Alicia Soderberg, a graduate student in astrophysics at Caltech; Sarah Yost, a postdoctoral scholar in physics at Caltech; and Paul Price, a postdoctoral scholar at the University of Hawaii's Institute for Astronomy.

Writer: 
Robert Tindol
Writer: 

Atmospheric Scientists Still Acquire Samples the Old-Fashioned Way--By Flying Up and Getting Them

PASADENA, Calif.—Just as Ishmael always returned to the high seas for whales after spending time on land, an atmospheric researcher always returns to the air for new data.

All scientific disciplines depend on the direct collection of data on natural phenomena to one extent or another. But atmospheric scientists still find it especially important to do some empirical data-gathering, and the best way to get what they need is by taking up a plane and more or less opening a window.

At the California Institute of Technology, where atmospheric science is a major interest involving researchers in several disciplines, the collection of data is considered important enough to justify the maintenance of a specially equipped plane dedicated to the purpose. In addition to the low-altitude plane, several Caltech researchers who need higher-altitude data are also heavy users of the jet aircraft maintained by NASA for its Airborne Science Program--a longstanding but relatively unsung initiative with aircraft based at the Dryden Flight Research Center in California's Mojave Desert.

"The best thing about using aircraft instead of balloons is that you are assured of getting your instruments back in working order," says Paul Wennberg, professor of atmospheric chemistry and environmental engineering science. Wennberg, whose work has been often cited in policy debates about the human impact on the ozone layer, often relies on the NASA suborbital platforms (i.e., various piloted and drone aircraft operating at mid to high altitudes) to collect his data.

Wennberg's experiments typically ride on the high-flying ER-2, which is a revamped reconnaissance U-2. The plane has room for the pilot only, which means that the experimental equipment has to be hands-free and independent of constant technical attention. Recently, Wennberg's group has made measurements from a reconfigured DC-8 that has room for some 30 passengers, depending on the scientific payload, but the operating ceiling is some tens of thousands of feet lower than that of the ER-2.

"The airplane program has been the king for NASA in terms of discoveries," Wennberg says. "Atmospheric science, and certainly atmospheric chemistry, is still very much an observational field. The discoveries we've made have not been by modeling, but by consistent surprise when we've taken up instruments and collected measurements."

In his field of atmospheric chemistry, Wennberg says the three foundations are laboratory work, synthesis and modeling, and observational data--the latter still being the most important.

"You might have hoped we'd be at the place where we could go to the field as a confirmation of what we did back in the lab or with computer programs, but that's not true. We go to the field and see things we don't understand."

Wennberg sometimes worries about the public perception of the value of the Airborne Science Program because the launching of a conventional jet aircraft is by no means as glamorous or romantic as the blasting off of a rocket from Cape Canaveral. By contrast, his own data-collection would appear to most as bread-and-butter work involving a few tried-and-true jet airplanes.

"If you hear that the program uses 'old technology,' this refers to the planes themselves and not the instruments, which are state-of-the-art," he says. "The platforms may be old, but it's really a vacuous argument to say that the program is in any way old.

"I would argue that the NASA program is a very cost-effective way to go just about anywhere on Earth and get data."

Chris Miller, who is a mission manager for the Airborne Science Program at the Dryden Flight Research Center, can attest to the range and abilities of the DC-8 by merely pointing to his control station behind the pilot's cabin. On his wall are mounted literally dozens of travel stick-ons from places around the world where the DC-8 passengers have done research. Included are mementos from Hong Kong, Singapore, New Zealand, Australia, Japan, Thailand, and Greenland, to name a few.

"In addition to atmospheric chemistry, we also collect data for Earth imaging, oceanography, agriculture, disaster preparedness, and archaeology," says Miller. "There can be anywhere from two or three to 15 experiments on a plane, and each experiment can be one rack of equipment to half a dozen."

Wennberg and colleagues Fred Eisele of the National Center for Atmospheric Research and Rick Flagan, who is McCollum Professor of Chemical Engineering, have developed special instrumentation to ride on the ER-2. One of their new instruments is a selected-ion chemical ionization mass spectrometer, which is used to study the composition of atmospheric aerosols and the mechanisms that lead to their production.

Caltech's Nohl Professor and professor of chemical engineering, John Seinfeld, conducts an aircraft program that is a bit more down-to-earth, at least in the literal sense.

Seinfeld is considered perhaps the world's leading authority on atmospheric particles, or aerosols--that is, all the material in the air, such as sulfur compounds and various other pollutants, that is not classifiable as a gas. He and his associates study the size, composition, and optical properties of these particles, as well as their effects on solar radiation, on cloud formation, and ultimately on Earth's climate.

"Professor Rick Flagan and I have been involved for a number of years in an aircraft program largely funded by the Office of Naval Research, and established jointly with the Naval Postgraduate School in Monterey. The joint program was given the acronym CIRPAS," says Seinfeld, explaining that CIRPAS, the Center for Interdisciplinary Remotely Piloted Aircraft Studies, acknowledges the Navy's interest in making certain types of environmental research amenable for drone aircraft like the Predator.

"The Twin Otter is our principal aircraft, and it's very rugged and dependable," he adds. "It's the size of a small commuter aircraft, and it's mind-boggling how much instrumentation we can pack in this relatively small aircraft."

Caltech scientists used the plane in July to study the effects of particles on the marine strata off the California coast, and the plane has also been to the Canary Islands, Japan, Key West, Florida, and other places. In fact, the Twin Otter can essentially be taken anywhere in the world.

One hot area of research these days, pardon the term, is the interaction of particulate pollution with radiation from the sun. This is important for climate research, because, if one looks down from a high-flying jet on a smoggy day, it becomes clear that a lot of sunlight is bouncing back and never reaching the ground. Changing atmospheric conditions therefore affect Earth's heat balance.

"If you change properties of clouds, then you change the climatic conditions on Earth," Seinfeld says. "Clouds are a major component in the planet's energy balance."

Unlike the ER-2, in which instrumentation must be contained in a small space, the Twin Otter can accommodate mass spectrometers and similar instruments for direct onboard logging and analysis of data. The data are streamed to the ground in real time, which means that the scientists can sit in the hangar and watch the data come in. Seinfeld himself is one of those on the ground, leaving the two scientist seats in the plane to those whose instruments may require in-flight attention.

"We typically fly below 10,000 feet because the plane is not pressurized. Most of the phenomena we want to study occur below this altitude," he says.

John Eiler, associate professor of geochemistry, is another user of the NASA Airborne Science Program, particularly the air samples returned by the ER-2. Eiler is especially interested these days in the global hydrogen budget, and in how a hydrogen-fueled transportation infrastructure could someday impact the environment.

Eiler and Caltech professor of planetary science Yuk Yung, along with lead author Tracey Tromp and several others, issued a paper on the hydrogen economy in June that quickly became one of the most controversial Caltech research projects in recent memory. Using mathematical modeling, the group showed that the inevitable leakage of hydrogen in a hydrogen-fueled economy could impact the ozone layer.

More recently Eiler and another group of collaborators, using samples returned by the ER-2 and subjected to mass spectrometry, have reported further details on how hydrogen could impact the environment. Specifically, they capitalized on the ER-2's high-altitude capabilities to collect air samples in the only region of Earth where it's simple and straightforward to infer the precise cascade of reactions involving hydrogen and methane.

Though it may seem contradictory, the Eiler team's conclusion from stratospheric research was that hydrogen-eating microbes in soils can take care of at least some of the hydrogen leaked by human activity.

"This study was made possible by data collection," Eiler says. "So it's still the case in atmospheric chemistry that there's no substitute for going up and getting samples."

Writer: 
RT
Writer: 