Caltech, SLAC, and LANL Set New Network Performance Marks

PHOENIX, Ariz.--Teams of physicists, computer scientists, and network engineers from Caltech, SLAC, LANL, CERN, Manchester, and Amsterdam joined forces at the Supercomputing 2003 (SC2003) Bandwidth Challenge and captured the Sustained Bandwidth Award for their demonstration of "Distributed particle physics analysis using ultra-high speed TCP on the Grid," with a record bandwidth mark of 23.2 gigabits per second (or 23.2 billion bits per second).

The demonstration served to preview future Grid systems on a global scale, where communities of hundreds to thousands of scientists around the world would be able to access, process, and analyze terabyte-sized data samples, drawn from data stores thousands of times larger. A new generation of Grid systems is being developed in the United States and Europe to meet these challenges, and to support the next generation of high-energy physics experiments that are now under construction at the CERN laboratory in Geneva.

The currently operating high-energy physics experiments at SLAC (Palo Alto, California), Fermilab (Batavia, Illinois), and BNL (Upton, New York) are facing qualitatively similar challenges.

During the Bandwidth Challenge, the teams used all three of the 10 gigabit/sec wide-area network links provided by Level 3 Communications and Nortel, which connected the SC2003 site to Los Angeles, and from there to the Abilene backbone of Internet2, to the TeraGrid, and to Palo Alto over a link provided by CENIC and National LambdaRail. The bandwidth mark achieved was more than 500,000 times the speed of a typical modem connection (43 kilobits per second). The amount of TCP data transferred during the 48-minute demonstration was over 6.6 terabytes (6.6 trillion bytes). Typical single-stream host-to-host TCP data rates were 3.5 to 5 gigabits per second, approaching the single-stream bandwidth records set last month by Caltech and CERN.
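
As a quick back-of-the-envelope check, the figures quoted above are mutually consistent; the short Python sketch below (illustrative only, not part of the demonstration software) reproduces the comparisons from the numbers in this release.

    # Back-of-the-envelope check of the figures quoted in this release.
    peak_bps = 23.2e9        # record mark: 23.2 gigabits per second
    modem_bps = 43e3         # typical modem connection: 43 kilobits per second
    bytes_moved = 6.6e12     # 6.6 terabytes of TCP data transferred
    duration_s = 48 * 60     # 48-minute demonstration

    print(peak_bps / modem_bps)                 # about 540,000, i.e. "more than 500,000 times" a modem
    print(bytes_moved * 8 / duration_s / 1e9)   # about 18.3 Gbps sustained on average over the run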

The data, generated by servers at the Caltech Center for Advanced Computing Research (CACR), SLAC, and LANL booths on the SC2003 show floor in Phoenix, as well as by a cluster at the StarLight facility in Chicago and the TeraGrid node at Caltech, was sent to sites in four countries (the United States, Switzerland, the Netherlands, and Japan) on three continents. Participating sites in the winning effort were the Caltech/DataTAG and Amsterdam/SURFnet PoPs in Chicago (hosted by StarLight), the Caltech PoP in Los Angeles (hosted by CENIC), the SLAC PoP in Palo Alto, CERN and the DataTAG backbone in Geneva, the University of Amsterdam and SURFnet in Amsterdam, the AMPATH PoP at Florida International University in Miami, and the KEK Laboratory in Tokyo. Support was provided by DOE, NSF, PPARC, Cisco Systems, Level 3, Nortel, Hewlett-Packard, Intel, and Foundry Networks.

The team showed that it could use both dedicated and shared IP backbones efficiently. Peak traffic on the Los Angeles-Phoenix circuit, which was dedicated to this experiment, reached almost 10 gigabits per second, utilizing more than 99 percent of the capacity. On the shared Abilene and TeraGrid circuits, the experiment shared the links fairly while using more than 85 percent of the available bandwidth. Snapshots of the maximum link utilizations during the demonstration showed 8.7 gigabits per second on the Abilene link and 9.6 gigabits per second on the TeraGrid link.

This performance could not have been achieved without new TCP implementations, because the widely deployed TCP Reno protocol performs poorly at gigabit-per-second speeds. The primary TCP algorithm used was the new FAST TCP stack developed at the Caltech Netlab; additional streams were generated using HS-TCP, implemented at Manchester, and Scalable TCP.
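
FAST TCP differs from TCP Reno in that it adjusts each flow's congestion window from measured round-trip delay rather than from packet loss. The Python sketch below illustrates that style of delay-based window update; the update rule follows the general form published by the Caltech Netlab, but the function name and parameter values are illustrative assumptions, not the production FAST stack.

    def fast_window_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
        """One delay-based congestion-window update in the style of FAST TCP.

        w        -- current congestion window (packets)
        base_rtt -- smallest round-trip time seen (propagation-delay estimate)
        rtt      -- current measured round-trip time (includes queueing delay)
        alpha    -- target number of packets kept queued in the network (assumed value)
        gamma    -- smoothing factor between 0 and 1 (assumed value)
        """
        target = (base_rtt / rtt) * w + alpha          # equilibrium window for this delay
        return min(2 * w, (1 - gamma) * w + gamma * target)

    # Example: a flow on a 90 ms path currently seeing 10 ms of queueing delay
    w = 1000.0
    for _ in range(100):
        w = fast_window_update(w, base_rtt=0.090, rtt=0.100)
    print(round(w))   # converges toward alpha * rtt / (rtt - base_rtt) = 2000 packets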

Harvey Newman, professor of physics at Caltech, said: "This was a milestone in our development of wide-area networks and of global data-intensive systems for science. Within the past year we have learned how to use shared networks up to the 10 gigabit-per-second range effectively. In the next round we will combine these developments with the dynamic building of optical paths across countries and oceans. This paves the way for more flexible, efficient sharing of data by scientists in many countries, and could be a key factor enabling the next round of physics discoveries at the high-energy frontier. There are also profound implications for integrating information sharing and on-demand audiovisual collaboration in our daily lives, with a scale and quality previously unimaginable."

Les Cottrell, assistant director of SLAC's computer services, said: "This demonstrates that commonly available standard commercial hardware and software, from vendors like Cisco, can effectively and fairly use and fill up today's high-speed Internet backbones, and sustain TCP flows of many gigabits per second on both dedicated and shared intracountry and transcontinental networks. As 10 gigabit-per-second Ethernet equipment follows the price reduction curve experienced by earlier lower-speed standards, this will enable the next generation of high-speed networking and will catalyze new data-intensive applications in fields such as high-energy physics, astronomy, global weather, bioinformatics, seismology, medicine, disaster recovery, and media distribution."

Wu-chun (Wu) Feng, team leader of research and development in Advanced Network Technology in the Advanced Computing Laboratory at LANL, noted: "The SC2003 Bandwidth Challenge provided an ideal venue to demonstrate how a multi-institutional and multi-vendor team can quickly come together to achieve a feat that would otherwise be unimaginable today. Through the collaborative efforts of Caltech, SLAC, LANL, CERN, Manchester, and Amsterdam, we have once again pushed the envelope of high-performance networking. Moore's law move over!"

"Cisco was very pleased to help support the SC2003 show infrastructure, SCINET," said Bob Aiken, director of engineering for academic research and technology initiatives at Cisco. "In addition, we also had the opportunity to work directly with the high-energy physics (HEP) research community at SLAC and Caltech in the United States, SURFnet in the Netherlands, CERN in Geneva, and KEK in Japan, to once again establish a new record for advanced network infrastructure performance.

"In addition to supporting network research on the scaling of TCP, Cisco also provided a wide variety of solutions, including Cisco Systems ONS 15540, Cisco ONS 15808, Cisco Catalyst 6500 Series, Cisco 7600 Series, and Cisco 12400 Series at the HEP sites in order for them to attain their goal. The Cisco next-generation 10 GE line cards deployed at SC2003 were part of the interconnect between the HEP sites of Caltech, SLAC, CERN, KEK/Japan, SURFnet, StarLight, and the CENIC network."

"Level 3 was pleased to support the SC2003 conference again this year," said Paul Fernes, director of business development for Level 3. "We've provided network services for this event for the past three years because we view the conference as a leading indicator of the next generation of scientific applications that distinguished researchers from all over the world are working diligently to unleash. Level 3 will continue to serve the advanced networking needs of the research and academic community, as we believe that we have a technologically superior broadband infrastructure that can help enable new scientific applications that are poised to significantly contribute to societies around the globe."

Cees de Laat, associate professor at the University of Amsterdam and organizer of the Global Lambda Integrated Facility (GLIF) Forum, added: "This world-scale experiment combined leading researchers, advanced optical networks, and network research sites to achieve this outstanding result. We were able to glimpse a yet-to-be explored network paradigm, where both shared and dedicated paths are exploited to map the data flows of big science onto a hybrid network infrastructure in the most cost-effective way. We need to develop a new knowledge base to use wavelength-based networks and Grids effectively, and projects such as UltraLight, TransLight, NetherLight, and UKLight, in which the team members are involved, have a central role to play in reaching this goal."

###

About Caltech: With an outstanding faculty, including four Nobel laureates, and such off-campus facilities as the Jet Propulsion Laboratory, Palomar Observatory, and the W. M. Keck Observatory, the California Institute of Technology is one of the world's major research centers. The Institute also conducts instruction in science and engineering for a student body of approximately 900 undergraduates and 1,000 graduate students who maintain a high level of scholarship and intellectual achievement. Caltech's 124-acre campus is situated in Pasadena, California, a city of 135,000 at the foot of the San Gabriel Mountains, approximately 30 miles inland from the Pacific Ocean and 10 miles northeast of the Los Angeles Civic Center. Caltech is an independent, privately supported university, and is not affiliated with either the University of California system or the California State Polytechnic universities. http://www.caltech.edu

About SLAC: The Stanford Linear Accelerator Center (SLAC) is one of the world's leading research laboratories. Its mission is to design, construct, and operate state-of-the-art electron accelerators and related experimental facilities for use in high-energy physics and synchrotron radiation research. In the course of doing so, it has established the largest known database in the world, which grows at 1 terabyte per day. That, and its central role in the world of high-energy physics collaboration, places SLAC at the forefront of the international drive to optimize the worldwide, high-speed transfer of bulk data. http://www.slac.stanford.edu/

About LANL: Los Alamos National Laboratory is operated by the University of California for the National Nuclear Security Administration of the U.S. Department of Energy and works in partnership with NNSA's Sandia and Lawrence Livermore National Laboratories to support NNSA in its mission. Los Alamos enhances global security by ensuring the safety and reliability of the U.S. nuclear weapons stockpile, developing technical solutions to reduce the threat of weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and national security concerns. http://www.lanl.gov/

About Netlab: Netlab is the Networking Laboratory at Caltech led by Professor Steven Low, where FAST TCP has been developed. The group does research in the control and optimization of protocols and networks, and designs, analyzes, implements, and experiments with new algorithms and systems. http://netlab.caltech.edu/FAST/

About CERN: CERN, the European Organization for Nuclear Research, has its headquarters in Geneva. At present, its member states are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland, and the United Kingdom. Israel, Japan, the Russian Federation, the United States of America, Turkey, the European Commission, and UNESCO have observer status. For more information, see http://www.cern.ch.

About DataTAG: DataTAG is a European project co-funded by the European Union, the U.S. Department of Energy through Caltech, and the National Science Foundation. It is led by CERN together with four other partners. The project brings together the following leading European research agencies: Italy's Istituto Nazionale di Fisica Nucleare (INFN), France's Institut National de Recherche en Informatique et en Automatique (INRIA), the U.K.'s Particle Physics and Astronomy Research Council (PPARC), and the Netherlands' University of Amsterdam (UvA). The DataTAG project is closely associated with the European Union DataGrid project, the largest Grid project in Europe, which is also led by CERN. For more information, see http://www.datatag.org.

About StarLight: StarLight is an advanced optical infrastructure and proving ground for network services optimized for high-performance applications. Operational since summer 2001, StarLight is a 1 GE and 10 GE switch/router facility for high-performance access to participating networks and also offers true optical switching for wavelengths. StarLight is being developed by the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC), the International Center for Advanced Internet Research (iCAIR) at Northwestern University, and the Mathematics and Computer Science Division at Argonne National Laboratory, in partnership with Canada's CANARIE and the Netherlands' SURFnet. STAR TAP and StarLight are made possible by major funding from the U.S. National Science Foundation to UIC. StarLight is a service mark of the Board of Trustees of the University of Illinois. See www.startap.net/starlight.

About the University of Manchester: The University of Manchester, located in the United Kingdom, was first granted a Royal Charter in April 1880 as the Victoria University and became the first of the U.K.'s great civic universities. As a full-range university it now has more than 70 departments involved in teaching and research, with more than 2,000 academic staff. There are more than 18,000 full-time students, including 2,500 international students from over 120 countries, studying for undergraduate and postgraduate degrees. The University of Manchester has a proud tradition of innovation and excellence that continues today. Some of the key scientific developments of the past century have taken place there: in Manchester, Rutherford conducted the research that led to the splitting of the atom, and the world's first stored-program electronic digital computer, built by Freddie Williams and Tom Kilburn, successfully executed its first program in June 1948. The departments of Physics, Computational Science, and Computer Science and the Network Group, together with the E-Science North West Centre research facility, are very active in developing a wide range of e-science projects and Grid technologies. See www.man.ac.uk.

About National LambdaRail: National LambdaRail (NLR) is a major initiative of U.S. research universities and private sector technology companies to provide a national scale infrastructure for research and experimentation in networking technologies and applications. NLR puts the control, the power, and the promise of experimental network infrastructure in the hands of the nation's scientists and researchers. Visit http://www.nationallambdarail.org for more information.

About CENIC: CENIC is a not-for-profit corporation serving California Institute of Technology, California State University, Stanford University, University of California, University of Southern California, California Community Colleges, and the statewide K-12 school system. CENIC's mission is to facilitate and coordinate the development, deployment, and operation of a set of robust multi-tiered advanced network services for this research and education community. http://www.cenic.org

About the University of Amsterdam: The Advanced Internet Research group of the University of Amsterdam's Faculty of Science researches new architectures and protocols for the Internet. It actively participates in worldwide standardization organizations such as the Internet Engineering Task Force and the Global Grid Forum. The group conducts experiments with extremely high-speed network infrastructures and carries out groundbreaking research in security, authorization, authentication, and accounting for Grid environments. It is also developing a virtual laboratory based on Grid technology for e-science applications. For more information, see http://www.science.uva.nl/research/air.

Writer: Robert Tindol

The length of the gaze affects human preferences, new study shows

PASADENA, Calif.—Beauty may be in the eye of the beholder, but a new psychophysical study from the California Institute of Technology suggests that the length of the beholding is important, too.

In an article appearing in the December 2003 issue of the journal Nature Neuroscience, Caltech biology professor Shinsuke Shimojo and his colleagues report that human test subjects asked to choose between two faces will spend increasingly more time gazing at the face that they will eventually choose as the one more attractive. Also, test subjects will typically choose the face that has been preferentially shown for a longer time by the experimenter. In addition, the results show that the effect of gaze duration on preference also holds true for choices between abstract geometric figures.

The findings show that human preferences may be more fundamentally tied to "feedback" between the very act of gazing and the internal, cognitive prototype of attractiveness than was formerly assumed. Earlier work by other researchers has relied on the "attractiveness template," which assumes that an individual's ideal conception of beauty has somehow been imprinted on his or her brain by early exposure to other people's faces, such as the mother's.

In fact, Shimojo says, the new results come from experiments especially designed to minimize the influence of earlier biases and existing preferences. Even when images of faces have been computer-processed to eliminate possible biases due to ethnic origins and even such trivial factors as hairstyles, the results still show strongly that the gaze is subconsciously oriented toward the eventual choice. This holds true even more strongly when a test subject is asked to choose between two abstract geometric figures, suggesting that the slightly lower tendency to fix the gaze on the eventual choice of two faces is influenced by existing selection biases that cannot be totally controlled.

The findings in Nature Neuroscience comprise two experiments. The first was the choice of the more attractive face, in which all the test subjects were asked to rate the faces from 1 (very unattractive) to 7 (very attractive). The average rating for each face was then calculated so that faces in pairs could be matched in different ways.

In the "face-attractiveness-easy task" the faces were paired according to gender, race, and neutrality of facial expressions, but comprised a choice of a "very unattractive" face with a "very attractive" face. Five test subjects were then shown 19 face pairs and were asked to choose the face they preferred. A video camera recorded the movements of their eyes as they directed their attention from one face on the screen to the other.

The results showed that the test subjects' likelihood of gazing at the eventually chosen face started at chance (50 percent) and rose to more than 70 percent of their viewing time just before they made their choice.

Even more striking was the difference in gaze devoted to the "face-attractiveness-difficult task," in which 30 pairs of faces were matched according to how closely they had been rated for attractiveness. In this experiment, the test subjects spent up to 83 percent of their time gazing at the face they would choose immediately before their decision response, suggesting that the gaze is even more important when there is little difference between the stimuli themselves.
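
As a concrete illustration of the measure being reported, the toy Python sketch below computes the fraction of gaze samples that fall on the eventually chosen face during a short window before the decision; the sample data and window length are invented for illustration and are not the authors' analysis.

    # Toy illustration of the gaze-fraction measure described above.
    # Each entry records which face the subject was fixating ("L" or "R");
    # the values and window size are invented for illustration only.
    gaze_samples = ["L", "R", "L", "L", "R", "L", "L", "L", "R", "L", "L", "L"]
    chosen_face = "L"
    window = 6   # number of samples just before the decision

    recent = gaze_samples[-window:]
    fraction = recent.count(chosen_face) / len(recent)
    print(f"{fraction:.0%} of the final {window} samples were on the chosen face")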

The test subjects were also asked to choose the least attractive face, as well as the rounder face, and the results also showed that the length of the gaze was an important indicator of the eventual choice. In addition, the subjects were asked to choose between abstract geometric shapes, and the length of gaze also correlated highly with the eventual choice.

The second experiment was "gaze manipulation," in which the faces were not shown simultaneously but in sequences of varying duration on the two sides of the computer screen. In other words, one face was shown for a longer time (900 milliseconds) than the other (300 milliseconds); as a control, the faces were also shown to other subjects in the center of the screen in an alternating sequence.

The results show that the face shown for a longer time is chosen only at chance level (50 percent) after two repetitions of the sequence, but about 59 percent of the time after 12 repetitions. This suggests that the duration of the gaze can influence the choice. However, the manipulation did not work in the control experiment without gaze shift, as mentioned above, indicating that it is not mere exposure time but active gaze shift that made the difference.

In sum, the results indicate that active orienting by gaze shift is wired into the brain and that humans use it all the time, albeit subconsciously, Shimojo says. One example is our preference for good eye contact with people whom we are engaging in conversation.

"If I look directly into your eyes, then glance at your ears, you can immediately tell that I've broken eye contact, even if we're some distance apart," Shimojo explains. "This shows that there are subtle clues to what's in the mind."

In addition to Shimojo, the other authors are Claudiu Simion, a graduate student in biology at Caltech; Christian Scheier, a former postdoctoral researcher in Shimojo's lab; and Eiko Shimojo, of the School of Human Studies/Psychology at Bunkyo Gakuin University in Japan. Shinsuke Shimojo and Claudiu Simion contributed equally to the work. 

Writer: Robert Tindol

Gamma-Ray Bursts, X-Ray Flashes, and Supernovae Not As Different As They Appear

PASADENA, Calif.—For the past several decades, astrophysicists have been puzzling over the origin of powerful but seemingly different explosions that light up the cosmos several times a day. A new study this week demonstrates that all three flavors of these cosmic explosions--gamma-ray bursts, X-ray flashes, and certain supernovae of type Ic--are in fact connected by their common explosive energy, suggesting that a single type of phenomenon, the explosion of a massive star, is the culprit. The main difference between them is the "escape route" used by the energy as it flees from the dying star and its newly born black hole.

In the November 13 issue of the journal Nature, Caltech graduate student Edo Berger and an international group of colleagues report that cosmic explosions have pretty much the same total energy, but this energy is divided up differently between fast and slow jets in each explosion. This insight was made possible by radio observations, carried out at the National Radio Astronomy Observatory's Very Large Array (VLA), and Caltech's Owens Valley Radio Observatory, of a gamma-ray burst that was localized by NASA's High Energy Transient Explorer (HETE) satellite on March 29 of this year.

The burst, which at 2.6 billion light-years is the closest classical gamma-ray burst ever detected, allowed Berger and the other team members to obtain unprecedented detail about the jets shooting out from the dying star. The burst was in the constellation Leo.

"By monitoring all the escape routes, we realized that the gamma rays were just a small part of the story for this burst," Berger says, referring to the nested jet of the burst of March 29, which had a thin core of weak gamma rays surrounded by a slow and massive envelope that produced copious radio waves.

"This stumped me," Berger adds, "because gamma-ray bursts are supposed to produce mainly gamma rays, not radio waves!"

Gamma-ray bursts, first detected accidentally decades ago by military satellites watching for nuclear tests on Earth and in space, occur about once a day. Until now it was generally assumed that the explosions are so titanic that the accelerated particles rushing out in antipodal jets always give off prodigious amounts of gamma radiation, sometimes for hundreds of seconds. On the other hand, the more numerous supernovae of type Ic in our local part of the universe seem to be weaker explosions that produce only slow particles. X-ray flashes were thought to occupy the middle ground.

"The insight gained from the burst of March 29 prompted us to examine previously studied cosmic explosions," says Berger. "In all cases we found that the total energy of the explosion is the same. This means that cosmic explosions are beasts with different faces but the same body."

According to Shri Kulkarni, MacArthur Professor of Astronomy and Planetary Science at Caltech and Berger's thesis supervisor, these findings are significant because they suggest that many more explosions may go undetected. "By relying on gamma rays or X rays to tell us when an explosion is taking place, we may be exposing only the tip of the cosmic explosion iceberg."

The mystery we need to confront at this point, Kulkarni adds, is why the energy in some explosions chooses a different escape route than in others.

At any rate, adds Dale Frail, an astronomer at the VLA and coauthor of the Nature manuscript, astrophysicists will almost certainly make progress in the near future. In a few months NASA will launch a gamma-ray detecting satellite known as Swift, which is expected to localize about 100 gamma-ray bursts each year. Even more importantly, the new satellite will relay very accurate positions of the bursts within one or two minutes of initial detection.

The article appearing in Nature is titled "A Common Origin for Cosmic Explosions Inferred from Calorimetry of GRB 030329." In addition to Berger, the lead author, and Kulkarni and Frail, the other authors are Guy Pooley, of Cambridge University's Mullard Radio Astronomy Observatory; Vince McIntyre and Robin Wark, both of the Australia Telescope National Facility; Re'em Sari, associate professor of astrophysics and planetary science at Caltech; Derek Fox, a postdoctoral scholar in astronomy at Caltech; Alicia Soderberg, a graduate student in astrophysics at Caltech; Sarah Yost, a postdoctoral scholar in physics at Caltech; and Paul Price, a postdoctoral scholar at the University of Hawaii's Institute for Astronomy.

Writer: Robert Tindol

Atmospheric scientists still acquire samples the old-fashioned way--by flying up and getting them

PASADENA, Calif.—Just as Ishmael always returned to the high seas for whales after spending time on land, an atmospheric researcher always returns to the air for new data.

All scientific disciplines depend on the direct collection of data on natural phenomena to one extent or another. But atmospheric scientists still find it especially important to do some empirical data-gathering, and the best way to get what they need is by taking up a plane and more or less opening a window.

At the California Institute of Technology, where atmospheric science is a major interest involving researchers in several disciplines, the collection of data is considered important enough to justify the maintenance of a specially equipped plane dedicated to the purpose. In addition to the low-altitude plane, several Caltech researchers who need higher-altitude data are also heavy users of the jet aircraft maintained by NASA for its Airborne Science Program--a longstanding but relatively unsung initiative with aircraft based at the Dryden Flight Research Center in California's Mojave Desert.

"The best thing about using aircraft instead of balloons is that you are assured of getting your instruments back in working order," says Paul Wennberg, professor of atmospheric chemistry and environmental engineering science. Wennberg, whose work has been often cited in policy debates about the human impact on the ozone layer, often relies on the NASA suborbital platforms (i.e., various piloted and drone aircraft operating at mid to high altitudes) to collect his data.

Wennberg's experiments typically ride on the high-flying ER-2, which is a revamped reconnaissance U-2. The plane has room for the pilot only, which means that the experimental equipment has to be hands-free and independent of constant technical attention. Recently, Wennberg's group has made measurements from a reconfigured DC-8 that has room for some 30 passengers, depending on the scientific payload, but the operating ceiling is some tens of thousands of feet lower than that of the ER-2.

"The airplane program has been the king for NASA in terms of discoveries," Wennberg says. "Atmospheric science, and certainly atmospheric chemistry, is still very much an observational field. The discoveries we've made have not been by modeling, but by consistent surprise when we've taken up instruments and collected measurements."

In his field of atmospheric chemistry, Wennberg says the three foundations are laboratory work, synthesis and modeling, and observational data--the latter being still the most important.

"You might have hoped we'd be at the place where we could go to the field as a confirmation of what we did back in the lab or with computer programs, but that's not true. We go to the field and see things we don't understand."

Wennberg sometimes worries about the public perception of the value of the Airborne Science Program because the launching of a conventional jet aircraft is by no means as glamorous or romantic as the blasting off of a rocket from Cape Canaveral. By contrast, his own data-collection would appear to most as bread-and-butter work involving a few tried-and-true jet airplanes.

"If you hear that the program uses 'old technology,' this refers to the planes themselves and not the instruments, which are state-of-the-art," he says. "The platforms may be old, but it's really a vacuous argument to say that the program is in any way old.

"I would argue that the NASA program is a very cost-effective way to go just about anywhere on Earth and get data."

Chris Miller, who is a mission manager for the Airborne Science Program at the Dryden Flight Research Center, can attest to the range and abilities of the DC-8 by merely pointing to his control station behind the pilot's cabin. On his wall are mounted literally dozens of travel stick-ons from places around the world where the DC-8 passengers have done research. Included are mementos from Hong Kong, Singapore, New Zealand, Australia, Japan, Thailand, and Greenland, to name a few.

"In addition to atmospheric chemistry, we also collect data for Earth imaging, oceanography, agriculture, disaster preparedness, and archaeology," says Miller. "There can be anywhere from two or three to 15 experiments on a plane, and each experiment can be one rack of equipment to half a dozen."

Wennberg and colleagues Fred Eisele of the National Center for Atmospheric Research and Rick Flagan, who is McCollum Professor of Chemical Engineering, have developed special instrumentation to ride on the ER-2. One of their new instruments is a selected-ion chemical-ionization mass spectrometer, which is used to study the composition of atmospheric aerosols and the mechanisms that lead to their production.

Caltech's Nohl Professor and professor of chemical engineering, John Seinfeld, conducts an aircraft program that is a bit more down-to-earth, at least in the literal sense.

Seinfeld is considered perhaps the world's leading authority on atmospheric particles, or so-called aerosols--that is, all the stuff in the air, such as sulfur compounds and various other pollutants, not classifiable as a gas. Seinfeld and his associates primarily study atmospheric particles: their size, their composition, their optical properties, their effect on solar radiation, their effect on cloud formation, and ultimately their effect on Earth's climate.

"Professor Rick Flagan and I have been involved for a number of years in an aircraft program largely funded by the Office of Naval Research, and established jointly with the Naval Postgraduate School in Monterey. The joint program was given the acronym CIRPAS," says Seinfeld, explaining that CIRPAS, the Center for Interdisciplinary Remotely Piloted Aircraft Studies, acknowledges the Navy's interest in making certain types of environmental research amenable for drone aircraft like the Predator.

"The Twin Otter is our principal aircraft, and it's very rugged and dependable," he adds. "It's the size of a small commuter aircraft, and it's mind-boggling how much instrumentation we can pack in this relatively small aircraft."

Caltech scientists used the plane in July to study the effects of particles on the marine strata off the California coast, and the plane has also been to the Canary Islands, Japan, Key West, Florida, and other places. In fact, the Twin Otter can essentially be taken anywhere in the world.

One hot area of research these days, pardon the term, is the interaction of particulate pollution with radiation from the sun. This is important for climate research, because, if one looks down from a high-flying jet on a smoggy day, it becomes clear that a lot of sunlight is bouncing back and never reaching the ground. Changing atmospheric conditions therefore affect Earth's heat balance.

"If you change properties of clouds, then you change the climatic conditions on Earth," Seinfeld says. "Clouds are a major component in the planet's energy balance."

Unlike the ER-2, in which instrumentation must be contained in a small space, the Twin Otter can accommodate mass spectrometers and similar instruments for direct onboard logging and analysis of data. The data are streamed to the ground in real time, which means that the scientists can sit in the hangar and watch the data come in. Seinfeld himself is one of those on the ground, leaving the two scientist seats in the plane to those whose instruments may require in-flight attention.

"We typically fly below 10,000 feet because the plane is not pressurized. Most of the phenomena we want to study occur below this altitude," he says.

John Eiler, associate professor of geochemistry, is another user of the NASA Airborne Science Program, particularly the air samples returned by the ER-2. Eiler is especially interested these days in the global hydrogen budget, and how a hydrogen-fueled transportation infrastructure could someday affect the environment.

Eiler and Caltech professor of planetary science Yuk Yung, along with lead author Tracey Tromp and several others, issued a paper on the hydrogen economy in June that quickly became one of the most controversial Caltech research projects in recent memory. Using mathematical modeling, the group showed that the inevitable leakage of hydrogen in a hydrogen-fueled economy could impact the ozone layer.

More recently, Eiler and another group of collaborators, using samples returned by the ER-2 and analyzed by mass spectrometry, have reported further details on how hydrogen could affect the environment. Specifically, they capitalized on the ER-2's high-altitude capabilities to collect air samples in the only region of Earth where it's simple and straightforward to infer the precise cascade of reactions involving hydrogen and methane.

Though it seems contradictory, the Eiler team's conclusion from stratospheric research was that the hydrogen-eating microbes in soils can take care of at least some of the hydrogen leaked by human activity.

"This study was made possible by data collection," Eiler says. "So it's still the case in atmospheric chemistry that there's no substitute for going up and getting samples."

Writer: RT

Caltech, JPL researchers unveil details on new type of light detector based on superconductivity

PASADENA, Calif.—A new and improved way to measure light has been unveiled by physicists at the California Institute of Technology and the Jet Propulsion Laboratory. The technology exploits the strange but predictable characteristics of superconductivity, and has a number of properties that should lead to uses in a variety of fields, from medicine to astrophysics.

Reporting in the October 23 issue of Nature, Caltech physicist Jonas Zmuidzinas and his JPL colleagues outline the specifications of their superconducting detector. The device is cleverly designed to sidestep certain limitations imposed by nature to allow for very subtle and precise measurements of electromagnetic radiation, which includes visible light, radio signals, X-rays, and gamma rays, as well as infrared and ultraviolet frequencies.

At the heart of the detector is a strip of material that is cooled to such a low temperature that electrical current flows unimpeded—in other words, a superconductor. Scientists have known for some time that superconductors function as they do because of electrons in the material being linked together as "Cooper pairs" with a binding energy just right to allow current to flow with no resistance. If the material is heated above a certain temperature, the Cooper pairs are torn apart by thermal fluctuations, and the result is electrical resistance.

Zmuidzinas and his colleagues have designed their device to register the slight changes that occur when an incoming photon—the basic unit of electromagnetic radiation—interacts with the material and affects the Cooper pairs. The device can be made sensitive enough to detect individual photons, as well as their wavelengths (or color).

However, a steady current run through the superconducting material is not useful for measuring light, so the researchers have also figured out a way to measure the slight changes in the superconductor's properties caused by the breaking of Cooper pairs. By applying a high-frequency microwave field of about 10 gigahertz, a slight lag in the response due to the Cooper pairs can be measured. In fact, the individual frequencies of the photons can be measured very accurately with this method, which should provide a significant benefit to astrophysicists, as well as researchers in a number of other fields, Zmuidzinas says.
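
The energy bookkeeping behind that sensitivity can be sketched in a few lines of Python. The gap energy and efficiency below are assumed, order-of-magnitude values (roughly those of a low-temperature superconductor such as aluminum), not figures from the Nature paper; the point is simply that a single visible photon breaks thousands of Cooper pairs, which is what makes the change measurable.

    # Rough energy bookkeeping for a superconducting pair-breaking detector.
    # The gap energy and conversion efficiency are assumed, illustrative values,
    # not numbers taken from the paper.
    h = 6.626e-34        # Planck constant, J*s
    c = 2.998e8          # speed of light, m/s
    eV = 1.602e-19       # joules per electron-volt

    wavelength = 500e-9                           # a green optical photon, in meters
    photon_energy_eV = h * c / wavelength / eV    # about 2.5 eV

    gap_eV = 0.18e-3     # assumed superconducting gap, comparable to aluminum's
    efficiency = 0.57    # assumed fraction of photon energy that ends up breaking pairs

    pairs_broken = efficiency * photon_energy_eV / (2 * gap_eV)
    print(f"photon energy ~ {photon_energy_eV:.2f} eV")
    print(f"pairs broken  ~ {pairs_broken:,.0f}")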

"In astrophysics, this will give you lots more information from every photon you detect," he explains. "There are single-pixel detectors in existence that have similar sensitivity, but our new detector allows for much bigger arrays, potentially with thousands of pixels."

Such detectors could provide a very accurate means of measuring the fine details of the cosmic microwave background radiation (CMB). The CMB is the relic of the intense light that filled the early universe, detectable today as an almost uniform glow of microwave radiation coming from all directions.

Measurements of the CMB are of tremendous interest in cosmology today because of extremely faint variations in the intensity of the radiation that form an intricate pattern over the entire sky. These patterns provide a unique image of the universe as it existed just 300 thousand years after the Big Bang, long before the first galaxies or stars formed. The intensity variations are so faint, however, that it has required decades of effort to develop detectors capable of mapping them.

It was not until 1992 that the first hints of the patterns imprinted in the CMB by structure in the early universe were detected by the COBE satellite. In 2000, using new detectors developed at Caltech and JPL, the BOOMERANG experiment led by Caltech physicist Andrew Lange produced the first resolved images of these patterns. Other experiments, most notably the Cosmic Background Imager of Caltech astronomer Tony Readhead, have confirmed and extended these results to even higher resolution. The images obtained by these experiments have largely convinced the cosmology research community that the universe is geometrically flat and that the rapid inflation proposed by MIT physicist Alan Guth really occurred.

Further progress will help provide even more detailed images of the CMB—ideally, so detailed that individual fluctuations could be matched to primordial galaxies—as well as other information, including empirical evidence to determine whether the CMB is polarized. The new detector invented by Zmuidzinas and Henry G. LeDuc, a co-author of the paper, could be the breakthrough needed for the new generation of technology to study the CMB.

In addition, the new superconducting detector could be used to search the universe for dark matter, in X-ray astronomy for better analysis of black holes and other highly energetic phenomena, in medical scanning, in environmental science, and even in archaeology.

Other Caltech faculty are beginning to investigate these additional applications for the new detector. Assistant professor of physics Sunil Golwala is targeting dark-matter detection, while associate professor of physics and astronomy Fiona Harrison is pursuing X-ray astronomy applications.

The lead author of the paper is Peter Day, who earned his doctorate at Caltech under the direction of condensed-matter physicist David Goodstein and is now a researcher at JPL. In addition to LeDuc, also a researcher at JPL and leader of the JPL superconducting device group, the other authors are Ben Mazin and Anastasios Vayonakis, both Caltech graduate students working in Zmuidzinas's lab.

The work has been supported in part by NASA's Aerospace Technology Enterprise, the JPL Director's Research and Development Fund, the Caltech President's Fund, and Caltech trustee Alex Lidow.

Writer: Robert Tindol

Old Caltech Telescope Yields New Science

PASADENA, Calif. - Meet Sarah Horst, throwback. The planetary science major, a senior at the California Institute of Technology, spent six months engaged in a bit of old-time telescope observing. The work led to some breakthrough research about Saturn's moon Titan, and indirectly led to funding for a new telescope at Caltech's Palomar Observatory.

Horst, 21, was looking for a part-time job in the summer of her sophomore year, and was hired by Mike Brown, an associate professor of planetary astronomy. Brown and graduate student Antonin Bouchez knew there had been previous evidence of "weather" on Titan in the form of clouds. But that evidence was elusive. "Someone would look one year and think they saw a cloud, then look the next year and not see a cloud," explains Brown. "What we were after was a way to look at Titan, night after night after night."

The problem, of course, is that all of the large telescopes like Keck are incredibly busy, booked by astronomers from around the world who use the precious time for their own line of research. So Brown and Bouchez knew that obtaining large amounts of time for a single project like this was not going to happen.

The solution: use an old teaching telescope--the hoary 14-inch Celestron located on top of Caltech's Robinson Lab--to do cutting-edge science that couldn't be done at the largest telescopes in the world, in Hawaii.

Though the Robinson telescope is not powerful, and light pollution from Pasadena is strong enough to prevent imaging the clouds themselves, the light reflected by the clouds could still be measured (the more clouds, the more light is reflected). All that was needed was someone who could come night after night and take multiple images.

Enter Horst, the self-described "lowly undergraduate." For months, Horst spent her evenings in Robinson. "I did the setup, which involved a wheel that contained four light filters," she explains. Each filter would capture a different wavelength of light. Software switched the filters; all she had to do, says Horst, was orient and focus the telescope.

Now, modern-day astronomers have it relatively easy when using their telescope time. Sure they're up all night, but they sit on a comfortable chair in a warm room, hot coffee close at hand, and do their observing through a computer monitor that's connected to a telescope.

Not Horst. She did it the old way, in discomfort. "A lot of times in December or January I'd go in late at night, and it would be freezing," says Horst, who runs the 800-meter for the Caltech track team. "I'd wrap myself up in blankets." Horst spent hours in the dark, since the old dome itself had to be dark. "I couldn't even study," she says, "although sometimes I tried to read by the light of the moon."

A software program written by Bouchez plotted the light intensity from each image on a graph. When a particular image looked promising, Bouchez contacted Brown. As a frequent user of the Keck Observatory, which is powerful enough to take an image of the actual clouds, Brown was able to call colleagues who were using the Keck that night and quickly convince them that something exciting was going on. "It only took about ten minutes to get a quick image of Titan," says Brown. "The funny part was having to explain to them that we knew there were clouds because we had seen the evidence in our 14-inch telescope in the middle of the L.A. basin."

The result was "Direct Detection of Variable Tropospheric Clouds Near Titan's South Pole," which appeared in the December 19 issue of the journal Nature. It included this acknowledgement: "We thank . . . S. Horst for many nights of monitoring Titan in the cold."

The paper has helped Brown obtain funding to build a custom 24-inch telescope. It will be placed in its own building atop Palomar Mountain, on the grounds of Caltech's existing observatory. It will also be robotic; Brown will control the scope from Pasadena via a computer program he has written.

He'll use it for further observation of Titan and for other imaging, as well, such as fast-moving comets. "Most astronomy is big," notes Brown; "big scopes looking at big, unchanging things, like galaxies. I like to look at changing things, which led to this telescope."

What really made this project unique, though, according to Brown, is the Robinson scope. "Sarah was able to do something with this little telescope in Pasadena that no one in the world, on any of their larger professional telescopes on high, dark mountaintops, had been able to do," he says. "Sometimes a good idea and stubbornness are better than the largest telescope in town."

For Horst, while the work wasn't intellectually challenging--"a trained monkey could have done it," she says with a laugh--it was, nonetheless, "a cool project. Everything here is so theoretical and tedious, and so classroom oriented. So in that way it was a nice experience and reminded me what real science was about."

Writer: MW

Infrared space-based astronomy on front burner for Caltech employees at the SIRTF Science Center

The control room may lack the giant monitor one normally sees in movies about space exploration, but the new Space Infrared Telescope Facility (SIRTF) Science Center is an indication of how much has been achieved in the long quest to make space-based astrophysical observation an everyday affair.

The telescope is NASA's newest triumph in the ongoing quest to understand the universe by looking at it in the pristine view of outer space, above the atmosphere and moisture. Following the Hubble Space Telescope, the Compton Gamma-Ray Observatory, and the Chandra X-Ray Observatory, SIRTF is the fourth and final mission of NASA's Great Observatories Program--though by no means the last time NASA intends to launch an observatory into space.

Caltech's part in the program is the hosting of the center where observations will be chosen, the satellite's observations scheduled and programmed, and the scientific data processed and sent on to the scientists involved. In all, about 100 people are employed by the center, which has close ties with the Jet Propulsion Laboratory and the Infrared Processing Analysis Center (IPAC) next door.

The science center is located in an office building on the south side of the Caltech campus in Pasadena, and the third-floor control room affords the staff a rather nice view of a sunny Southern California boulevard with palm trees and the San Gabriel Mountains.

Now trailing Earth in a solar orbit slightly larger than Earth's annual orbit around the sun--and thus slightly slower--the telescope will soon begin returning images of the infrared sky as it has never been seen before. There are much larger infrared telescopes on the ground, but unlike SIRTF, they are hampered by water vapor and atmospheric distortion. There have also been infrared instruments in space, but none as big as SIRTF.

With such an instrument, astrophysicists hope to gain new insights about star formation, the center of galaxies, objects now so far away that they can only be imaged in the infrared, brown dwarfs so cool that they do not emit visible light, and perhaps even the extremely dim large bodies that may exist hitherto undetected at the fringes of our own solar system. Also, the telescope will be able to see light coming through many of the gas and dust clouds that are impenetrable to regular visible light.

Tom Soifer, the Caltech physics professor who has directed the center since its inception in 1997, admits he was nervous watching SIRTF being launched from Cape Canaveral in late August, but says he is pleased with the way things have gone so far.

In fact, the infrared telescope has already returned its first picture, an image of a star field. In SIRTF's case, the image was part of the "aliveness test" to demonstrate that the three scientific instruments on board all successfully survived the launch and subsequent deployment in solar orbit.

Soifer points out that the image (available at the Web site listed at the bottom of the page) was taken just a week after the telescope was launched. And even though the first image is a star field with no particular scientific interest, the bright, red stars give an inkling of the visual displays that will be made available to the public during SIRTF's five-year mission. In fact, subsequent images will be even better, because SIRTF's telescope wasn't even fully cooled or in focus when the "aliveness" image was snapped.

The imagery of SIRTF should be of ongoing interest to the public, because the remarkable images returned over the last 13 years by the Hubble Space Telescope have been quite popular. Also, the SIRTF operations will have more than a passing resemblance to those of Hubble, Soifer explains.

"SIRTF is very similar to the Hubble operation," he says. "Any astronomer in the world can get observing time if their proposal is selected, and you get grant money to fund your research if you work in the United States.

"Education and public outreach are important aspects of the science center, and we're working closely with JPL on public affairs activities."

Like the Hubble Space Telescope, SIRTF will look at a huge variety of objects, both in our own solar system and far beyond. In addition to the aforementioned items of interest, SIRTF will also look at giant molecular clouds, some of which contain organic molecules, and will look for signatures of planet formation around other stars.

Unlike the Hubble, however, SIRTF will have a more defined mission--or at least a more defined lifespan. The instruments must be cooled to an extremely low temperature with on-board cryogenic liquid, and this means that the bulk of the observations will end when the liquid coolant runs out. Minimizing heat and thereby conserving coolant is the reason for the unique orbit, which keeps the telescope well away from Earth, and conservation of the refrigerant is also the reason for the meticulous computer-aided planning that goes into all operations. The less the telescope has to move and switch between its three main instruments, the longer the coolant will last.

Soifer expects the mission to last for about five years. There's a chance that a few very limited observations could still be done for an additional four years, but SIRTF definitely comprises a beginning, a middle, and an end.

After that, SIRTF will be some 80 to 90 million miles away from Earth, and the distance will continue to increase. After about 60 years, Earth will catch up with SIRTF, but there would hardly be an advantage in rejuvenating the telescope at that point.

"It wouldn't be worth it," Soifer says. "The technologies are all advancing, so it would be better to go with that generation's technology. So there's a very clear end.

"In fact, the end will be approximately when I get to retirement age," he adds, smiling. "I'm not really sure how I feel about that."

Writer: RT

"Bubbloy" the latest invention from Caltech materials scientists

First there was liquid metal, that wondrous substance from Bill Johnson's materials science lab at Caltech that is now used for golf clubs and tennis rackets. Now a couple of Johnson's enterprising grad students have come up with a new invention--liquid metal foam.

According to Chris Veazey, who is working on his doctorate in materials science, the new stuff is a bulk metallic glass that has the stiffness of metal but the springiness of a trampoline. "You can squish it and the metal will spring back," says Veazey, who has given the stuff the tentative name "bubbloy," a combination of "bubble" and "alloy."

Greg Welsh, the co-inventor and also a doctoral student in materials science at Caltech, adds that bubbloy is made possible by a process that foams the alloy so that tiny bubbles form. Preliminary results show that if the bubbles nearly touch, the substance will be especially springy.

"We think it might be especially useful for the crumple zone of a car," says Veazey. "It should make a car safer than one where the structures in the crumple zone are made of conventional metals."

Bubbloy is made of palladium, nickel, copper, and phosphorus. This particular alloy was already known as one of the best bulk metallic glasses around, but Veazey and Welsh's contribution was figuring out how to get the stuff to foam. Other researchers have previously figured out how to foam metals like titanium and aluminum, but bubbloy will have big advantages in the strength-to-weight ratio.

How good is good? Veazey and Welsh's preliminary castings result in bubbloy that is light enough to float in water, yet quite strong and elastic.

"To make it really well is a challenge," Welsh says.

Bubbloy is one of several advances that will be showcased at a September 15 conference at Caltech. The conference, titled "Materials at the Fore," is the third annual meeting of the Center for the Science and Engineering of Materials at Caltech.

The day-long conference will begin with check-in and a continental breakfast at the Beckman Institute Courtyard on the west side of campus. Opening remarks and an overview of the conference will be presented at 8:30 a.m. by center director Julia Kornfield, a professor of chemical engineering at Caltech.

Presentations will include "Nano-scale Mechanical Properties," by Subra Suresh of MIT; "Synthesis and Assembly of Biological Macromolecules: DNA and Beyond," by Steve Quake of Caltech; "Thermoelectric Devices," by Sossina Haile of Caltech; and others.

Attendance is free but requires registration, and reporters are welcome to cover any or all of the presentations. Complete information is available at the Web site http://www.csem.caltech.edu/annrev/index.html.

Reporters who would like to attend are asked to contact Caltech Media Relations in advance.

Writer: Robert Tindol

Atmospheric researchers present new findings on the natural hydrogen cycle

Two months after a pivotal study on the potential impact of a future hydrogen economy on the environment, further evidence is emerging on what would happen to new quantities of hydrogen released into the atmosphere through human activity.

In an article appearing in the August 21 issue of the journal Nature, a group of researchers from the California Institute of Technology and other institutions reports results of a study of the atmospheric chemical reactions that produce and destroy molecular hydrogen in the stratosphere. Based on these results, the report concludes that most of the hydrogen eliminated from the atmosphere goes into the ground, and therefore that the scientific community will need to turn its focus toward soil destruction of hydrogen in order to accurately predict whether human emissions will accumulate in the air.

The researchers reached this conclusion through careful measurement of the abundance of a rare isotope of hydrogen known as deuterium. It has long been known that atmospheric molecular hydrogen is anomalously rich in deuterium, but it was unclear why. The only reasonable explanation seemed to be that atmospheric hydrogen is mostly destroyed by chemical reactions in the air, and that those reactions are relatively slow for deuterium-rich hydrogen, so it accumulates like salt in an evaporating pan of water.

If correct, this would mean that oxidizing atmospheric trace gases control the natural hydrogen cycle and that soils are relatively unimportant. The Caltech group discovered that one of the main natural sources of atmospheric hydrogen--the breakdown of methane--is actually responsible for the atmosphere's enrichment in deuterium. This result implies that reactions with atmospheric oxidants are relatively unimportant to the hydrogen cycle, and that uptake by soils is really in the driver's seat.

This issue is important because of the potential for a future hydrogen economy to leak hydrogen into the air--a scenario explored in the earlier study published in Science. Such leaks seem likely at present, and if they occur they must either be mitigated by natural processes that destroy hydrogen, or else the leaked hydrogen will accumulate in the atmosphere. If the latter, this hydrogen would inevitably find its way into the stratosphere and participate in chemical reactions that damage the ozone layer. The key to predicting how this chain of events will unfold is knowing what natural processes destroy hydrogen, and to what extent they might counteract increases in human emissions.

Hydrogen is a highly reactive element, but exactly when, where, and under what circumstances it reacts is difficult to know precisely. The question is simpler in the stratosphere, where it is easier to single out and understand specific reactions. According to John Eiler, an assistant professor of geochemistry at the California Institute of Technology and an author of both the new paper and the June paper in Science, the new data came from air samples collected in the stratosphere by one of the high-flying ER-2 planes operated by the NASA Dryden Flight Research Center in the Mojave Desert.

The ER-2, a reconfigured U-2 spy plane, is part of NASA's Airborne Research Program and is crucial to atmospheric chemists who collect stratospheric samples directly for air-quality research. The air samples collected by the ER-2 at various locales show an extreme enrichment of deuterium in stratospheric hydrogen.

"We wanted to look at hydrogen in the stratosphere because it's easy to study the production of hydrogen from methane separate from other influences," Eiler explains. "It may seem odd to go to the stratosphere to understand what's happening in the ground, but this was the best way to get a global perspective on the importance of soils to the hydrogen cycle."

With precise information on the deuterium content of hydrogen formed from methane, the researchers were able to calculate that the soil uptake of hydrogen is as high as 80 percent. It is suspected that this hydrogen is used by soil-living microbes to carry on their biological functions, although the details of this process are poorly understood and have been the subject of only a few previous studies.
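
The shape of that inference can be illustrated with a toy steady-state isotope budget (a sketch only; the delta values and fractionation factors below are placeholders, not the measurements reported by Rahn and colleagues). At steady state the deuterium carried away by the two sinks, soils and atmospheric oxidation, must balance the deuterium supplied by the sources, and that balance pins down the soil's share of the total sink.

```python
# Toy two-sink steady-state budget for atmospheric H2 and its deuterium.
# All numerical inputs are illustrative placeholders, NOT the values of
# Rahn et al.; only the structure of the calculation is the point.

def soil_sink_fraction(delta_source, delta_atm, alpha_soil, alpha_oh):
    """Fraction of H2 removed by soils in a two-sink steady state.

    Balance for the rare isotope: R_source = R_atm * (f_soil * alpha_soil
    + (1 - f_soil) * alpha_oh), where R is a D/H ratio and the alphas are
    the kinetic fractionation factors of the two sinks. Solve for f_soil.
    """
    ratio = (1.0 + delta_source / 1000.0) / (1.0 + delta_atm / 1000.0)
    return (ratio - alpha_oh) / (alpha_soil - alpha_oh)

# Placeholder per-mil delta-D values and fractionation factors:
f_soil = soil_sink_fraction(delta_source=-20.0, delta_atm=130.0,
                            alpha_soil=0.94, alpha_oh=0.57)
print(f"implied soil share of the H2 sink: {f_soil:.0%}")
```

With these placeholder inputs the toy budget happens to land near the 80 percent figure quoted above, but the point is the form of the balance, not the particular numbers.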

It seems likely that the hydrogen taken up by soils is relatively free of environmental consequences, but the question still remains how much more hydrogen the soil can consume. If future use of hydrogen in transportation results in a significant amount of leakage, then soil uptake must increase dramatically or it will be inadequate to cleanse the released hydrogen from the atmosphere, Eiler says.

"An analogy would be the discovery that trees and other plants get rid of some of the carbon dioxide that cars emit, but by no means all of it," he says. "So the question as we look toward a future hydrogen economy is whether the microbes will be able to eat the hydrogen fast enough."

The research was funded in part by the National Science Foundation. Bruce Doddridge, program director in the NSF's division of atmospheric science, said, "This carefully conducted research investigating the natural chemistry of sources and sinks affecting the abundance of molecular hydrogen in the troposphere results in the most accurate information to date, and appears to account for the tropospheric deuterium excess previously observed.

"A more accurate molecular hydrogen budget may have important implications as global fuel technology shifts its focus from fossil fuels to other sources," Doddridge added.

The lead author of the paper is Thom Rahn, a former postdoctoral scholar of Eiler's who is now affiliated with Los Alamos National Laboratory. The other authors are Paul Wennberg, a professor of atmospheric chemistry and environmental engineering science at Caltech; Kristie A. Boering and Michael McCarthy, both of UC Berkeley; Stanley Tyler of UC Irvine; and Sue Schauffler of the National Center for Atmospheric Research in Boulder, Colorado.

In addition to the NSF, other supporters of the research were the Davidow Fund and General Motors Corp., the David and Lucile Packard Foundation, the NASA Upper Atmosphere Research Program, and the National Center for Atmospheric Research.

Writer: 
Robert Tindol

Gravity Variations Predict Earthquake Behavior

PASADENA, Calif. — In trying to predict where earthquakes will occur, few people would think to look at Earth's gravity field. What does the force that causes objects to fall to the ground and the moon to orbit around the earth have to do with the unpredictable ground trembling of an earthquake?

Now, researchers at the California Institute of Technology have found that within subduction zones, the regions where one of the earth's plates slips below another, areas where the attraction due to gravity is relatively high are less likely to experience large earthquakes than areas where the gravitational force is relatively low.

The study, by Caltech graduate student Teh-Ru Alex Song and Associate Professor of Geophysics Mark Simons, will appear in the August 1 issue of the journal Science.

Until now, says Simons, researchers studying earthquake behavior generally took one of four approaches: 1) analyzing seismograms generated by earthquakes, 2) studying the frictional properties of various types of rock in the laboratory or in the field, 3) measuring the slow accumulation of strain between earthquakes with survey techniques, or 4) building large-scale dynamic models of earthquakes and tectonics.

Instead of using one of these approaches, Song and Simons considered variations in the gravity field as a predictor of seismic behavior.

A gravity anomaly occurs when gravity is stronger or weaker than the regional average. For example, a mountain or an especially dense body of rock would tend to increase the nearby gravity field, creating a positive anomaly. Conversely, a valley would tend to create a negative anomaly.

Song and Simons examined existing data from satellite-derived observations of the gravity field in subduction zones. Comparing variations in gravity along the trenches with earthquake data from two different catalogs going back 100 years, the team found that, within a given subduction zone, areas with negative gravity anomalies correlated with increased large earthquake activity. Areas with relatively high gravity anomalies experienced fewer large earthquakes.

In addition, most of the energy released in earthquakes was in areas of low gravity. The team looked at subduction zone earthquakes with magnitude greater than 7.5 since 1976. They found that of the total energy released in those earthquakes, 44 percent came from regions with the most strongly negative gravity anomalies, though these regions made up only 14 percent of the total area.
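
The 44-percent-from-14-percent comparison is essentially an energy bookkeeping exercise. The sketch below shows one way such a tally could be made using the standard Gutenberg-Richter energy-magnitude relation; the miniature catalog and the anomaly cutoff are invented for illustration and are not the data or thresholds used in the Science paper.

```python
# Sketch of energy bookkeeping by gravity-anomaly class. The tiny catalog
# below is synthetic; the actual study used global earthquake catalogs and
# satellite-derived gravity grids over subduction zones.

def seismic_energy_joules(magnitude):
    # Gutenberg-Richter energy-magnitude relation: log10(E) = 1.5*M + 4.8 (E in J)
    return 10.0 ** (1.5 * magnitude + 4.8)

# (moment magnitude, gravity anomaly in milligals) -- made-up example events
catalog = [(7.6, -60.0), (8.1, -45.0), (7.8, 10.0), (7.5, 35.0), (7.9, -70.0)]

cutoff = -40.0  # hypothetical threshold for "strongly negative" anomalies
low_gravity_energy = sum(seismic_energy_joules(m) for m, g in catalog if g < cutoff)
total_energy = sum(seismic_energy_joules(m) for m, _ in catalog)
print(f"share of energy from strongly negative-anomaly areas: "
      f"{low_gravity_energy / total_energy:.0%}")
```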

Song and Simons also compared the location of large earthquakes with the topography of the subduction zones, finding that areas of low topography (such as basins) also corresponded well to areas with low gravity and high seismic activity.

So why would gravity and topography be related to seismic activity?

One possible link is via the frictional behavior of the fault. When two plates rub up against each other, friction between the plates makes it harder for them to slide. If the friction is great enough, the plates will stick. Over long periods of time, as the stuck plates push against each other, they may deform, creating spatial variations in topography and gravity.

In addition to deforming the plates, friction causes stress to build up. When too much stress builds up, the plates will suddenly jump, releasing the strain in the sometimes violent shaking of an earthquake.

If there were no friction between the plates, they would just slide right by each other smoothly, without bending or building up the strain that eventually results in earthquakes.

So in subduction zones, areas under high stress are likely to have greater gravity and topography anomalies, and are also more likely to have earthquakes.

Though this account provides a basic explanation for a rather complicated and unintuitive phenomenon, it is a simplified view, and Song and Simons would like to do more work to refine the details of the relation between the gravity field and large earthquakes.

The gravity anomalies the team considered take a long time to build up and change very little over timescales of at least a million years. Short-term events such as earthquakes do change the gravity field as the earth's plates suddenly move, but those variations are small compared with the long-term anomalies, which are on the order of 4 x 10^-4 m/s^2 (about 40 milligals).

Because topography and gravity variations persist over periods of time much longer than the typical time between earthquakes (100 to 1,000 years), large earthquakes should be consistently absent from areas with large positive gravity anomalies, say Song and Simons.

"This study makes a strong connection between long-term tectonic behavior and short-term seismic activity," says Simons, "and thereby provides a class of new observations for understanding earthquake dynamics."

Though no one can tell when or where the next major earthquake will occur, Global Positioning System measurements can show where strain is accumulating. Simons hopes to use such measurements to test the prediction that areas with high gravity will have low strain, and vice versa. The team points out that although large earthquakes occur where gravity and topography are low, there are low-gravity areas in subduction zones with no seismic activity. Furthermore, the research concentrates on subduction zones, and so makes no predictions about other types of faults.

Nonetheless, within a subduction zone known to be earthquake-prone, Simons believes earthquakes are more likely to occur in low-gravity zones, while high-gravity areas tend to have few earthquakes. So while the research does not offer a way to predict where earthquakes will happen, it can predict where they won't happen, says Simons.

MEDIA CONTACT: Ernie Tretkoff (626) 395-8733 tretkoff@caltech.edu

Visit the Caltech media relations web site: http://pr.caltech.edu/media

Writer: 
ET
