Discovery of giant planar Hall effect could herald a generation of "spintronics" devices

A basic discovery in magnetic semiconductors could result in a new generation of devices for sensors and memory applications -- and perhaps, ultimately, quantum computation -- physicists from the California Institute of Technology and the University of California at Santa Barbara have announced.

The new phenomenon, called the giant planar Hall effect, has to do with what happens when the spins of current-carrying electrons are manipulated. For several years scientists have been engaged in exploiting electron spin for the creation of a new generation of electronic devices -- hence the term "spintronics" -- and the Caltech-UCSB breakthrough offers a new route to realizing such devices.

The term "spintronics" is used instead of "electronics" because the technology is based on a new paradigm, says Caltech physics professor Michael Roukes. Rather than merely using an electric current to make them work, spintronic devices will also rely on the magnetic orientation (or spin) of the electrons themselves. "In regular semiconductors, the spin freedom of the electrical current carriers does not play a role," says Roukes. "But in the magnetic semiconductors we've studied, the spin polarization -- that is, the magnetism -- of electrical current carriers is highly ordered. Consequently, it can act as an important factor in determining the current flow in the electrical devices."

In the naturally unpolarized state, there is no particular order between one electron's spin and its neighbor's. If the spins are aligned, the result can be a change in resistance to current flow.

Such changes in resistance have long been known in metals, but the current research marks the first time a semiconductor material has been constructed in such a way that the spin-charge interaction is manifested as a very dramatic change in resistivity. The Caltech-UCSB team accomplished this by carefully preparing a ferromagnetic semiconductor material made of gallium manganese arsenide (GaMnAs). Current technology, by contrast, relies on sandwiched magnetic metal structures widely used for magnetic storage.

"You have much more freedom with semiconductors than metals for two reasons," Roukes explains. "First, semiconductor material can be made compatible with the mainstream of semiconductor electronics; and second, there are certain phenomena in semiconductors that have no analogies in metals."

Practical applications of spintronics will likely include new paradigms in information storage, due to the superiority of such semiconductor materials to the currently available dynamic random access memory (or DRAM) chips. This is because the semiconductor spintronics would be "nonvolatile," meaning that once the spins were aligned, the system would be as robust as a metal bar that has been permanently magnetized.

The spintronics semiconductors could also conceivably be used in magnetic logic to replace transistors as switches in certain applications. In other words, spin alignment would be used as a logic gate for faster circuits with lower energy usage.

Finally, the technology could possibly be improved so that the quantum states of the spins themselves might be used for logic gates in future quantum computers. Several research teams have demonstrated quantum logic gates, but the setups are the size of an entire laboratory rather than chip scale, and are therefore still unsuitable for device integration. By contrast, a spintronics-based device might be constructed as a solid-state system that could be integrated into microchips.

A full description of the Caltech-UCSB team's work appeared in the March 14 issue of Physical Review Letters [Tang et al., Vol. 90, 107201 (2003)]. The article is available by subscription through the journal's website. The discovery is also featured in the "News and Views" section of the forthcoming issue of Nature Materials.

Contact: Robert Tindol (626) 395-3631


Science begins for LIGO in quest to detect gravitational waves

Armed with one of the most advanced scientific instruments of all time, physicists are now watching the universe intently for the first evidence of gravitational waves. First predicted by Albert Einstein in 1916 as a consequence of the general theory of relativity, gravitational waves have never been detected directly.

In Einstein's theory, alterations in the shape of concentrations of mass (or energy) have the effect of warping space-time, thereby causing distortions that propagate through the universe at the speed of light. A new generation of detectors, led by the Laser Interferometer Gravitational-Wave Observatory (LIGO), is coming into operation and promises sensitivities that will be capable of detecting a variety of catastrophic events, such as the gravitational collapse of stars or the coalescence of compact binary systems.

The commissioning of LIGO and improvements in the sensitivity are coming very rapidly, as the final interferometer systems are implemented and the limiting noise sources are uncovered and mitigated. In fact, the commissioning has made such rapid progress that LIGO is already capable of performing some of the most sensitive searches ever undertaken for gravitational waves. A similar device in Hannover, Germany (a German–U.K. collaboration known as GEO) is also getting underway, and these instruments are being used together as the initial steps in building a worldwide network of gravitational-wave detectors.

The first data was taken during a 17-day data run in September 2002. That data has now been analyzed for the presence of gravitational waves, and results are being presented at the American Physical Society meeting in Philadelphia. No sources have yet been detected, but new limits on gravitational radiation from such sources as binary neutron star inspirals, selected pulsars in our galaxy and background radiation from the early universe, are reported.

Realistically, detections are not expected at the present sensitivities. A second data run is now underway with significantly better sensitivity, and further improvements are expected over the next couple of years.

As the initial LIGO interferometers start to put new limits on gravitational-wave signals, the LIGO Lab, the LIGO Scientific Collaboration, and international partners are proposing an advanced LIGO to improve the sensitivity by more than a factor of 10 beyond the goals of the present instrument. It is anticipated that this new instrument may see gravitational-wave sources as often as daily, with excellent signal strengths, allowing details of the waveforms to be read off and compared with theories of neutron stars, black holes, and other highly relativistic objects. The improvement in sensitivity will allow the one-year planned observation time of the initial LIGO to be equaled in a matter of hours. The National Science Foundation has supported LIGO, and a collaboration between Caltech and MIT was responsible for its construction. A scientific community of more than 400 scientists from around the world is now involved in research at LIGO.


Caltech applied physicists invent waveguide to bypass diffraction limits for new optical devices

Four hundred years ago, a scientist could peer into one of the newfangled optical microscopes and see microorganisms, but nothing much smaller. Nowadays, a scientist can look in the latest generation of lens-based optical microscopes and also see, well, microorganisms, but nothing much smaller. The limiting factor has always been a fundamental property of the wave nature of light that fuzzes out images of objects much smaller than the wavelength of the light that illuminates those objects. This has hampered the ability to make and use optical devices smaller than the wavelength. But a new technological breakthrough at the California Institute of Technology could sidestep this longstanding barrier.

Caltech applied physicist Harry Atwater and his associates have announced their success in creating "the world's smallest waveguide, called a plasmon waveguide, for the transport of energy in nanoscale systems." In essence, they have created a sort of "light pipe" constructed of a chain-array of several dozen microscopic metal slivers that allows light to hop along the chain and circumvent the diffraction limit. With such technology, there is the clear possibility that optical components can be constructed for a huge number of technological applications in which the diffraction limit is troublesome.

"What this represents is a fundamentally new approach for optical devices in which diffraction is not a limit," says Atwater.

Because the era of nanoscale devices is rapidly approaching, Atwater says, the future bodes well for extremely tiny optical devices that, in theory, would be able to connect to molecules and someday even to individual atoms.

At present, the Atwater team's plasmon waveguide looks something like a standard glass microscope slide. Fabricated on the glass plate by means of electron beam lithography is a series of nanoparticles, each about 30 nanometers (30 billionths of a meter, in other words) in width, about 30 nanometers in height, and about 90 nanometers in length. These etched "rods" are arranged in a parallel series like railroad ties, with such a tiny space between them that light energy can move along with very little radiated loss.

Therefore, if light with a wavelength of 590 nanometers, for example, passes through the nanoparticles, the light is confined to the smaller dimensions of the nanoparticles themselves. The light energy then "hops" between the individual elements in a process known as dipole-dipole coupling, at a rate of propagation considerably slower than the speed of light in a vacuum.

In addition to their functionality as miniature optical waveguides, these structures are also sensitive to the presence of biomolecules. Thus, a virus or even a single molecule of nerve gas could conceivably be detected with an optical device designed for biowarfare sensing. The potential applications include electronic devices that could detect single molecules of a pathogen, for example.

The ultrasmall waveguide could also be used to optically interconnect to electronic devices, because individual transistors on a microchip are already too small to be seen in a conventional optical microscope.

A description of the device will appear in the April 2003 issue of the journal Nature Materials. The other Caltech authors of the paper were Stefan A. Maier, a former graduate student and now postdoctoral researcher at Caltech, who was responsible for the working device, and Pieter G. Kik, also a postdoctoral researcher. Other authors were Sheffer Meltzer, Elad Harel, Bruce E. Koel, and Ari A.G. Requicha, all from the University of Southern California.

The nanoparticle structures were fabricated at the Jet Propulsion Laboratory's facility for electron beam lithography, with the help of JPL employees Richard Muller, Paul Maker, and Pierre Echternach.

The research was sponsored by the Air Force Office of Scientific Research and was also supported in part by grants from the National Science Foundation and Caltech's Center for Science and Engineering of Materials.



Quick action by astronomers worldwide leads to new insights on mysterious gamma-ray bursts

Scientists "arriving quickly on the scene" of an October 4 gamma-ray burst have announced that their rapid accumulation of data has provided new insights about this exotic astrophysical phenomenon. The researchers have seen, for the first time, ongoing energizing of the burst afterglow for more than half an hour after the initial explosion.

The findings support the "collapsar" model, in which the core of a star 15 times more massive than the sun collapses into a black hole. The black hole's spin, or magnetic fields, may be acting like a slingshot, flinging material into the surrounding debris.

The prompt observation—and by far the most detailed to date—was made possible by several ground- and space-based observatories operating in tandem. The blast was initially detected by NASA's High-Energy Transient Explorer (HETE) satellite, and follow-up observations were quickly undertaken using ground-based robotic telescopes and fast-thinking researchers around the globe. The results are reported in the March 20 issue of the journal Nature.

"If a gamma-ray burst is the birth cry of a black hole, then the HETE satellite has just allowed us into the delivery room," said Derek Fox, a postdoctoral researcher at the California Institute of Technology and lead author of the Nature paper. Fox discovered the afterglow, or glowing embers of the burst, using the Oschin 48-inch telescope located at Caltech's Palomar Observatory.

Gamma-ray bursts shine hundreds of times brighter than a supernova, or as bright as a million trillion suns. The mysterious bursts are common, yet random and fleeting. The gamma-ray portion of a burst typically lasts from a few milliseconds to a couple of minutes. An afterglow, caused by shock waves from the explosion sweeping up matter and ramming it into the region around the burst, can linger for much longer, releasing energy in X rays, visible light, and radio waves. It is from the studies of such afterglows that astronomers can hope to learn more about the origins and nature of these extreme cosmic explosions.

This gamma-ray burst, called GRB021004, appeared on October 4, 2002, at 8:06 a.m. EDT. Seconds after HETE detected the burst, an e-mail providing accurate coordinates was sent to observatories around the world, including Caltech's Palomar Observatory. Fox pinpointed the afterglow shortly afterward from images captured by the Oschin Telescope within minutes of the burst, and notified the astronomical community through a rapid e-mail system operated by NASA for the follow-up studies of gamma-ray bursts. Then the race was on, as scientists in California, across the Pacific, Australia, Asia, and Europe employed more than 50 telescopes to zoom in on the afterglow before the approaching sunrise.

At about the same time, the afterglow was detected by the Automated Response Telescope (ART) in Japan, a 20-centimeter instrument located in Wako, a Tokyo suburb, and operated by the Japanese research institute RIKEN. The ART started observing the region a mere 193 seconds after the burst, but it took a few days for these essential observations to be properly analyzed and distributed to the astronomical community.

Analysis of these rapid observations produced a surprise: fluctuations in brightness, which scientists interpreted as evidence of a continued injection of energy into the afterglow, well after the burst occurred. According to Shri Kulkarni, the MacArthur Professor of Astronomy and Planetary Science at Caltech, the newly observed energizing of the burst afterglow indicates that the power must have been provided by whatever object produced the gamma-ray burst itself.

"This ongoing energy shows that the explosion is not a simple, one-time event, but that the central source lives for a longer time," said Kulkarni, a co-author of the Nature paper. "This is bringing us closer to a full understanding of these remarkable cosmic flashes."

Added Fox, "In the past we used to be impressed by the energy release in gamma-rays alone. These explosions appear to be more energetic than meets the eye."

Later radio observations undertaken at the Very Large Array in New Mexico and other radio telescopes, including Caltech's Owens Valley Radio Observatory and the IRAM millimeter telescope in France, lend further support to the idea that the explosions continued increasing in energy. "Whatever monster created this burst just refused to die quietly," said D. A. Frail, co-author and a staff astronomer at the Very Large Array.

Fox and his colleagues relied on data from the RIKEN telescope, in Japan, and from the Palomar Oschin Telescope and its Near Earth Asteroid Tracking (NEAT) camera, an instrument that has been roboticized and is currently managed by a team of astronomers at JPL led by Steven Pravdo. The collaboration of the Caltech astronomers and the NEAT team has proven extremely fruitful for the global astronomical community, helping to identify fully 25 percent of the afterglows discovered worldwide since Fox retrofitted the telescope software for this new task in the autumn of 2001.

HETE is the first satellite to provide and distribute accurate burst locations within seconds. The principal investigator for the HETE satellite is George Ricker of the Massachusetts Institute of Technology. HETE was built as a "mission of opportunity" under the NASA Explorer Program, a collaboration among U.S. universities, Los Alamos National Laboratory, and scientists and organizations in Brazil, France, India, Italy, and Japan.




Caltech computer scientists develop FAST protocol to speed up Internet

Caltech computer scientists have developed a new data transfer protocol for the Internet fast enough to download a full-length DVD movie in less than five seconds.

The protocol is called FAST, standing for Fast Active queue management Scalable Transmission Control Protocol (TCP). The researchers have achieved a speed of 8,609 megabits per second (Mbps) by using 10 simultaneous flows of data over routed paths, the largest aggregate throughput ever accomplished in such a configuration. More importantly, the FAST protocol sustained this speed using standard packet size, stably over an extended period on shared networks in the presence of background traffic, making it adaptable for deployment on the world's high-speed production networks.
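As a back-of-the-envelope check on the DVD claim, the transfer time follows directly from the reported throughput. The 4.7 GB figure below is an assumed capacity for a standard single-layer DVD; the announcement does not specify a disc size.

```python
# Rough check of the "DVD in under five seconds" claim.
# 4.7 GB is an assumed single-layer DVD capacity (not stated in the article).
dvd_bytes = 4.7e9                     # assumed DVD capacity in bytes
throughput_bps = 8609e6               # reported aggregate throughput, 10 flows

transfer_time_s = dvd_bytes * 8 / throughput_bps
print(f"Transfer time: {transfer_time_s:.1f} s")
```

At the reported aggregate rate this works out to well under five seconds, consistent with the claim.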

The experiment was performed last November during the Supercomputing Conference in Baltimore, by a team from Caltech and the Stanford Linear Accelerator Center (SLAC), working in partnership with the European Organization for Nuclear Research (CERN), and the organizations DataTAG, StarLight, TeraGrid, Cisco, and Level(3).

The FAST protocol was developed in Caltech's Networking Lab, led by Steven Low, associate professor of computer science and electrical engineering. It is based on theoretical work done in collaboration with John Doyle, a professor of control and dynamical systems, electrical engineering, and bioengineering at Caltech, and Fernando Paganini, associate professor of electrical engineering at UCLA. It builds on work from a growing community of theoreticians interested in building a theoretical foundation of the Internet, an effort in which Caltech has been playing a leading role.

Harvey Newman, a professor of physics at Caltech, said the FAST protocol "represents a milestone for science, for grid systems, and for the Internet."

"Rapid and reliable data transport, at speeds of one to 10 Gbps and 100 Gbps in the future, is a key enabler of the global collaborations in physics and other fields," Newman said. "The ability to extract, transport, analyze and share many Terabyte-scale data collections is at the heart of the process of search and discovery for new scientific knowledge. The FAST results show that the high degree of transparency and performance of networks, assumed implicitly by Grid systems, can be achieved in practice. In a broader context, the fact that 10 Gbps wavelengths can be used efficiently to transport data at maximum speed end to end will transform the future concepts of the Internet."

Les Cottrell of SLAC added that progress in speeding up data transfers over long distances is critical to progress in various scientific endeavors. "These include sciences such as high-energy physics and nuclear physics, astronomy, global weather predictions, biology, seismology, and fusion; and industries such as aerospace, medicine, and media distribution.

"Today, these activities often are forced to share their data using literally truck or plane loads of data," Cottrell said. "Utilizing the network can dramatically reduce the delays and automate today's labor intensive procedures."

The ability to demonstrate efficient, high-performance throughput using commercial off-the-shelf hardware and applications, with standard Internet packet sizes supported throughout today's networks, and with modifications to the ubiquitous TCP protocol required only at the data sender, is an important achievement.

With Internet speeds doubling roughly annually, the performance demonstrated by this collaboration can be expected to become commonly available within the next few years, so the demonstration is important for setting expectations, for planning, and for indicating how to utilize such speeds.

The testbed used in the Caltech/SLAC experiment was the culmination of a multi-year effort, led by Caltech physicist Harvey Newman's group on behalf of the international high energy and nuclear physics (HENP) community, together with CERN, SLAC, Caltech Center for Advanced Computing Research (CACR), and other organizations. It illustrates the difficulty, ingenuity and importance of organizing and implementing leading edge global experiments. HENP is one of the principal drivers and co-developers of global research networks. One unique aspect of the HENP testbed is the close coupling between R&D and production, where the protocols and methods implemented in each R&D cycle are targeted, after a relatively short time delay, for widespread deployment across production networks to meet the demanding needs of data intensive science.

The congestion control algorithm of the current Internet was designed in 1988 when the Internet could barely carry a single uncompressed voice call. The problem today is that this algorithm cannot scale to anticipated future needs, when the networks will be compelled to carry millions of uncompressed voice calls on a single path or support major science experiments that require the on-demand rapid transport of gigabyte to terabyte data sets drawn from multi-petabyte data stores. This protocol problem has prompted several interim remedies, such as using nonstandard packet sizes or aggressive algorithms that can monopolize network resources to the detriment of other users. Despite years of effort, these measures have proved to be ineffective or difficult to deploy.

They are, however, critical steps in our evolution toward ultrascale networks. Sustaining high performance on a global network is extremely challenging and requires concerted advances in both hardware and protocols. Experiments that achieve high throughput either in isolated environments or using interim remedies that bypass protocol instability, idealized or fragile as they may be, push the state of the art in hardware and demonstrate its performance limits. Development of robust and practical protocols will then allow us to make effective use of the most advanced hardware to achieve ideal performance in realistic environments.

The FAST team addresses the protocol issues head-on to develop a variant of TCP that can scale to a multi-gigabit-per-second regime in practical network conditions. The integrated approach that combines theory, implementation, and experiment is what makes their research unique and fundamental progress possible.

Using the standard packet size supported throughout today's networks, the current TCP typically achieves an average throughput of 266 Mbps, averaged over an hour, with a single TCP/IP flow between Sunnyvale, near SLAC, and CERN in Geneva, over a distance of 10,037 kilometers. This represents an efficiency of just 27 percent. FAST TCP sustained an average throughput of 925 Mbps and an efficiency of 95 percent, a 3.5-fold improvement, under the same experimental conditions. With 10 concurrent TCP/IP flows, FAST achieved an unprecedented speed of 8,609 Mbps, at 88 percent efficiency; that is 153,000 times the speed of today's modem and close to 6,000 times that of the common standard for ADSL (Asymmetric Digital Subscriber Line) connections.
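Since efficiency is achieved throughput divided by the path's bottleneck capacity, the reported figures imply the link capacities involved. A minimal sketch of that arithmetic (the throughput/efficiency pairs are taken from the paragraph above; the capacity interpretation is inferred, not stated):

```python
# Implied bottleneck capacity = throughput / efficiency, for each reported case.
cases = {
    "standard TCP, 1 flow": (266, 0.27),   # (Mbps, efficiency)
    "FAST TCP, 1 flow": (925, 0.95),
    "FAST TCP, 10 flows": (8609, 0.88),
}
implied = {name: mbps / eff for name, (mbps, eff) in cases.items()}
for name, capacity in implied.items():
    print(f"{name}: ~{capacity:,.0f} Mbps implied capacity")
```

The single-flow cases imply a roughly 1 Gbps bottleneck, while the 10-flow case implies close to 10 Gbps, consistent with Gigabit Ethernet servers feeding the 10 Gbps link described below.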

The 10-flow experiment sets another first in addition to the highest aggregate speed over routed paths. It is the combination of high capacity and large distance that causes performance problems. Different TCP algorithms can be compared using the product of achieved throughput and the distance of transfer, measured in bit-meter-per-second, or bmps. The world record for the current TCP is 10 peta (1 followed by 16 zeros) bmps, using a nonstandard packet size. The Caltech/SLAC experiment transferred 21 terabytes over six hours between Baltimore and Sunnyvale using standard packet size, achieving 34 peta bmps. Moreover, data was transferred over shared research networks in the presence of background traffic, suggesting that FAST can be backward compatible with the current protocol. The FAST team has started to work with various groups around the world to explore testing and deploying FAST TCP in communities that need multi-Gbps networking urgently.
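The 34-peta-bmps figure can be reproduced from the numbers above. The path length used here is an assumption, since the article gives only the endpoints; terrestrial fiber routes between Baltimore and Sunnyvale run longer than the roughly 3,900 km great-circle distance.

```python
# Throughput-distance product for the 10-flow, six-hour run.
data_bits = 21e12 * 8          # 21 terabytes transferred
duration_s = 6 * 3600          # over six hours
distance_m = 4.4e6             # assumed network path length, ~4,400 km

throughput_bps = data_bits / duration_s
product_bmps = throughput_bps * distance_m
print(f"Sustained throughput: {throughput_bps / 1e9:.2f} Gbps")
print(f"Throughput-distance product: {product_bmps / 1e15:.0f} peta bmps")
```

The sustained rate comes out near 7.8 Gbps, and with the assumed path length the product lands at the reported 34 peta bmps.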

The demonstrations used a 10 Gbps link donated by Level(3) between StarLight (Chicago) and Sunnyvale, as well as the DataTAG 2.5 Gbps link between StarLight and CERN, the Abilene backbone of Internet2, and the TeraGrid facility. The network routers and switches at StarLight and CERN were used together with a GSR 12406 router loaned by Cisco at Sunnyvale, additional Cisco modules loaned at StarLight, and sets of dual Pentium 4 servers each with dual Gigabit Ethernet connections at StarLight, Sunnyvale, CERN, and the SC2002 show floor provided by Caltech, SLAC, and CERN. The project is funded by the National Science Foundation, the Department of Energy, the European Commission, and the Caltech Lee Center for Advanced Networking.

One of the drivers of these developments has been the HENP community, whose explorations at the high-energy frontier are breaking new ground in our understanding of the fundamental interactions, structures, and symmetries that govern the nature of matter and space-time in our universe. The largest HENP projects each encompass 2,000 physicists from 150 universities and laboratories in more than 30 countries.

Rapid and reliable data transport, at speeds of 1 to 10 Gbps and 100 Gbps in the future, is a key enabler of the global collaborations in physics and other fields. The ability to analyze and share many terabyte-scale data collections, accessed and transported in minutes, on the fly, rather than over hours or days as is the current practice, is at the heart of the process of search and discovery for new scientific knowledge. Caltech's FAST protocol shows that the high degree of transparency and performance of networks, assumed implicitly by Grid systems, can be achieved in practice.

This will drive scientific discovery and utilize the world's growing bandwidth capacity much more efficiently than has been possible until now.


Caltech applied physicists create ultrahigh-Q microcavity on a silicon chip

In an advance that holds promise for integrating previously disparate functions on a chip, applied physicists at the California Institute of Technology have created a disk smaller than the diameter of a human hair that can store light energy at extremely high efficiency. The disk, called a "microtoroid" because of its doughnut shape, can be integrated into microchips for a number of potential applications.

Reporting in the February 27, 2003, issue of the journal Nature, the Caltech team describes the optical resonator, which has a "Q factor," or quality factor, more than 10,000 times better than any previous chip-based device of similar function. Q is a figure of merit used to characterize resonators; it corresponds approximately to the number of oscillations of light within the storage time of the device.
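To put the Q figure in physical terms: Q is roughly the optical angular frequency times the storage time, so a Q of 100 million at optical frequencies corresponds to a photon storage time on the order of 100 nanoseconds. A sketch, assuming a 1,550 nm telecom wavelength (the article does not state the operating wavelength):

```python
import math

# Photon storage (ring-down) time implied by Q = omega * tau.
c = 2.998e8                            # speed of light, m/s
wavelength = 1.55e-6                   # assumed telecom wavelength, m
omega = 2 * math.pi * c / wavelength   # optical angular frequency, rad/s

Q = 1e8                                # reported Q factor, in excess of 100 million
tau_s = Q / omega                      # implied storage time, seconds
print(f"Storage time: {tau_s * 1e9:.0f} ns")
```

For a micron-scale resonator, a storage time of tens of nanoseconds means the light circulates an enormous number of times before leaking away, which is what makes such devices useful as filters and sensors.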

The devices store optical energy by resonant recirculation at the exterior boundary of the toroid and achieve Q factors in excess of 100 million. In general, resonators, whether mechanical, electronic, or optical, have many applications. TV tuners and quartz crystals in a wristwatch are examples of resonators at radio frequencies; at optical frequencies, resonators are used in filters, sensors, and quantum optics.

Attaining ultrahigh-Q and fabricating the resonators on a chip have so far been mutually exclusive. Only rather exotic structures, like droplets or microspheres, have exhibited the atomically smooth surfaces needed for ultrahigh-Q. Thanks to a novel fabrication step, it is now possible to achieve such surfaces in a chip-based structure, bringing the two worlds together.

The fabrication procedure uses lithography and etching techniques on a silicon wafer in a manner similar to process steps used for making microprocessors and memories. Thus, the resonators can be integrated with the circuitry of a chip, with lab-on-a-chip functions, or even with other optical components. Wafer-scale processing methods also enable their production in large quantities, an important feature in many applications, like biosensing, where low-cost, field deployable sensors are envisioned.

The microtoroids were fabricated in the lab of Kerry Vahala, who is Jenkins Professor of Information Science and Technology and professor of applied physics at Caltech. Vahala is co-inventor of the device, along with his graduate students Deniz Armani, Tobias Kippenberg, and Sean Spillane.

"This is the first time an optically resonant device with an ultrahigh-Q has been fabricated on a chip," says Vahala.

Vahala says his group is exploring ways to further increase the Q value of these devices as well as to further reduce their size. He believes Q values in excess of 1 billion in even more compact toroids will soon be possible. Last year, in the February 7, 2002, issue of Nature, the Vahala group reported an efficient nonlinear wavelength source using ultrahigh-Q resonators. His group is now investigating microchip-toroid versions of these nonlinear sources that may one day be used in communications systems.

The work was supported by Caltech's Lee Center for Advanced Networking and DARPA.



The Martian polar caps are almost entirely water ice, Caltech research shows

For future Martian astronauts, finding a plentiful water supply may be as simple as grabbing an ice pick and getting to work. California Institute of Technology planetary scientists studying new satellite imagery think that the Martian polar ice caps are made almost entirely of water ice—with just a smattering of frozen carbon dioxide, or "dry ice," at the surface.

Reporting in the February 14 issue of the journal Science, Caltech planetary science professor Andy Ingersoll and his graduate student, Shane Byrne, present evidence that the decades-old model of the polar caps being made of dry ice is in error. The model dates back to 1966, when the first Mars spacecraft determined that the Martian atmosphere was largely carbon dioxide.

Scientists at the time argued that the ice caps themselves were solid dry ice and that the caps regulate the atmospheric pressure by evaporation and condensation. Later observations by the Viking spacecraft showed that the north polar cap contained water ice underneath its dry ice covering, but experts continued to believe that the south polar cap was made of dry ice.

However, recent high-resolution and thermal images from the Mars Global Surveyor and Mars Odyssey, respectively, show that the old model could not be accurate. The high-resolution images show flat-floored, circular pits eight meters deep and 200 to 1,000 meters in diameter at the south polar cap, and an outward growth rate of about one to three meters per year. Further, new infrared measurements from the newly arrived Mars Odyssey show that the lower material heats up, as water ice is expected to do in the Martian summer, and that the polar cap is too warm to be dry ice.

Based on this evidence, Byrne (the lead author) and Ingersoll conclude that the pitted layer is dry ice, but the material below, which makes up the floors of the pits and the bulk of the polar cap, is water ice.

This shows that the south polar cap is actually similar to the north polar cap, which was determined, on the basis of Viking data, to lose its one-meter covering of dry ice each summer, exposing the water ice underneath. The new results show that the difference between the two poles is that the south pole's dry-ice cover is slightly thicker—about eight meters—and does not disappear entirely during the summertime.

Although the results show that future astronauts may not be obliged to haul their own water to the Red Planet, the news is paradoxically negative for the visionary plans often voiced for "terraforming" Mars in the distant future, Ingersoll says.

"Mars has all these flood and river channels, so one theory is that the planet was once warm and wet," Ingersoll says, explaining that a large amount of carbon dioxide in the atmosphere is thought to be the logical way to have a "greenhouse effect" that captures enough solar energy for liquid water to exist.

"If you wanted to make Mars warm and wet again, you'd need carbon dioxide, but there isn't nearly enough if the polar caps are made of water," Ingersoll adds. "Of course, terraforming Mars is wild stuff and is way in the future; but even then, there's the question of whether you'd have more than a tiny fraction of the carbon dioxide you'd need."

This is because the total mass of dry ice is only a few percent of the atmosphere's mass and thus is a poor regulator of atmospheric pressure, since it gets "used up" during warmer climates. For example, when Mars's spin axis is tipped closer to its orbit plane, which is analogous to a warm interglacial period on Earth, the dry ice evaporates entirely, but the atmospheric pressure remains almost unchanged.
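The claim that the cap's dry ice is only a few percent of the atmosphere's mass can be checked with a rough back-of-envelope estimate. The sketch below uses assumed round numbers (surface pressure, cap radius, and dry-ice density are illustrative values, not figures from the study):

```python
import math

# All parameter values are rough, assumed figures for illustration.
P_surface = 600.0          # mean Martian surface pressure, Pa (assumed)
g = 3.71                   # Martian surface gravity, m/s^2
R_mars = 3.39e6            # Mars radius, m

# Atmospheric column mass is P/g per unit area, times the planet's surface area.
atmosphere_mass = (P_surface / g) * 4 * math.pi * R_mars**2   # kg

cap_radius = 2.0e5         # ~200 km radius for the residual south cap (assumed)
layer_thickness = 8.0      # m, the observed dry-ice layer thickness
rho_dry_ice = 1600.0       # kg/m^3, solid CO2 (assumed)

cap_co2_mass = math.pi * cap_radius**2 * layer_thickness * rho_dry_ice  # kg

ratio = cap_co2_mass / atmosphere_mass
print(f"atmosphere: {atmosphere_mass:.1e} kg, cap CO2: {cap_co2_mass:.1e} kg")
print(f"cap CO2 is ~{100*ratio:.0f}% of the atmospheric mass")
```

With these assumed inputs, the cap's frozen CO2 comes out to a few percent of the atmosphere's mass, consistent with the conclusion that the dry-ice layer is too small a reservoir to buffer the atmospheric pressure.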

The findings present a new scientific mystery to those who thought they had a good idea of how the atmospheres of the inner planets compared to each other. Planetary scientists have assumed that Earth, Venus, and Mars are similar in their total carbon dioxide content, with Earth having most of its carbon dioxide locked up in marine carbonates and Venus's carbon dioxide residing in the atmosphere, where it causes the runaway greenhouse effect. By contrast, the eight-meter layer on the south polar ice cap means that Mars has only a small fraction of the carbon dioxide found on Earth and Venus.

The new findings further pose the question of how Mars could have been warm and wet to begin with. Working backward, one would assume that there was once a sufficient amount of carbon dioxide in the atmosphere to trap enough solar energy to warm the planet, but there's simply not enough carbon dioxide for this to clearly have been the case.

"There could be other explanations," Byrne says. "It could be that Mars was a cold, wet planet; or it could be that the subterranean plumbing would allow for liquid water to be sealed off underneath the surface."

In one such scenario, perhaps the water flowed underneath a layer of ice and formed the channels and other erosion features. Then, perhaps, the ice sublimated away, to be eventually redeposited at the poles.

At any rate, Ingersoll and Byrne say that finding the missing carbon dioxide, or accounting for its absence, is now a major goal of Mars research.

Contact: Robert Tindol (626) 395-3631



Caltech, Italian Scientists Find Human Longevity Marker

"A very short one." Oldest known living person in 1995, Jeanne Calment, of France, then 120, when asked what sort of future she anticipated having. Quoted in Newsweek magazine, March 6, 1995.

PASADENA, Calif. – Even though Jeanne Louise Calment died in 1997 at the age of 122, we envy her longevity. Better, perhaps, to envy her mother's lineage, suggest scientists at the California Institute of Technology.

In a study of nonrelated people who have lived for a century or more, the researchers found that the centenarians had something in common: each was five times more likely than the general population to have the same mutation in their mitochondrial DNA (mtDNA).

That mutation, the researchers suggest, may provide a survival advantage by speeding mtDNA replication, thereby increasing its abundance or replacing the portion of mtDNA that has been battered by the ravages of aging.

The study was conducted by Jin Zhang, Jordi Asin Cayuela, and Yuichi Michikawa, postdoctoral scholars; Jennifer Fish, a research scientist; and Giuseppe Attardi, the Grace C. Steele Professor of Molecular Biology, all at Caltech, along with colleagues from the Universities of Bologna and Calabria in Italy, and the Italian National Research Center on Aging. It appears in the February 4 issue of the Proceedings of the National Academy of Sciences, and online at the PNAS website.

Mitochondrial DNA is the portion of the cell DNA that is located in mitochondria, the organelles which are the "powerhouses" of the cell. These organelles capture the energy released from the oxidation of metabolites and convert it into ATP, the energy currency of the cell. Mitochondrial DNA passes only from mother to offspring. Every human cell contains hundreds, or, more often, thousands of mtDNA molecules.

It's known that mtDNA has a high mutation rate. Such mutations can be harmful, beneficial, or neutral. In 1999, Attardi and other colleagues found what Attardi described as a "clear trend" in mtDNA mutations in individuals over the age of 65. In fact, in the skin cells the researchers examined, they found that up to 50 percent of the mtDNA molecules had been mutated.

Then, in another study two years ago, Attardi and colleagues found four centenarians who shared a genetic change in the so-called main control region of mtDNA. Because this region controls DNA replication, that observation raised the possibility that some mutations may extend life.

Now, by analyzing mtDNA isolated from a group of Italian centenarians, the researchers have found a common mutation in the same main control region. Looking at mtDNA in white blood cells of a group of 52 Italians between the ages of 99 and 106, they found that 17 percent had a specific mutation called the C150T transition. That frequency compares to only 3.4 percent of 117 people under the age of 99 who shared the same C150T mutation.
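The "five times more likely" figure follows directly from the two reported frequencies. A minimal sketch, using approximate mutation-carrier counts reconstructed from the stated percentages and group sizes (the exact counts are an assumption):

```python
# Approximate counts reconstructed from the reported percentages (assumed).
centenarians_with_mutation, centenarians_total = 9, 52    # ~17%
controls_with_mutation, controls_total = 4, 117           # ~3.4%

freq_cent = centenarians_with_mutation / centenarians_total
freq_ctrl = controls_with_mutation / controls_total

relative_frequency = freq_cent / freq_ctrl
print(f"centenarians: {freq_cent:.1%}, controls: {freq_ctrl:.1%}")
print(f"C150T is ~{relative_frequency:.1f}x more frequent in centenarians")
```

Dividing the two frequencies gives a ratio of about five, matching the comparison quoted earlier in the article.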

To probe whether the mutation is inherited, the team studied skin cells collected from the same individuals between 9 and 19 years apart. In some, both samples showed that the mutation already existed, while in others, it either appeared or became more abundant during the intervening years. These results suggest that some people inherit the mutation from their mother, while others acquire it during their lifetime.

Confirmation that the C150T mutation can be inherited was obtained by looking at mtDNA samples from 20 monozygotic (that is, derived from a single egg) twins and 18 dizygotic (from separate eggs) twins between 60 and 75 years of age. To their surprise, the investigators found that 30 percent of the monozygotic twins and 22 percent of the dizygotic twins shared the C150T mutation.

"The selection of the C150T mutation in centenarians suggests that it may promote survival," says Attardi. "Similarly, it may protect twins early in life from the effects of fetal growth restriction and the increased mortality associated with twin births.

"We found the mutation shifts the site at which mtDNA starts to replicate, and perhaps that may accelerate its replication, possibly, allowing the lucky individual to replace damaged molecules faster." Attardi says the study is the first to show a robust difference in an identified genetic marker between centenarians and younger folks. Their next goal, he says, is to find the exact physiological effect of this particular mutation.

The researchers who contributed to the paper in Italy were Massimiliano Bonafe, Fabiola Olivieri, Giuseppe Passarino, Giovanna De Benedictis, and Claudio Franceschi.

Contact: Mark Wheeler (626) 395-8733




Nanodevice breaks 1-GHz barrier

Nanoscientists have achieved a milestone in their burgeoning field by creating a device that vibrates a billion times per second, or at one gigahertz (1 GHz). The accomplishment further increases the likelihood that tiny mechanical devices working at the quantum level can someday supplement electronic devices for new products.

Reporting in the January 30 issue of the journal Nature, California Institute of Technology professor of physics, applied physics, and bioengineering Michael Roukes and his colleagues from Caltech and Case Western Reserve University demonstrate that the tiny mechanism operates at microwave frequencies. The device is a prototype and not yet developed to the point that it is ready to be integrated into a commercial application; nevertheless, it demonstrates the progress being made in the quest to turn nanotechnology into a reality—that is, to make useful devices whose dimensions are less than a millionth of a meter.

This latest effort in the field of NEMS, which is an acronym for "nanoelectromechanical systems," is part of a larger, emerging effort to produce mechanical devices for sensitive force detection and high-frequency signal processing. According to Roukes, the technology could also have implications for new and improved biological imaging and, ultimately, for observing individual molecules through an improved approach to magnetic resonance spectroscopy, as well as for a new form of mass spectrometry that may permit single molecules to be "fingerprinted" by their mass.

"When we think of microelectronics today, we think about moving charges around on chips," says Roukes. "We can do this at high rates of speed, but in this electronic age our mind-set has been somewhat tyrannized in that we typically think of electronic devices as involving only the movement of charge.

"But since 1992, we've been trying to push mechanical devices to ever-smaller dimensions, because as you make things smaller, there's less inertia in getting them to move. So the time scales for inducing mechanical response go way down."

Though a good home computer these days can have a speed of one gigahertz or more, the quest to construct a mechanical device that can operate at such speeds has required multiple breakthroughs in manufacturing technology. In the case of the Roukes group's new demonstration, the use of silicon carbide epilayers to control layer thickness to atomic dimensions and a balanced high-frequency technique for sensing motion that effectively transfers signals to macroscale circuitry have been crucial to success. Both advances were pioneered in the Roukes lab.

Grown on silicon wafers, the films used in the work are prepared in such a way that the end products are two nearly identical beams 1.1 microns long, 120 nanometers wide, and 75 nanometers thick. When driven by a microwave-frequency electric current while exposed to a strong magnetic field, the beams mechanically vibrate at slightly more than one gigahertz.
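The quoted beam dimensions are consistent with a gigahertz-scale resonance. A rough estimate, using the standard Euler-Bernoulli expression for the fundamental flexural mode of a doubly clamped beam, f ≈ 1.03 (t/L²)√(E/ρ), with typical literature values for silicon carbide (the material constants are assumptions, not figures from the paper):

```python
import math

# Fundamental flexural frequency of a doubly clamped beam:
#   f ~ 1.03 * (t / L^2) * sqrt(E / rho)
E = 430e9        # Young's modulus of SiC, Pa (assumed, typical value)
rho = 3210.0     # density of SiC, kg/m^3 (assumed, typical value)

L = 1.1e-6       # beam length, m (from the article)
t = 75e-9        # beam thickness, m (from the article)

f = 1.03 * (t / L**2) * math.sqrt(E / rho)
print(f"estimated fundamental frequency: {f / 1e9:.2f} GHz")
```

The estimate lands within a factor of order unity of the measured frequency, which is as close as such a sketch can be expected to get, since the real device's boundary conditions and layer structure differ from the idealized beam.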

Future work will include improving the nanodevices to better link their mechanical function to real-world applications, Roukes says. The issue of communicating information, or measurements, from the nanoworld to the everyday world we live in is by no means a trivial matter. As devices become smaller, it becomes increasingly difficult to detect the very small displacements that occur on ever-shorter timescales.

Progress with nanoelectromechanical systems working at microwave frequencies offers the potential for improving magnetic resonance imaging to the extent that individual macromolecules could be imaged. This would be especially important in furthering the understanding of the relationship between, for example, the structure and function of proteins. Also, the devices could be used in a novel form of mass spectrometry, for sensing individual biomolecules in fluids, and perhaps for realizing solid-state manifestations of the quantum bit that could be exploited in future devices such as quantum computers.

The coauthors of the paper are Xue-Ming (Henry) Huang, a graduate student in physics at Caltech; and Chris Zorman and Mehran Mehrengany, both engineering professors at Case Western Reserve University.

Contact: Robert Tindol (626) 395-3631


Research shows that shear force of blood flow is crucial to embryonic heart development

In a triumph of bioengineering, an interdisciplinary team of California Institute of Technology researchers has imaged the blood flow inside the heart of a growing embryonic zebrafish. The results demonstrate for the first time that the very action of high-velocity blood flowing over cardiac tissue is an important factor in the proper development of the heart—a result that could have profound implications for future surgical techniques and even for genetic engineering.

In the January 9, 2003, issue of the journal Nature, the investigators report on two interrelated advances in their work on Danio rerio, an animal reaching only two inches in length as an adult but a model of choice for research in genetic and developmental biology. First, the team was able to capture very-high-resolution motion video, using confocal microscopy, of the tiny beating hearts, which are smaller in diameter than a human hair. Second, by surgically blocking the flow of blood through the hearts, the researchers were able to demonstrate that a reduction in "shear stress," or the friction imposed by a flowing fluid on adjacent cells, will cause the growing heart to develop abnormally.

The result is especially important, says co-lead author Jay Hove, because it shows that more detailed studies of the effect of shear force might be exploited in the treatment of human heart disease. Because diseases such as congestive heart failure are known to cause the heart to enlarge due to constricted blood flow, a better understanding of the precise mechanisms of the blood flow could perhaps lead to advanced treatments to counteract the enlargement.

Also, Hove says, a better understanding of genetic factors involving blood flow in the heart—a future goal of the team's research—could eventually be exploited in the diagnosis of prenatal heart disease for early surgical correction, or even genetic intervention.

Hove, a bioengineer, along with Liepmann Professor of Aeronautics and Bioengineering Morteza Gharib, teamed with Scott Fraser, who is Rosen Professor of Biology, and Reinhardt Köster, a postdoctoral scholar in Fraser's lab, to study the heart development of zebrafish. Gharib, a specialist on fluid flow, has worked on heart circulation in the past, and Fraser is a leading authority on the imaging of cellular development in embryos. The new results are thus an interdisciplinary marriage of the fields of engineering, biology, and optics.

"Our research shows that the shape of the heart can be changed during the embryonic stage," says Hove. "The results invite us to consider whether this can be related to the roots of heart failure and heart disease."

The researchers focused their efforts on the zebrafish because the one-millimeter eggs and the embryos inside them are nearly transparent. With the addition of a special chemical to further block the formation of pigment, the team was able to perform a noninvasive, in vivo "optical dissection." To do this, they used a technique known as confocal microscopy, which allows imaging of a single layer of tissue. The images are two-dimensional, but they can be "stacked" for a three-dimensional reconstruction.

Concentrating on two groups of embryos—one group 36 hours after fertilization and the other at about four days—the researchers discovered that their deliberate interference with the blood flow through the use of carefully placed beads had a profound effect on heart development. When the shear force was reduced by 90 percent, the tiny hearts did not form valves properly, nor did they "loop," or form an outflow track properly.

Because the early development of an embryonic heart is thought to proceed through several nearly identical stages for all vertebrates, the researchers say the effect should also hold true for human embryos. In effect, the research demonstrates that the shear force should also be a fundamental influence on the formation of the various structures of the human heart.

The next step for the researchers is to attempt to regulate the restriction of shear force through new techniques to see how slight variations affect structural development, and to look at how gene expression is involved in embryonic heart development. "What we learn will give us directions to go and questions to ask about other vertebrates, particularly human beings," Hove says.

In addition to the lead authors Hove and Köster and professors Gharib and Fraser, the team also included Caltech students Arian S. Forouhar and Gabriel Acevedo-Bolton.

The paper is available on the Nature Web site.

Contact: Robert Tindol (626) 395-3631