Caltech computer scientists develop FAST protocol to speed up Internet

Caltech computer scientists have developed a new data transfer protocol for the Internet fast enough to download a full-length DVD movie in less than five seconds.

The protocol is called FAST, standing for Fast Active queue management Scalable Transmission Control Protocol (TCP). The researchers have achieved a speed of 8,609 megabits per second (Mbps) by using 10 simultaneous flows of data over routed paths, the largest aggregate throughput ever accomplished in such a configuration. More importantly, the FAST protocol sustained this speed using standard packet size, stably over an extended period on shared networks in the presence of background traffic, making it adaptable for deployment on the world's high-speed production networks.

The experiment was performed last November during the Supercomputing Conference in Baltimore, by a team from Caltech and the Stanford Linear Accelerator Center (SLAC), working in partnership with the European Organization for Nuclear Research (CERN), and the organizations DataTAG, StarLight, TeraGrid, Cisco, and Level(3).

The FAST protocol was developed in Caltech's Networking Lab, led by Steven Low, associate professor of computer science and electrical engineering. It is based on theoretical work done in collaboration with John Doyle, a professor of control and dynamical systems, electrical engineering, and bioengineering at Caltech, and Fernando Paganini, associate professor of electrical engineering at UCLA. It builds on work from a growing community of theoreticians interested in building a theoretical foundation of the Internet, an effort in which Caltech has been playing a leading role.

Harvey Newman, a professor of physics at Caltech, said the FAST protocol "represents a milestone for science, for grid systems, and for the Internet."

"Rapid and reliable data transport, at speeds of one to 10 Gbps and 100 Gbps in the future, is a key enabler of the global collaborations in physics and other fields," Newman said. "The ability to extract, transport, analyze and share many Terabyte-scale data collections is at the heart of the process of search and discovery for new scientific knowledge. The FAST results show that the high degree of transparency and performance of networks, assumed implicitly by Grid systems, can be achieved in practice. In a broader context, the fact that 10 Gbps wavelengths can be used efficiently to transport data at maximum speed end to end will transform the future concepts of the Internet."

Les Cottrell of SLAC added that progress in speeding up data transfers over long distances is critical to progress in various scientific endeavors. "These include sciences such as high-energy physics and nuclear physics, astronomy, global weather predictions, biology, seismology, and fusion; and industries such as aerospace, medicine, and media distribution.

"Today, these activities often are forced to share their data using literally truck or plane loads of data," Cottrell said. "Utilizing the network can dramatically reduce the delays and automate today's labor intensive procedures."

The ability to demonstrate efficient high-performance throughput using commercial off-the-shelf hardware and applications, standard Internet packet sizes supported throughout today's networks, and modifications to the ubiquitous TCP protocol only at the data sender, is an important achievement.

With Internet speeds doubling roughly annually, the performance demonstrated by this collaboration can be expected to become commonly available in the next few years, so the demonstration is important for setting expectations, for planning, and for indicating how to utilize such speeds.

The testbed used in the Caltech/SLAC experiment was the culmination of a multi-year effort, led by Caltech physicist Harvey Newman's group on behalf of the international high energy and nuclear physics (HENP) community, together with CERN, SLAC, Caltech Center for Advanced Computing Research (CACR), and other organizations. It illustrates the difficulty, ingenuity and importance of organizing and implementing leading edge global experiments. HENP is one of the principal drivers and co-developers of global research networks. One unique aspect of the HENP testbed is the close coupling between R&D and production, where the protocols and methods implemented in each R&D cycle are targeted, after a relatively short time delay, for widespread deployment across production networks to meet the demanding needs of data intensive science.

The congestion control algorithm of the current Internet was designed in 1988 when the Internet could barely carry a single uncompressed voice call. The problem today is that this algorithm cannot scale to anticipated future needs, when the networks will be compelled to carry millions of uncompressed voice calls on a single path or support major science experiments that require the on-demand rapid transport of gigabyte to terabyte data sets drawn from multi-petabyte data stores. This protocol problem has prompted several interim remedies, such as using nonstandard packet sizes or aggressive algorithms that can monopolize network resources to the detriment of other users. Despite years of effort, these measures have proved to be ineffective or difficult to deploy.

They are, however, critical steps in our evolution toward ultrascale networks. Sustaining high performance on a global network is extremely challenging and requires concerted advances in both hardware and protocols. Experiments that achieve high throughput either in isolated environments or using interim remedies that bypass protocol instability, idealized or fragile as they may be, push the state of the art in hardware and demonstrate its performance limit. Development of robust and practical protocols will then allow us to make effective use of the most advanced hardware to achieve ideal performance in realistic environments.

The FAST team addresses the protocol issues head-on to develop a variant of TCP that can scale to a multi-gigabit-per-second regime in practical network conditions. The integrated approach that combines theory, implementation, and experiment is what makes their research unique and fundamental progress possible.
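For readers curious how a delay-based protocol such as FAST differs from today's loss-based TCP: one published form of the FAST window update (from the team's later technical papers) steers the congestion window toward equilibrium using measured round-trip times, w ← min(2w, (1−γ)w + γ((baseRTT/RTT)·w + α)). The single-bottleneck fluid model below is a simplified illustration, not the production implementation; the capacity, γ, and α values are assumptions chosen for the sketch.

```python
def fast_window_update(w, base_rtt, rtt, alpha=50.0, gamma=0.5):
    """One published form of the FAST TCP window update (packets)."""
    return min(2 * w, (1 - gamma) * w + gamma * (base_rtt / rtt * w + alpha))

# Single-bottleneck fluid model: an illustrative simplification.
C = 10_000.0        # link capacity, packets/s (assumed)
base_rtt = 0.1      # propagation delay, s (assumed)
bdp = C * base_rtt  # bandwidth-delay product, packets

w = 100.0
for _ in range(100):
    queue = max(0.0, w - bdp)    # packets buffered at the bottleneck
    rtt = base_rtt + queue / C   # queueing delay adds to the base RTT
    w = fast_window_update(w, base_rtt, rtt)

# Equilibrium keeps alpha packets buffered, so w settles at bdp + alpha.
print(f"window: {w:.0f} packets")  # -> 1050
```

Unlike loss-based TCP, which must overflow buffers to sense congestion, this update converges smoothly to a window that keeps the link full with a small, fixed backlog, which is why FAST can stay stable at multi-gigabit speeds.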

Using a standard packet size that is supported throughout today's networks, the current TCP typically achieves an average throughput of 266 Mbps, averaged over an hour, with a single TCP/IP flow between Sunnyvale near SLAC and CERN in Geneva, over a distance of 10,037 kilometers. This represents an efficiency of just 27 percent. FAST TCP sustained an average throughput of 925 Mbps and an efficiency of 95 percent, a 3.5-times improvement, under the same experimental conditions. With 10 concurrent TCP/IP flows, FAST achieved an unprecedented speed of 8,609 Mbps, at 88 percent efficiency, that is 153,000 times that of today's modem and close to 6,000 times that of the common standard for ADSL (Asymmetric Digital Subscriber Line) connections.
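These figures can be sanity-checked with back-of-the-envelope arithmetic. The ~1 Gbps per-flow and 10 Gbps aggregate bottleneck capacities below are assumptions inferred from the quoted efficiencies; the article's exact percentages presumably also account for protocol header overhead.

```python
# Reproducing the comparison quoted above. Capacities are assumptions
# inferred from the quoted percentages, not stated in the article.
STANDARD_TCP_MBPS = 266   # single flow, averaged over an hour
FAST_TCP_MBPS = 925       # single FAST flow, same path

improvement = FAST_TCP_MBPS / STANDARD_TCP_MBPS
print(f"FAST vs. standard TCP: {improvement:.1f}x")   # -> 3.5x

# 10 concurrent FAST flows on an assumed 10 Gbps path; the article's
# 88 percent figure likely credits link-layer and header overhead.
AGGREGATE_MBPS = 8609
print(f"aggregate efficiency: {AGGREGATE_MBPS / 10_000:.0%}")  # -> 86%
```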

The 10-flow experiment sets another first in addition to the highest aggregate speed over routed paths. It is the combination of high capacity and large distance that causes performance problems. Different TCP algorithms can be compared using the product of achieved throughput and the distance of transfer, measured in bit-meter-per-second, or bmps. The world record for the current TCP is 10 peta (1 followed by 16 zeros) bmps, using a nonstandard packet size. The Caltech/SLAC experiment transferred 21 terabytes over six hours between Baltimore and Sunnyvale using standard packet size, achieving 34 peta bmps. Moreover, data was transferred over shared research networks in the presence of background traffic, suggesting that FAST can be backward compatible with the current protocol. The FAST team has started to work with various groups around the world to explore testing and deploying FAST TCP in communities that need multi-Gbps networking urgently.
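The bit-meter-per-second figure of merit is just throughput multiplied by path length, so the record can be checked directly. The ~4,000 km Baltimore-to-Sunnyvale network path length used below is an assumption; the article does not state it.

```python
# Sanity-checking the throughput and bmps figures quoted above.
TERABYTE = 1e12  # bytes, decimal convention

# Average throughput implied by 21 terabytes moved in six hours
avg_bps = 21 * TERABYTE * 8 / (6 * 3600)
print(f"average throughput: {avg_bps / 1e9:.1f} Gbps")  # -> 7.8 Gbps

# Throughput x distance figure of merit at the 8,609 Mbps peak
distance_m = 4.0e6   # ~4,000 km Baltimore-Sunnyvale path, assumed
peak_bps = 8.609e9
print(f"{peak_bps * distance_m / 1e15:.0f} peta bmps")  # -> 34 peta bmps
```

The average (7.8 Gbps over six hours) sits close to the 8.6 Gbps peak, which is the sustained-stability claim the article emphasizes.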

The demonstrations used a 10 Gbps link donated by Level(3) between StarLight (Chicago) and Sunnyvale, as well as the DataTAG 2.5 Gbps link between StarLight and CERN, the Abilene backbone of Internet2, and the TeraGrid facility. The network routers and switches at StarLight and CERN were used together with a GSR 12406 router loaned by Cisco at Sunnyvale, additional Cisco modules loaned at StarLight, and sets of dual Pentium 4 servers each with dual Gigabit Ethernet connections at StarLight, Sunnyvale, CERN, and the SC2002 show floor provided by Caltech, SLAC, and CERN. The project is funded by the National Science Foundation, the Department of Energy, the European Commission, and the Caltech Lee Center for Advanced Networking.

One of the drivers of these developments has been the HENP community, whose explorations at the high-energy frontier are breaking new ground in our understanding of the fundamental interactions, structures and symmetries that govern the nature of matter and space-time in our universe. The largest HENP projects each encompasses 2,000 physicists from 150 universities and laboratories in more than 30 countries.

Rapid and reliable data transport, at speeds of 1 to 10 Gbps and 100 Gbps in the future, is a key enabler of the global collaborations in physics and other fields. The ability to analyze and share many terabyte-scale data collections, accessed and transported in minutes, on the fly, rather than over hours or days as is the current practice, is at the heart of the process of search and discovery for new scientific knowledge. Caltech's FAST protocol shows that the high degree of transparency and performance of networks, assumed implicitly by Grid systems, can be achieved in practice.

This will drive scientific discovery and utilize the world's growing bandwidth capacity much more efficiently than has been possible until now.


Caltech applied physicists create ultrahigh-Q microcavity on a silicon chip

In an advance that holds promise for integrating previously disparate functions on a chip, applied physicists at the California Institute of Technology have created a disk smaller than the diameter of a human hair that can store light energy at extremely high efficiency. The disk, called a "microtoroid" because of its doughnut shape, can be integrated into microchips for a number of potential applications.

Reporting in the February 27, 2003, issue of the journal Nature, the Caltech team describes the optical resonator, which has a "Q factor," or quality factor, more than 10,000 times better than any previous chip-based device of similar function. Q is a figure-of-merit used to characterize resonators, approximately the number of oscillations of light within the storage time of the device.

The devices store optical energy by resonant recirculation at the exterior boundary of the toroid and achieve Q factors in excess of 100 million. In general, resonators, whether mechanical, electronic, or optical, have many applications. TV tuners and quartz crystals in a wristwatch are examples of resonators at radio frequencies; at optical frequencies, resonators are used in filters, sensors, and quantum optics.
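The quality factor translates directly into photon storage time via Q = ωτ, where ω is the optical angular frequency. A quick estimate, assuming a telecom-band wavelength of 1,550 nm (the article does not state the operating wavelength):

```python
import math

# Photon storage time implied by Q = 100 million, assuming a 1550 nm
# telecom wavelength (an assumption; the article gives no wavelength).
c = 2.998e8           # speed of light, m/s
wavelength = 1.55e-6  # m
Q = 1e8

omega = 2 * math.pi * c / wavelength  # optical angular frequency, rad/s
tau = Q / omega                       # energy storage (decay) time, s
print(f"storage time: {tau * 1e9:.0f} ns")  # -> 82 ns
print(f"optical cycles in that time: {Q / (2 * math.pi):.1e}")
```

Tens of nanoseconds sounds brief, but at optical frequencies it corresponds to roughly sixteen million oscillations of the light field, which is what makes such resonators useful as filters and sensors.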

Attaining ultrahigh Q and fabricating the resonators on a chip have so far been mutually exclusive. Only rather exotic structures, like droplets or microspheres, have exhibited the atomically smooth surfaces needed for ultrahigh Q. Thanks to a novel fabrication step, it is now possible to achieve such surfaces in a chip-based device, bringing the two worlds together.

The fabrication procedure uses lithography and etching techniques on a silicon wafer in a manner similar to process steps used for making microprocessors and memories. Thus, the resonators can be integrated with the circuitry of a chip, with lab-on-a-chip functions, or even with other optical components. Wafer-scale processing methods also enable their production in large quantities, an important feature in many applications, like biosensing, where low-cost, field deployable sensors are envisioned.

The microtoroids were fabricated in the lab of Kerry Vahala, who is Jenkins Professor of Information Science and Technology and professor of applied physics at Caltech. Vahala is co-inventor of the device, along with his graduate students Deniz Armani, Tobias Kippenberg, and Sean Spillane.

"This is the first time an optically resonant device with an ultrahigh-Q has been fabricated on a chip," says Vahala.

Vahala says his group is exploring ways to further increase the Q value of these devices as well as to further reduce their size. He believes Q values in excess of 1 billion in even more compact toroids will soon be possible. Last year, in the February 7, 2002, issue of Nature, the Vahala group reported an efficient nonlinear wavelength source using ultrahigh-Q resonators. His group is now investigating microchip-toroid versions of these nonlinear sources that may one day be used in communications systems.

The work was supported by Caltech's Lee Center for Advanced Networking and DARPA.

Contact: Robert Tindol (626) 395-3631


The Martian polar caps are almost entirely water ice, Caltech research shows

For future Martian astronauts, finding a plentiful water supply may be as simple as grabbing an ice pick and getting to work. California Institute of Technology planetary scientists studying new satellite imagery think that the Martian polar ice caps are made almost entirely of water ice—with just a smattering of frozen carbon dioxide, or "dry ice," at the surface.

Reporting in the February 14 issue of the journal Science, Caltech planetary science professor Andy Ingersoll and his graduate student, Shane Byrne, present evidence that the decades-old model of the polar caps being made of dry ice is in error. The model dates back to 1966, when the first Mars spacecraft determined that the Martian atmosphere was largely carbon dioxide.

Scientists at the time argued that the ice caps themselves were solid dry ice and that the caps regulate the atmospheric pressure by evaporation and condensation. Later observations by the Viking spacecraft showed that the north polar cap contained water ice underneath its dry ice covering, but experts continued to believe that the south polar cap was made of dry ice.

However, recent high-resolution and thermal images from the Mars Global Surveyor and Mars Odyssey, respectively, show that the old model could not be accurate. The high-resolution images show flat-floored, circular pits eight meters deep and 200 to 1,000 meters in diameter at the south polar cap, and an outward growth rate of about one to three meters per year. Further, new infrared measurements from the newly arrived Mars Odyssey show that the lower material heats up, as water ice is expected to do in the Martian summer, and that the polar cap is too warm to be dry ice.

Based on this evidence, Byrne (the lead author) and Ingersoll conclude that the pitted layer is dry ice, but the material below, which makes up the floors of the pits and the bulk of the polar cap, is water ice.

This shows that the south polar cap is actually similar to the north polar cap, which was determined, on the basis of Viking data, to lose its one-meter covering of dry ice each summer, exposing the water ice underneath. The new results show that the difference between the two poles is that the south pole's dry-ice cover is slightly thicker—about eight meters—and does not disappear entirely during the summertime.

Although the results show that future astronauts may not be obliged to haul their own water to the Red Planet, the news is paradoxically negative for the visionary plans often voiced for "terraforming" Mars in the distant future, Ingersoll says.

"Mars has all these flood and river channels, so one theory is that the planet was once warm and wet," Ingersoll says, explaining that a large amount of carbon dioxide in the atmosphere is thought to be the logical way to have a "greenhouse effect" that captures enough solar energy for liquid water to exist.

"If you wanted to make Mars warm and wet again, you'd need carbon dioxide, but there isn't nearly enough if the polar caps are made of water," Ingersoll adds. "Of course, terraforming Mars is wild stuff and is way in the future; but even then, there's the question of whether you'd have more than a tiny fraction of the carbon dioxide you'd need."

This is because the total mass of dry ice is only a few percent of the atmosphere's mass and thus is a poor regulator of atmospheric pressure, since it gets "used up" during warmer climates. For example, when Mars's spin axis is tipped closer to its orbit plane, which is analogous to a warm interglacial period on Earth, the dry ice evaporates entirely, but the atmospheric pressure remains almost unchanged.

The findings present a new scientific mystery to those who thought they had a good idea of how the atmospheres of the inner planets compared to each other. Planetary scientists have assumed that Earth, Venus, and Mars are similar in the total carbon dioxide content, with Earth having most of its carbon dioxide locked up in marine carbonates and Venus's carbon dioxide being in the atmosphere and causing the runaway greenhouse effect. By contrast, the eight-meter layer on the south polar ice cap on Mars means the planet has only a small fraction of the carbon dioxide found on Earth and Venus.

The new findings further pose the question of how Mars could have been warm and wet to begin with. Working backward, one would assume that there was once a sufficient amount of carbon dioxide in the atmosphere to trap enough solar energy to warm the planet, but there's simply not enough carbon dioxide for this to clearly have been the case.

"There could be other explanations," Byrne says. "It could be that Mars was a cold, wet planet; or it could be that the subterranean plumbing would allow for liquid water to be sealed off underneath the surface."

In one such scenario, perhaps the water flowed underneath a layer of ice and formed the channels and other erosion features. Then, perhaps, the ice sublimated away, to be eventually redeposited at the poles.

At any rate, Ingersoll and Byrne say that finding the missing carbon dioxide, or accounting for its absence, is now a major goal of Mars research.

Contact: Robert Tindol (626) 395-3631



Caltech, Italian Scientists Find Human Longevity Marker

"A very short one." Jeanne Calment of France, then 120 and the oldest known living person, when asked in 1995 what sort of future she anticipated having. Quoted in Newsweek magazine, March 6, 1995.

PASADENA, Calif. – Even though Jeanne Louise Calment died in 1997 at the age of 122, we envy her longevity. Better, perhaps, to envy her mother's lineage, suggest scientists at the California Institute of Technology.

In a study of nonrelated people who have lived for a century or more, the researchers found that the centenarians had something in common: each was five times more likely than the general population to have the same mutation in their mitochondrial DNA (mtDNA).

That mutation, the researchers suggest, may provide a survival advantage by speeding mtDNA replication, thereby increasing its amount or replacing the portion of mtDNA that has been battered by the ravages of aging.

The study was conducted by Jin Zhang, Jordi Asin Cayuela, and Yuichi Michikawa, postdoctoral scholars; Jennifer Fish, a research scientist; and Giuseppe Attardi, the Grace C. Steele Professor of Molecular Biology, all at Caltech, along with colleagues from the Universities of Bologna and Calabria in Italy, and the Italian National Research Center on Aging. It appears in the February 4 issue of the Proceedings of the National Academy of Sciences, and online at the PNAS website.

Mitochondrial DNA is the portion of the cell DNA that is located in mitochondria, the organelles which are the "powerhouses" of the cell. These organelles capture the energy released from the oxidation of metabolites and convert it into ATP, the energy currency of the cell. Mitochondrial DNA passes only from mother to offspring. Every human cell contains hundreds, or, more often, thousands of mtDNA molecules.

It's known that mtDNA has a high mutation rate. Such mutations can be harmful, beneficial, or neutral. In 1999, Attardi and other colleagues found what Attardi described as a "clear trend" in mtDNA mutations in individuals over the age of 65. In fact, in the skin cells the researchers examined, they found that up to 50 percent of the mtDNA molecules had been mutated.

Then, in another study two years ago, Attardi and colleagues found four centenarians who shared a genetic change in the so-called main control region of mtDNA. Because this region controls DNA replication, that observation raised the possibility that some mutations may extend life.

Now, by analyzing mtDNA isolated from a group of Italian centenarians, the researchers have found a common mutation in the same main control region. Looking at mtDNA in white blood cells of a group of 52 Italians between the ages of 99 and 106, they found that 17 percent had a specific mutation called the C150T transition. That frequency compares to only 3.4 percent of 117 people under the age of 99 who shared the same C150T mutation.
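Reconstructing the counts implied by the quoted percentages (9 of 52 centenarians is about 17 percent, and 4 of 117 controls is about 3.4 percent; these counts are an inference, since the article gives only percentages and group sizes), one can verify the roughly fivefold enrichment and check its statistical significance with a one-sided Fisher exact test:

```python
from math import comb

# Counts reconstructed from the quoted percentages (an assumption).
carriers_cent, n_cent = 9, 52     # centenarians with the C150T mutation
carriers_ctrl, n_ctrl = 4, 117    # under-99 controls with the mutation

ratio = (carriers_cent / n_cent) / (carriers_ctrl / n_ctrl)
print(f"frequency ratio: {ratio:.1f}")  # -> 5.1

# One-sided Fisher exact test: probability of 9 or more carriers among
# the 52 centenarians if carrier status were independent of age group.
N = n_cent + n_ctrl                 # 169 subjects total
K = carriers_cent + carriers_ctrl   # 13 carriers total
p = sum(comb(K, k) * comb(N - K, n_cent - k)
        for k in range(carriers_cent, K + 1)) / comb(N, n_cent)
print(f"one-sided p-value: {p:.4f}")
```

Under these reconstructed counts the enrichment is unlikely to be a chance fluctuation, consistent with the authors' interpretation that the mutation is genuinely over-represented in centenarians.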

To probe whether the mutation is inherited, the team studied skin cells collected from the same individuals between 9 and 19 years apart. In some, both samples showed that the mutation already existed, while in others, it either appeared or became more abundant during the intervening years. These results suggest that some people inherit the mutation from their mother, while others acquire it during their lifetime.

Confirmation that the C150T mutation can be inherited was obtained by looking at mtDNA samples from 20 monozygotic (that is, derived from a single egg) twins and 18 dizygotic (from separate eggs) twins between 60 and 75 years of age. To their surprise, the investigators found that 30 percent of the monozygotic twins and 22 percent of the dizygotic twins shared the C150T mutation.

"The selection of the C150T mutation in centenarians suggests that it may promote survival," says Attardi. "Similarly, it may protect twins early in life from the effects of fetal growth restriction and the increased mortality associated with twin births.

"We found the mutation shifts the site at which mtDNA starts to replicate, and perhaps that may accelerate its replication, possibly allowing the lucky individual to replace damaged molecules faster." Attardi says the study is the first to show a robust difference in an identified genetic marker between centenarians and younger folks. Their next goal, he says, is to find the exact physiological effect of this particular mutation.

The researchers who contributed to the paper in Italy were Massimiliano Bonafe, Fabiola Olivieri, Giuseppe Passarino, Giovanna De Benedictis, and Claudio Franceschi.

Contact: Mark Wheeler (626) 395-8733




Nanodevice breaks 1-GHz barrier

Nanoscientists have achieved a milestone in their burgeoning field by creating a device that vibrates a billion times per second, or at one gigahertz (1 GHz). The accomplishment further increases the likelihood that tiny mechanical devices working at the quantum level can someday supplement electronic devices for new products.

Reporting in the January 30 issue of the journal Nature, California Institute of Technology professor of physics, applied physics, and bioengineering Michael Roukes and his colleagues from Caltech and Case Western Reserve University demonstrate that the tiny mechanism operates at microwave frequencies. The device is a prototype and not yet developed to the point that it is ready to be integrated into a commercial application; nevertheless, it demonstrates the progress being made in the quest to turn nanotechnology into a reality—that is, to make useful devices whose dimensions are less than a millionth of a meter.

This latest effort in the field of NEMS, which is an acronym for "nanoelectromechanical systems," is part of a larger, emerging effort to produce mechanical devices for sensitive force detection and high-frequency signal processing. According to Roukes, the technology could also have implications for new and improved biological imaging and, ultimately, for observing individual molecules through an improved approach to magnetic resonance spectroscopy, as well as for a new form of mass spectrometry that may permit single molecules to be "fingerprinted" by their mass.

"When we think of microelectronics today, we think about moving charges around on chips," says Roukes. "We can do this at high rates of speed, but in this electronic age our mind-set has been somewhat tyrannized in that we typically think of electronic devices as involving only the movement of charge.

"But since 1992, we've been trying to push mechanical devices to ever-smaller dimensions, because as you make things smaller, there's less inertia in getting them to move. So the time scales for inducing mechanical response go way down."

Though a good home computer these days can have a speed of one gigahertz or more, the quest to construct a mechanical device that can operate at such speeds has required multiple breakthroughs in manufacturing technology. In the case of the Roukes group's new demonstration, the use of silicon carbide epilayers to control layer thickness to atomic dimensions and a balanced high-frequency technique for sensing motion that effectively transfers signals to macroscale circuitry have been crucial to success. Both advances were pioneered in the Roukes lab.

Grown on silicon wafers, the films used in the work are prepared in such a way that the end-products are two nearly-identical beams 1.1 microns long, 120 nanometers wide and 75 nanometers thick. When driven by a microwave-frequency electric current while exposed to a strong magnetic field, the beams mechanically vibrate at slightly more than one gigahertz.
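The quoted beam dimensions are consistent with gigahertz-range operation. For a doubly clamped beam, the fundamental flexural frequency scales roughly as f ≈ 1.03·(t/L²)·√(E/ρ). The silicon carbide material constants below are assumed textbook values, not figures from the article, and residual tension and the exact clamping conditions shift a real device's frequency:

```python
import math

# Order-of-magnitude check: fundamental flexural mode of a doubly
# clamped beam, f ~ 1.03 * (t / L**2) * sqrt(E / rho).
E = 430e9      # Young's modulus of SiC, Pa (assumed textbook value)
rho = 3210.0   # density of SiC, kg/m^3 (assumed textbook value)
L = 1.1e-6     # beam length, m (from the article)
t = 75e-9      # beam thickness, m (from the article)

f = 1.03 * (t / L**2) * math.sqrt(E / rho)
print(f"estimated fundamental frequency: {f / 1e9:.2f} GHz")  # ~0.7 GHz
```

The simple estimate lands within a factor of two of the measured one gigahertz, illustrating why the stiff, light silicon carbide epilayers and submicron dimensions were essential to crossing the 1-GHz barrier.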

Future work will include improving the nanodevices to better link their mechanical function to real-world applications, Roukes says. The issue of communicating information, or measurements, from the nanoworld to the everyday world we live in is by no means a trivial matter. As devices become smaller, it becomes increasingly difficult to recognize the very small displacements that occur at much shorter time-scales.

Progress with nanoelectromechanical systems working at microwave frequencies offers the potential for improving magnetic resonance imaging to the extent that individual macromolecules could be imaged. This would be especially important in furthering the understanding of the relationship between, for example, the structure and function of proteins. Also, the devices could be used in a novel form of mass spectrometry, for sensing individual biomolecules in fluids, and perhaps for realizing solid-state manifestations of the quantum bit that could be exploited for future devices such as quantum computers.

The coauthors of the paper are Xue-Ming (Henry) Huang, a graduate student in physics at Caltech; and Chris Zorman and Mehran Mehregany, both engineering professors at Case Western Reserve University.

Contact: Robert Tindol (626) 395-3631


Research shows that shear force of blood flow is crucial to embryonic heart development

In a triumph of bioengineering, an interdisciplinary team of California Institute of Technology researchers has imaged the blood flow inside the heart of a growing embryonic zebrafish. The results demonstrate for the first time that the very action of high-velocity blood flowing over cardiac tissue is an important factor in the proper development of the heart—a result that could have profound implications for future surgical techniques and even for genetic engineering.

In the January 9, 2003 issue of the journal Nature, the investigators report on two interrelated advances in their work on Danio rerio, an animal reaching only two inches in length as an adult but a model of choice for research in genetic and developmental biology. First, the team was able to get very-high-resolution motion video, through the use of confocal microscopy, of the tiny beating hearts that are less than the diameter of a human hair. Second, by surgically blocking the flow of blood through the hearts, the researchers were able to demonstrate that a reduction in "shear stress," or the friction imposed by a flowing fluid on adjacent cells, will cause the growing heart to develop abnormally.
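As a rough illustration of the quantity being manipulated: for a Newtonian fluid, wall shear stress is the viscosity times the near-wall velocity gradient, τ = μ·(du/dy) ≈ μ·v/h. Every number in the sketch below is an illustrative assumption, not a measurement from the study:

```python
# Order-of-magnitude estimate of wall shear stress in a small vessel,
# tau = mu * du/dy ~ mu * v / h. All values are illustrative assumptions.
mu = 3.0e-3   # dynamic viscosity of a blood-like fluid, Pa*s (assumed)
v = 5.0e-3    # characteristic flow speed, m/s (assumed)
h = 25e-6     # distance from wall to peak-velocity region, m (assumed)

tau = mu * v / h   # wall shear stress, Pa; 1 Pa = 10 dyn/cm^2
print(f"shear stress: {tau:.1f} Pa ({tau * 10:.0f} dyn/cm^2)")  # -> 0.6 Pa (6 dyn/cm^2)
```

The estimate shows why such tiny hearts can still impose biologically meaningful forces: shear stress grows as the channel shrinks, so even hair-width chambers see stresses of order dynes per square centimeter.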

The result is especially important, says co-lead author Jay Hove, because it shows that more detailed studies of the effect of shear force might be exploited in the treatment of human heart disease. Because diseases such as congestive heart failure are known to cause the heart to enlarge due to constricted blood flow, a better understanding of the precise mechanisms of the blood flow could perhaps lead to advanced treatments to counteract the enlargement.

Also, Hove says, a better understanding of genetic factors involving blood flow in the heart—a future goal of the team's research—could eventually be exploited in the diagnosis of prenatal heart disease for early surgical correction, or even genetic intervention.

Hove, a bioengineer, along with Liepmann Professor of Aeronautics and Bioengineering Morteza Gharib, teamed with Scott Fraser, who is Rosen Professor of Biology, and Reinhardt Köster, a postdoctoral scholar in Fraser's lab, to study the heart development of zebrafish. Gharib, a specialist on fluid flow, has worked on heart circulation in the past, and Fraser is a leading authority on the imaging of cellular development in embryos. The new results are thus an interdisciplinary marriage of the fields of engineering, biology, and optics.

"Our research shows that the shape of the heart can be changed during the embryonic stage," says Hove. "The results invite us to consider whether this can be related to the roots of heart failure and heart disease."

The researchers keyed their efforts on the zebrafish because the one-millimeter eggs and the embryos inside them are nearly transparent. With the addition of a special chemical to further block the formation of pigment, the team was able to perform a noninvasive, in vivo "optical dissection." To do this, they used a technique known as confocal microscopy, which allows imaging of a layer of tissue. The images are two-dimensional, but they can be "stacked" for a three-dimensional reconstruction.

Concentrating on two groups of embryos—one group 36 hours after fertilization and the other at about four days—the researchers discovered that their deliberate interference with the blood flow through the use of carefully placed beads had a profound effect on heart development. When the shear force was reduced by 90 percent, the tiny hearts did not form valves properly, nor did they "loop," or form an outflow track properly.

Because the early development of an embryonic heart is thought to proceed through several nearly identical stages for all vertebrates, the researchers say the effect should also hold true for human embryos. In effect, the research demonstrates that the shear force should also be a fundamental influence on the formation of the various structures of the human heart.

The next step for the researchers is to attempt to regulate the restriction of shear force through new techniques to see how slight variations affect structural development, and to look at how gene expression is involved in embryonic heart development. "What we learn will give us directions to go and questions to ask about other vertebrates, particularly human beings," Hove says.

In addition to the lead authors Hove and Köster and professors Gharib and Fraser, the team also included Caltech students Arian S. Forouhar and Gabriel Acevedo-Bolton.

The paper is available on the Nature Web site.

Contact: Robert Tindol (626) 395-3631


Caltech, UCLA Researchers Create a New Gene Therapy for Treatment of HIV

PASADENA, Calif.— California Institute of Technology and UCLA researchers have developed a new gene therapy that is highly effective in preventing the HIV virus from infecting individual cells in the immune system. The technique, while not curative, could be used as a significant new treatment for people already infected by reducing the HIV-infected cells in their bodies.

Also, the new approach could be used to fight other diseases resulting from gene malfunctions, including cancer.

Reporting in the current issue of the Proceedings of the National Academy of Sciences (PNAS), Caltech biologist David Baltimore and his UCLA collaborators announce that the new technique works by using a disabled version of the AIDS virus as a sort of "Trojan horse" to get a disruptive agent inside the human T-cells, thereby reducing the likelihood that a potent HIV virus will be able to successfully invade the cell. Early laboratory results show that more than 80 percent of the T-cells may be protected.

"To penetrate a cell, HIV needs two receptors that operate like doorknobs and allow the virus inside," says Baltimore, who is president of Caltech. "HIV grabs the receptor and forces itself into the cell. If we can knock out one of these receptors, we hope to prevent HIV from infecting the cell."

The receptors in question are called CCR5 and CD4. The human immune system can't get along without CD4, but about 1 percent of the Caucasian population is born without CCR5. In fact, these people are known to have a natural immunity to AIDS.

Therefore, the researchers' strategy was to disrupt the CCR5 receptor. They did this by introducing a special double-stranded RNA known as "small interfering RNA," or siRNA, into the T-cell. To do so, they engineered a disabled HIV virus to carry the siRNA into the T-cell. Thus, the T-cell was invaded, but the disabled virus had no ability to cause disease. Once inside the T-cell, the siRNA knocks out the CCR5 receptor.

Laboratory results show that human T-cells thus protected are then quite resistant to infection by the HIV virus. When the T-cells were put in a petri dish and exposed to HIV, less than 20 percent of the cells were actually infected.

"Synthetic siRNAs are powerful tools," says Irvin S.Y. Chen, one of the authors of the paper and director of the UCLA AIDS Institute. "But scientists have been baffled at how to insert them into the immune system in stable form. You can't just sprinkle them on the cells."

The other two authors of the paper are Xiao-Feng Qin, a postdoctoral researcher at Caltech; and Dong Sung An, a postdoctoral researcher at UCLA. The two contributed equally to the work.

The technique should become a significant new means of treating people already infected with HIV, Baltimore and Chen say.

"Our findings raise the hope that we can use this approach or combine it with drugs to treat HIV in people—particularly in persons who have not experienced good results with other forms of treatment," says Baltimore.

The technique can also potentially be used for other diseases when a specific gene needs to be knocked out, such as the malfunctioning genes associated with cancer, Chen says. "We can easily make siRNAs and use the carrier to deliver them into different cell types to turn off a gene malfunction," he says.

In addition, the technique could be used to prevent certain microorganisms from invading the body, Baltimore adds.

The research is supported by the National Institute of Allergy and Infectious Diseases and the Damon Runyon-Walter Winchell Fellowship.

[Note to editors: UCLA is also issuing a news release on this research. Contact Elaine Schmidt at (310) 794-2272.]

Robert Tindol

Clouds discovered on Saturn's moon Titan

Teams of astronomers at the California Institute of Technology and at the University of California, Berkeley, have discovered methane clouds near the south pole of Titan, resolving a fierce debate about whether clouds exist amid the haze of the moon's atmosphere.

The new observations were made using the W. M. Keck II 10-meter and the Gemini North 8-meter telescopes atop Hawaii's Mauna Kea volcano in December 2001. Both telescopes are outfitted with adaptive optics that provide unprecedented detail of features not seen even by the Voyager spacecraft during its flyby of Saturn and Titan.

The results are being published by the Caltech team in the December 19 issue of Nature and by the UC Berkeley and NASA Ames team in the December 20 issue of the Astrophysical Journal.

Titan is Saturn's largest moon, larger than the planet Mercury, and is the only moon in our solar system with a thick atmosphere. Like Earth's atmosphere, the atmosphere on Titan is mostly nitrogen. Unlike Earth, Titan is inhospitable to life due to the lack of atmospheric oxygen and its extremely cold surface temperatures (-183 degrees Celsius, or -297 degrees Fahrenheit). Along with nitrogen, Titan's atmosphere contains a significant amount of methane.

Earlier spectroscopic observations hinted at the existence of clouds on Titan, but gave no clue as to their location. These early data were hotly debated, since Voyager spacecraft measurements of Titan appeared to show a calm and cloud-free atmosphere. Furthermore, previous images of Titan had failed to reveal clouds, finding only unchanging surface markings and very gradual seasonal changes in the haziness of the atmosphere.

Improvements in the resolution and sensitivity achievable with ground-based telescopes led to the present discovery. The observations used adaptive optics, in which a flexible mirror rapidly compensates for the distortions caused by turbulence in Earth's atmosphere. These distortions are what cause the well-known twinkling of the stars. Using adaptive optics, details as small as 300 kilometers across can be distinguished at the enormous distance of Titan (1.3 billion kilometers), equivalent to reading an automobile license plate from 100 kilometers away.
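As a rough back-of-envelope check of that analogy, the angular scale works out as follows; all figures are taken from the article, and the computation is purely illustrative:

```python
# Back-of-envelope check of the resolution analogy in the text.
feature_km    = 300        # smallest detail resolvable on Titan
titan_km      = 1.3e9      # distance to Titan
plate_dist_km = 100        # distance in the license-plate analogy

# Angular size resolved on Titan, in radians
theta = feature_km / titan_km
print(f"angular resolution: {theta:.2e} rad")

# The same angle, subtended at 100 km, corresponds to an object of:
size_at_100km_m = theta * plate_dist_km * 1000
print(f"equivalent size at 100 km: {size_at_100km_m:.3f} m")  # ~2.3 cm,
# roughly the height of the characters on a license plate
```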

The images presented by the two teams clearly show bright clouds near Titan's south pole.

"We see the intensity of the clouds varying over as little as a few hours," said post-doctoral fellow Henry Roe, lead author for the UC Berkeley group. "The clouds are constantly changing, although some persist for as long as a few days."

Titan experiences seasons much like Earth's, though its year is 30 times longer because Saturn orbits so far from the sun. Titan is currently in the midst of southern summer, and the south pole has been in continuous sunlight for over six Earth years. The researchers believe that this fact may explain the location of the large clouds.

"These clouds appear to be similar to summer thunderstorms on Earth, but formed of methane rather than water. This is the first time we have found such a close analogy to the Earth's atmospheric water cycle in the solar system," says Antonin Bouchez, one of the Caltech researchers.

In addition to the clouds above Titan's south pole, the Keck images, like previous data, reveal the bright continent-sized feature that may be a large icy highland on Titan's surface, surrounded by linked dark regions that are possibly ethane seas or tar-covered lowlands.

"These are the most spectacular images of Titan's surface which we've seen to date," says Michael Brown, associate professor of planetary astronomy and lead author of the Caltech paper. "They are so detailed that we can almost begin to speculate about Titan's geology, if only we knew for certain what the bright and dark regions represented."

In 2004, Titan will be visited by NASA's Cassini spacecraft, which will look for clouds on Titan during its multiyear mission around Saturn. "Changes in the spatial distribution of these clouds over the next Titan season will help pin down their detailed formation process," says Imke de Pater, professor of astronomy at UC Berkeley. The Cassini mission includes a probe named Huygens that will descend by parachute into Titan's atmosphere and land on the surface near the edge of the bright continent.

The team conducting the Gemini observations consists of Roe and de Pater from UC Berkeley, Bruce A. Macintosh of Lawrence Livermore National Laboratory, and Christopher P. McKay of the NASA Ames Research Center. The team reporting results from the Keck telescope consists of Brown and Bouchez of Caltech and Caitlin A. Griffith of the University of Arizona.

The Gemini observatory is operated by the Association of Universities for Research in Astronomy under a cooperative agreement with the National Science Foundation, involving NOAO/AURA/NSF as the U.S. partner. The W.M. Keck Observatory is operated by the California Association for Research in Astronomy, a scientific partnership between the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. This research has been funded in part by grants from NSF and NASA.

Contact: Robert Tindol (626) 395-3631


New Theory Accounts for Existence of Binaries in Kuiper Belt

PASADENA, Calif.--In the last few years, researchers have discovered more than 500 objects in the Kuiper belt, a gigantic outer ring in the outskirts of the solar system, beyond the orbit of Neptune. Of these, seven so far have turned out to be binaries--two objects that orbit each other. The surprise is that these binaries all seem to be pairs of widely separated objects of similar size. This is surprising because more familiar pairings, such as the Earth/moon system, tend to be unequal in size and/or rather close together.

To account for these oddities, scientists from the California Institute of Technology have devised a theory of Kuiper belt binary formation. Their work is published in the December 12 issue of the journal Nature.

According to Re'em Sari, a senior research fellow at Caltech, the theory will be tested in the near future as additional observations of Kuiper belt objects are obtained and additional binaries are discovered. The other authors of the paper are Peter Goldreich, DuBridge Professor of Astrophysics and Planetary Physics at Caltech; and Yoram Lithwick, now a postdoc at UC Berkeley.

"The binaries we are more familiar with, like the Earth/moon system, resulted from collisions that ejected material," says Sari. "That material coalesced to form the smaller body. Then the interaction between the spin of the larger body and the orbit of the smaller body caused them to move farther and farther apart."

"This doesn't work for the Kuiper belt binaries," Sari says. "They are too far away from each other to have ever had enough spin for this effect to take place." The members of the seven binaries are about 100 kilometers in radius, but 10,000 to 100,000 kilometers from each other. Thus their separations are 100 to 1,000 times their radii. By contrast, Earth is about 400,000 kilometers from the moon, and about 6,000 kilometers in radius. Even at a distance of 60 times the radius of Earth, the tidal mechanism works only because the moon is so much less massive than Earth.
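The separation-to-radius ratios quoted above can be checked directly; the numbers below are the article's own figures:

```python
# Kuiper-belt binary members: ~100 km radius, 10,000-100,000 km apart.
kbo_radius_km = 100
kbo_sep_min_km, kbo_sep_max_km = 10_000, 100_000
print(kbo_sep_min_km / kbo_radius_km, kbo_sep_max_km / kbo_radius_km)  # 100.0 1000.0

# Earth/moon: ~400,000 km separation, Earth radius ~6,000 km.
earth_radius_km = 6_000
earth_moon_km = 400_000
print(earth_moon_km / earth_radius_km)  # ~67, the "60 times" of the text
```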

Sari and his colleagues think the explanation is that the Kuiper belt bodies tend to get closer together as time goes on -- exactly the reverse of the situation with the planets and their satellites, where the separations tend to increase. "The Earth/moon system evolves 'inside-out', but the Kuiper belt binaries evolved 'outside-in,'" explains Sari.

Individual objects in the Kuiper belt are thought to have formed in the early solar system by accretion of smaller objects. The region where the gravitational influence of a body dominates over the tidal forces of the sun is known as its Hill sphere. For a 100-kilometer body located in the Kuiper belt, this extends to about a million kilometers.

Large bodies can accidentally pass through one another's Hill spheres. Such encounters last a couple of centuries and, if no additional process intervenes, the "transient binary" dissolves and the two objects continue on separate orbits around the sun. To become bound, the transient binary must lose energy. The researchers estimate that in about 1 in 300 encounters, a third large body absorbed some of the energy and left a bound binary. An additional mechanism for energy loss is gravitational interaction with the sea of small bodies from which the large bodies were accreting. This interaction slows the large bodies down, and in about 1 in 30 encounters the slowing was sufficient for the pair to become bound.
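The million-kilometer figure can be reproduced from the standard Hill-radius formula, r_H = a (m / 3M)^(1/3). The density and orbital distance assumed below are not stated in the article; they are typical values chosen for illustration:

```python
import math

# Hill radius r_H = a * (m / (3 * M_sun))**(1/3)  (standard formula).
# Assumed values (not from the article): icy-body density ~1 g/cm^3,
# Kuiper-belt orbital distance ~43 AU.
AU_KM    = 1.496e8
M_SUN_KG = 1.989e30
a_km     = 43 * AU_KM                        # orbital radius of the body
body_r_m = 100e3                             # 100-km-radius body
density  = 1000.0                            # kg/m^3, assumed icy composition
m_kg     = (4 / 3) * math.pi * body_r_m**3 * density

r_hill_km = a_km * (m_kg / (3 * M_SUN_KG)) ** (1 / 3)
print(f"Hill radius: {r_hill_km:.2e} km")    # a few hundred thousand km,
# i.e. on the order of a million kilometers, as the article states
```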

Starting with a widely separated binary, about a million kilometers apart, continued interaction with the sea of small objects would have led to additional loss of energy, tightening the binary. The time required for the formation of individual objects is sufficient for a binary orbit to shrink all the way to contact. Indeed, the research predicts that most binaries coalesced in this manner or at least became very tight. But if the binary system formed relatively late, close to the time that accretion in the Kuiper belt ceased, a widely separated binary would survive. These are the objects we observe today. The mechanism predicts that about 5 percent of objects remain widely enough separated to be observed as binaries. The prediction is in agreement with recent surveys conducted by Caltech associate professor of planetary astronomy Mike Brown. The majority of objects ended up as tighter binaries. Their images cannot be distinguished from those of isolated objects when observed from Earth using existing instruments.

These ideas will be more thoroughly tested as additional objects are discovered and further data is collected. Further theoretical work could predict how the inclination of a binary orbit, relative to the plane of the solar system, evolves as the orbit shrinks. If it increases, this would suggest that the Pluto/Charon system, although tight, was also formed by the 'outside-in' mechanism, since it is known to have large inclination.

Robert Tindol

Earthbound experiment confirms theory accounting for sun's scarcity of neutrinos

PASADENA, Calif.--In the subatomic particle family, the neutrino is a bit like a wayward red-haired stepson. Neutrinos were long ago detected--and even longer ago predicted to exist--but everything physicists know about nuclear processes says there should be a certain number of neutrinos streaming from the sun, yet there are nowhere near enough.

This week, an international team has revealed that the sun's lack of neutrinos is a real phenomenon, probably explainable by conventional theories of quantum mechanics, and not merely an observational quirk or something unknown about the sun's interior. The team, which includes experimental particle physicist Robert McKeown of the California Institute of Technology, bases its observations on experiments involving nuclear power plants in Japan.

The project is referred to as KamLAND because the neutrino detector is located at the Kamioka mine in Japan. Properly shielded from background and cosmic radiation, the detector is optimized for measuring the neutrinos from all 17 nuclear power plants in the country.

Neutrinos are produced in the nuclear fusion process, when two protons fuse together to form deuterium, a positron (the positively charged antimatter equivalent of an electron), and a neutrino. The deuterium nucleus remains behind, while the positron soon annihilates with an electron, destroying both particles. The neutrino, being very unlikely to interact with matter, streams away into space.

Therefore, physicists would normally expect neutrinos to flow from the sun in much the same way that photons flow from a light bulb. In the case of the light bulb, the photons (or bundles of light energy) are thrown out radially and evenly, as if illuminating the surface of a surrounding sphere. And because the surface area of a sphere increases as the square of the distance, an observer standing 20 feet away sees only one-fourth as many photons as an observer standing at 10 feet.
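That inverse-square relationship is simple enough to state as a one-line function (the 10-foot and 20-foot distances are the article's example):

```python
# Inverse-square law: flux falls off as 1/r^2, so doubling the
# distance (10 ft -> 20 ft) cuts the photon count to one-fourth.
def relative_flux(r_near, r_far):
    """Flux at r_far as a fraction of the flux at r_near."""
    return (r_near / r_far) ** 2

print(relative_flux(10, 20))  # 0.25
```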

Thus, observers on Earth expect to see a given number of neutrinos coming from the sun--assuming they know how many nuclear reactions are going on in the sun--just as they expect to know the luminosity of a light bulb at a given distance if they know the bulb's wattage. But such has not been the case. Carefully constructed experiments for detecting the elusive neutrinos have shown that there are far fewer neutrinos than there should be.

A theoretical explanation for this neutrino deficit is that the neutrino "flavor" oscillates between the detectable "electron" neutrino type and the much heavier "muon" neutrino, and perhaps even the "tau" neutrino, neither of which can be detected. Using quantum mechanics, physicists calculate that the fraction of detectable electron neutrinos oscillates in a steady rhythm from 100 percent down to a small percentage and back again.

Therefore, the theory says that we see only about half as many neutrinos from the sun as expected because, by the time they reach Earth, about half of the electron neutrinos have oscillated into one of the undetectable flavors.
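For readers who want the formula behind this picture, the standard two-flavor survival probability can be sketched in Python. The mixing parameters, baseline, and energy used below are illustrative assumptions, not values reported by the experiment:

```python
import math

def survival_probability(sin2_2theta, dm2_ev2, L_km, E_MeV):
    """Standard two-flavor (anti)neutrino survival probability:
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 [eV^2] * L [km] / E [GeV]).
    """
    E_GeV = E_MeV / 1000.0
    phase = 1.27 * dm2_ev2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Illustrative parameters (assumed for this sketch, not from the article):
p = survival_probability(sin2_2theta=0.85, dm2_ev2=7.6e-5, L_km=175, E_MeV=4)
print(f"survival probability: {p:.2f}")
```

Varying L or E sweeps the probability between 1 and 1 - sin^2(2*theta), which is the "steady rhythm" described in the text.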

The triumph of the KamLAND experiment is that physicists for the first time can observe neutrino oscillations without making assumptions about the properties of the source of neutrinos. Because the nuclear power plants have a very precisely known amount of material generating the particles, it is much easier to determine with certainty whether the oscillations are real or not.

Actually, the fission process of the nuclear plants is different from the process in the sun in that the nuclear material breaks apart to form two smaller atoms, plus an electron and an antineutrino (the antimatter equivalent of a neutrino). But matter and antimatter are thought to be mirror-images of each other, so the study of antineutrinos from the beta-decays of the nuclear power plants should be exactly the same as a study of neutrinos.

"This is really a clear demonstration of neutrino disappearance," says McKeown. "Granted, the laboratory is pretty big--it's Japan--but at least the experiment doesn't require the observer to puzzle over the composition of astrophysical sources.

"Willy Fowler [the late Nobel Prize-winning Caltech physicist] always said it's better to know the physics to explain the astrophysics, rather than vice versa," McKeown says. "This experiment allows us to study the neutrino in a controlled experiment."

The results announced this week are based on 145 days of data. The researchers detected 54 events during that time (an event being a collision of an antineutrino with a proton to form a neutron and a positron, ultimately producing a flash of light that could be measured with photon detectors). Theory predicted that about 87 antineutrinos would have been seen during that time if no oscillations occurred, but about 54 events, given the reactors' average distance of 175 kilometers, if the oscillation is a real phenomenon.

According to McKeown, the experiment will run about three to five years, with experimentalists ultimately collecting data for several hundred events. The additional information should provide very accurate measurements of the energy spectrum predicted by theory when the neutrinos oscillate.

The experiment may also catch neutrinos if any supernovae occur in our galaxy, as well as neutrinos from natural events in Earth's interior.

In addition to McKeown's team at Caltech's Kellogg Radiation Lab, other partners in the study include the Research Center for Neutrino Science at Tohoku University in Japan, the University of Alabama, the University of California at Berkeley and the Lawrence Berkeley National Laboratory, Drexel University, the University of Hawaii, the University of New Mexico, Louisiana State University, Stanford University, the University of Tennessee, Triangle Universities Nuclear Laboratory, and the Institute of High Energy Physics in Beijing.

The project is supported in part by the U.S. Department of Energy.



Jill Perry

