The Martian polar caps are almost entirely water ice, Caltech research shows

For future Martian astronauts, finding a plentiful water supply may be as simple as grabbing an ice pick and getting to work. California Institute of Technology planetary scientists studying new satellite imagery think that the Martian polar ice caps are made almost entirely of water ice—with just a smattering of frozen carbon dioxide, or "dry ice," at the surface.

Reporting in the February 14 issue of the journal Science, Caltech planetary science professor Andy Ingersoll and his graduate student, Shane Byrne, present evidence that the decades-old model of the polar caps being made of dry ice is in error. The model dates back to 1966, when the first Mars spacecraft determined that the Martian atmosphere was largely carbon dioxide.

Scientists at the time argued that the ice caps themselves were solid dry ice and that the caps regulate the atmospheric pressure by evaporation and condensation. Later observations by the Viking spacecraft showed that the north polar cap contained water ice underneath its dry ice covering, but experts continued to believe that the south polar cap was made of dry ice.

However, recent high-resolution and thermal images from the Mars Global Surveyor and Mars Odyssey, respectively, show that the old model cannot be accurate. The high-resolution images show flat-floored, circular pits at the south polar cap that are eight meters deep and 200 to 1,000 meters in diameter, and that are growing outward at about one to three meters per year. Further, infrared measurements from the newly arrived Mars Odyssey show that the lower material heats up in the Martian summer, as water ice is expected to do, and that the polar cap is too warm to be dry ice.

Based on this evidence, Byrne (the lead author) and Ingersoll conclude that the pitted layer is dry ice, but the material below, which makes up the floors of the pits and the bulk of the polar cap, is water ice.

This shows that the south polar cap is actually similar to the north polar cap, which, on the basis of Viking data, was determined to lose its one-meter covering of dry ice each summer, exposing the water ice underneath. The new results show that the difference between the two poles is that the south pole's dry-ice cover is thicker—about eight meters rather than one—and does not disappear entirely during the summertime.

Although the results show that future astronauts may not be obliged to haul their own water to the Red Planet, the news is paradoxically negative for the visionary plans often voiced for "terraforming" Mars in the distant future, Ingersoll says.

"Mars has all these flood and river channels, so one theory is that the planet was once warm and wet," Ingersoll says, explaining that a large amount of carbon dioxide in the atmosphere is thought to be the logical way to have a "greenhouse effect" that captures enough solar energy for liquid water to exist.

"If you wanted to make Mars warm and wet again, you'd need carbon dioxide, but there isn't nearly enough if the polar caps are made of water," Ingersoll adds. "Of course, terraforming Mars is wild stuff and is way in the future; but even then, there's the question of whether you'd have more than a tiny fraction of the carbon dioxide you'd need."

This is because the total mass of dry ice is only a few percent of the atmosphere's mass and thus is a poor regulator of atmospheric pressure, since it gets "used up" during warmer climates. For example, when Mars's spin axis is tipped closer to its orbit plane, which is analogous to a warm interglacial period on Earth, the dry ice evaporates entirely, but the atmospheric pressure remains almost unchanged.

The findings present a new scientific mystery to those who thought they had a good idea of how the atmospheres of the inner planets compared to each other. Planetary scientists have assumed that Earth, Venus, and Mars are similar in the total carbon dioxide content, with Earth having most of its carbon dioxide locked up in marine carbonates and Venus's carbon dioxide being in the atmosphere and causing the runaway greenhouse effect. By contrast, the eight-meter layer on the south polar ice cap on Mars means the planet has only a small fraction of the carbon dioxide found on Earth and Venus.

The new findings further pose the question of how Mars could have been warm and wet to begin with. Working backward, one would assume that there was once a sufficient amount of carbon dioxide in the atmosphere to trap enough solar energy to warm the planet, but there's simply not enough carbon dioxide for this to clearly have been the case.

"There could be other explanations," Byrne says. "It could be that Mars was a cold, wet planet; or it could be that the subterranean plumbing would allow for liquid water to be sealed off underneath the surface."

In one such scenario, perhaps the water flowed underneath a layer of ice and formed the channels and other erosion features. Then, perhaps, the ice sublimated away, to be eventually redeposited at the poles.

At any rate, Ingersoll and Byrne say that finding the missing carbon dioxide, or accounting for its absence, is now a major goal of Mars research.

Contact: Robert Tindol (626) 395-3631

 

Writer: 
RT

Caltech, Italian Scientists Find Human Longevity Marker

"A very short one," replied Jeanne Calment of France, the oldest known living person in 1995 and then 120 years old, when asked what sort of future she anticipated having. Quoted in Newsweek magazine, March 6, 1995.

PASADENA, Calif. – Even though Jeanne Louise Calment died in 1997 at the age of 122, we envy her longevity. Better, perhaps, to envy her maternal lineage, suggest scientists at the California Institute of Technology.

In a study of unrelated people who have lived for a century or more, the researchers found that the centenarians had something in common: they were five times more likely than the general population to carry the same mutation in their mitochondrial DNA (mtDNA).

That mutation, the researchers suggest, may provide a survival advantage by speeding mtDNA replication, thereby increasing its amount or replacing the portion of mtDNA that has been battered by the ravages of aging.

The study was conducted by Jin Zhang, Jordi Asin Cayuela, and Yuichi Michikawa, postdoctoral scholars; Jennifer Fish, a research scientist; and Giuseppe Attardi, the Grace C. Steele Professor of Molecular Biology, all at Caltech, along with colleagues from the Universities of Bologna and Calabria in Italy, and the Italian National Research Center on Aging. It appears in the February 4 issue of the Proceedings of the National Academy of Sciences, and online at the PNAS website (http://www.pnas.org/).

Mitochondrial DNA is the portion of the cell DNA that is located in mitochondria, the organelles which are the "powerhouses" of the cell. These organelles capture the energy released from the oxidation of metabolites and convert it into ATP, the energy currency of the cell. Mitochondrial DNA passes only from mother to offspring. Every human cell contains hundreds, or, more often, thousands of mtDNA molecules.

It's known that mtDNA has a high mutation rate. Such mutations can be harmful, beneficial, or neutral. In 1999, Attardi and other colleagues found what Attardi described as a "clear trend" in mtDNA mutations in individuals over the age of 65. In fact, in the skin cells the researchers examined, they found that up to 50 percent of the mtDNA molecules had been mutated.

Then, in another study two years ago, Attardi and colleagues found four centenarians who shared a genetic change in the so-called main control region of mtDNA. Because this region controls DNA replication, that observation raised the possibility that some mutations may extend life.

Now, by analyzing mtDNA isolated from a group of Italian centenarians, the researchers have found a common mutation in the same main control region. Looking at mtDNA in white blood cells of a group of 52 Italians between the ages of 99 and 106, they found that 17 percent had a specific mutation called the C150T transition. That frequency compares to only 3.4 percent of 117 people under the age of 99 who shared the same C150T mutation.
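
The ratio of the two frequencies just quoted is what yields the fivefold figure mentioned earlier:

$$ \frac{17\%}{3.4\%} = 5. $$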

To probe whether the mutation is inherited, the team studied skin cells collected from the same individuals between 9 and 19 years apart. In some, both samples showed that the mutation already existed, while in others, it either appeared or became more abundant during the intervening years. These results suggest that some people inherit the mutation from their mother, while others acquire it during their lifetime.

Confirmation that the C150T mutation can be inherited was obtained by looking at mtDNA samples from 20 monozygotic (that is, derived from a single egg) twins and 18 dizygotic (from separate eggs) twins between 60 and 75 years of age. To their surprise, the investigators found that 30 percent of the monozygotic twins and 22 percent of the dizygotic twins shared the C150T mutation.

"The selection of the C150T mutation in centenarians suggests that it may promote survival," says Attardi. "Similarly, it may protect twins early in life from the effects of fetal growth restriction and the increased mortality associated with twin births.

"We found the mutation shifts the site at which mtDNA starts to replicate, and perhaps that may accelerate its replication, possibly, allowing the lucky individual to replace damaged molecules faster." Attardi says the study is the first to show a robust difference in an identified genetic marker between centenarians and younger folks. Their next goal, he says, is to find the exact physiological effect of this particular mutation.

The researchers who contributed to the paper in Italy were Massimiliano Bonafe, Fabiola Olivieri, Giuseppe Passarino, Giovanna De Benedictis, and Claudio Franceschi.

Contact: Mark Wheeler (626) 395-8733 wheel@caltech.edu

Visit the Caltech Media Relations Website at http://pr.caltech.edu/media

###

Writer: 
MW

Nanodevice breaks 1-GHz barrier

Nanoscientists have achieved a milestone in their burgeoning field by creating a device that vibrates a billion times per second, or at one gigahertz (1 GHz). The accomplishment further increases the likelihood that tiny mechanical devices working at the quantum level can someday supplement electronic devices for new products.

Reporting in the January 30 issue of the journal Nature, California Institute of Technology professor of physics, applied physics, and bioengineering Michael Roukes and his colleagues from Caltech and Case Western Reserve University demonstrate that the tiny mechanism operates at microwave frequencies. The device is a prototype and not yet developed to the point that it is ready to be integrated into a commercial application; nevertheless, it demonstrates the progress being made in the quest to turn nanotechnology into a reality—that is, to make useful devices whose dimensions are less than a millionth of a meter.

This latest effort in the field of NEMS, which is an acronym for "nanoelectromechanical systems," is part of a larger, emerging effort to produce mechanical devices for sensitive force detection and high-frequency signal processing. According to Roukes, the technology could also have implications for new and improved biological imaging and, ultimately, for observing individual molecules through an improved approach to magnetic resonance spectroscopy, as well as for a new form of mass spectrometry that may permit single molecules to be "fingerprinted" by their mass.

"When we think of microelectronics today, we think about moving charges around on chips," says Roukes. "We can do this at high rates of speed, but in this electronic age our mind-set has been somewhat tyrannized in that we typically think of electronic devices as involving only the movement of charge.

"But since 1992, we've been trying to push mechanical devices to ever-smaller dimensions, because as you make things smaller, there's less inertia in getting them to move. So the time scales for inducing mechanical response go way down."

Though a good home computer these days can have a speed of one gigahertz or more, the quest to construct a mechanical device that can operate at such speeds has required multiple breakthroughs in manufacturing technology. In the Roukes group's new demonstration, two advances were crucial: the use of silicon carbide epilayers, which allows layer thickness to be controlled to atomic dimensions, and a balanced high-frequency technique for sensing motion that effectively transfers signals to macroscale circuitry. Both advances were pioneered in the Roukes lab.

Grown on silicon wafers, the films used in the work are prepared in such a way that the end products are two nearly identical beams 1.1 microns long, 120 nanometers wide, and 75 nanometers thick. When driven by a microwave-frequency electric current while exposed to a strong magnetic field, the beams mechanically vibrate at slightly more than one gigahertz.
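
As a rough plausibility check (not a calculation from the Nature paper), the fundamental flexural frequency of a doubly clamped beam can be estimated from textbook beam theory. In the sketch below, the silicon carbide material constants and the choice of the 75-nanometer dimension as the bending direction are illustrative assumptions.

    # Rough estimate of the fundamental frequency of a doubly clamped beam
    # (Euler-Bernoulli theory, no built-in tension):
    #     f = 1.03 * (t / L**2) * sqrt(E / rho)
    # The silicon carbide material constants are assumed, not from the paper.
    from math import sqrt

    E = 430e9      # Young's modulus of SiC, Pa (assumed)
    rho = 3200.0   # density of SiC, kg/m^3 (assumed)
    L = 1.1e-6     # beam length, m (from the article)
    t = 75e-9      # beam thickness, m (from the article; assumed bending direction)

    f = 1.03 * (t / L**2) * sqrt(E / rho)
    print(f"estimated fundamental frequency: {f / 1e9:.2f} GHz")
    # Roughly 0.7 GHz with these numbers; built-in tension, the metal layer,
    # and the exact mode can shift a real device's resonance above 1 GHz.

Even this simple estimate lands close to a gigahertz, illustrating why shrinking the beam dimensions is such an effective route to higher mechanical frequencies.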

Future work will include improving the nanodevices to better link their mechanical function to real-world applications, Roukes says. The issue of communicating information, or measurements, from the nanoworld to the everyday world we live in is by no means trivial. As devices become smaller, it becomes increasingly difficult to detect the very small displacements involved, which occur on ever shorter time scales.

Progress with nanoelectromechanical systems working at microwave frequencies offers the potential for improving magnetic resonance imaging to the extent that individual macromolecules could be imaged. This would be especially important in furthering the understanding of the relationship between, for example, the structure and function of proteins. The devices could also be used in a novel form of mass spectrometry, for sensing individual biomolecules in fluids, and perhaps for realizing solid-state manifestations of the quantum bit that could be exploited for future devices such as quantum computers.

The coauthors of the paper are Xue-Ming (Henry) Huang, a graduate student in physics at Caltech; and Chris Zorman and Mehran Mehregany, both engineering professors at Case Western Reserve University.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Research shows that shear force of blood flow is crucial to embryonic heart development

In a triumph of bioengineering, an interdisciplinary team of California Institute of Technology researchers has imaged the blood flow inside the heart of a growing embryonic zebrafish. The results demonstrate for the first time that the very action of high-velocity blood flowing over cardiac tissue is an important factor in the proper development of the heart—a result that could have profound implications for future surgical techniques and even for genetic engineering.

In the January 9, 2003 issue of the journal Nature, the investigators report on two interrelated advances in their work on Danio rerio, an animal reaching only two inches in length as an adult but a model of choice for research in genetic and developmental biology. First, the team was able to get very-high-resolution motion video, through the use of confocal microscopy, of the tiny beating hearts, which are smaller in diameter than a human hair. Second, by surgically blocking the flow of blood through the hearts, the researchers were able to demonstrate that a reduction in "shear stress," or the friction imposed by a flowing fluid on adjacent cells, causes the growing heart to develop abnormally.

The result is especially important, says co-lead author Jay Hove, because it shows that more detailed studies of the effect of shear force might be exploited in the treatment of human heart disease. Because diseases such as congestive heart failure are known to cause the heart to enlarge due to constricted blood flow, a better understanding of the precise mechanisms of the blood flow could perhaps lead to advanced treatments to counteract the enlargement.

Also, Hove says, a better understanding of genetic factors involving blood flow in the heart—a future goal of the team's research—could eventually be exploited in the diagnosis of prenatal heart disease for early surgical correction, or even genetic intervention.

Hove, a bioengineer, along with Liepmann Professor of Aeronautics and Bioengineering Morteza Gharib, teamed with Scott Fraser, who is Rosen Professor of Biology, and Reinhardt Köster, a postdoctoral scholar in Fraser's lab, to study the heart development of zebrafish. Gharib, a specialist on fluid flow, has worked on heart circulation in the past, and Fraser is a leading authority on the imaging of cellular development in embryos. The new results are thus an interdisciplinary marriage of the fields of engineering, biology, and optics.

"Our research shows that the shape of the heart can be changed during the embryonic stage," says Hove. "The results invite us to consider whether this can be related to the roots of heart failure and heart disease."

The researchers focused their efforts on the zebrafish because the one-millimeter eggs and the embryos inside them are nearly transparent. With the addition of a special chemical to further block the formation of pigment, the team was able to perform a noninvasive, in vivo "optical dissection." To do this, they used a technique known as confocal microscopy, which images one thin layer of tissue at a time. The images are two-dimensional, but they can be "stacked" for a three-dimensional reconstruction.
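
The "stacking" step is conceptually simple: the microscope records one two-dimensional optical section per focal depth, and the sections are assembled along a depth axis. A minimal sketch of that assembly, using synthetic placeholder slices rather than actual microscope data, might look like this:

    # Minimal sketch of assembling confocal optical sections into a 3-D volume.
    # The slices here are synthetic placeholders standing in for one 2-D image
    # per focal depth from the microscope.
    import numpy as np

    n_slices, height, width = 40, 256, 256    # hypothetical stack geometry
    slices = [np.random.rand(height, width) for _ in range(n_slices)]

    volume = np.stack(slices, axis=0)          # shape: (depth, y, x)
    print(volume.shape)                        # (40, 256, 256)

    # Any plane through the volume can then be examined, for example a
    # vertical cross-section through the middle of the stack:
    cross_section = volume[:, height // 2, :]  # shape: (depth, x)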

Concentrating on two groups of embryos—one group 36 hours after fertilization and the other at about four days—the researchers discovered that their deliberate interference with the blood flow through the use of carefully placed beads had a profound effect on heart development. When the shear force was reduced by 90 percent, the tiny hearts did not form valves properly, nor did they "loop," or form an outflow track properly.

Because the early development of an embryonic heart is thought to proceed through several nearly identical stages for all vertebrates, the researchers say the effect should also hold true for human embryos. In effect, the research demonstrates that the shear force should also be a fundamental influence on the formation of the various structures of the human heart.

The next step for the researchers is to attempt to regulate the restriction of shear force through new techniques to see how slight variations affect structural development, and to look at how gene expression is involved in embryonic heart development. "What we learn will give us directions to go and questions to ask about other vertebrates, particularly human beings," Hove says.

In addition to the lead authors Hove and Köster and professors Gharib and Fraser, the team also included Caltech students Arian S. Forouhar and Gabriel Acevedo-Bolton.

The paper is available on the Nature Web site at http://www.nature.com/nature/links/030109/030109-1.html

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Caltech, UCLA Researchers Create a New Gene Therapy for Treatment of HIV

PASADENA, Calif.— California Institute of Technology and UCLA researchers have developed a new gene therapy that is highly effective in preventing the HIV virus from infecting individual cells in the immune system. The technique, while not curative, could serve as a significant new treatment for people already infected by reducing the number of HIV-infected cells in their bodies.

Also, the new approach could be used to fight other diseases resulting from gene malfunctions, including cancer.

Reporting in the current issue of the Proceedings of the National Academy of Sciences (PNAS), Caltech biologist David Baltimore and his UCLA collaborators announce that the new technique works by using a disabled version of the AIDS virus as a sort of "Trojan horse" to get a disruptive agent inside the human T-cells, thereby reducing the likelihood that a potent HIV virus will be able to successfully invade the cell. Early laboratory results show that more than 80 percent of the T-cells may be protected.

"To penetrate a cell, HIV needs two receptors that operate like doorknobs and allow the virus inside," says Baltimore, who is president of Caltech. "HIV grabs the receptor and forces itself into the cell. If we can knock out one of these receptors, we hope to prevent HIV from infecting the cell."

The receptors in question are called the CCR5 and the CD4. The human immune system can't get along without the CD4, but about 1 percent of the Caucasian population is born without the CCR5. In fact, these people are known to have a natural immunity to AIDS.

Therefore, the researchers' strategy was to disrupt the CCR5 receptor. They did this by introducing a special double-stranded RNA known as "small interfering RNA," or siRNA, into the T-cell. To do so, they engineered a disabled HIV virus to carry the siRNA into the T-cell. Thus, the T-cell was invaded, but the disabled virus has no ability to cause disease. Once inside the T-cell, the siRNA knocks out the CCR5 receptor.

Laboratory results show that human T-cells thus protected are then quite resistant to infection by the HIV virus. When the T-cells were put in a petri dish and exposed to HIV, less than 20 percent of the cells were actually infected.

"Synthetic siRNAs are powerful tools," says Irvin S.Y. Chen, one of the authors of the paper and director of the UCLA AIDS Institute. "But scientists have been baffled at how to insert them into the immune system in stable form. You can't just sprinkle them on the cells."

The other two authors of the paper are Xiao-Feng Qin, a postdoctoral researcher at Caltech; and Dong Sung An, a postdoctoral researcher at UCLA. The two contributed equally to the work.

The technique should become a significant new means of treating people already infected with HIV, Baltimore and Chen say.

"Our findings raise the hope that we can use this approach or combine it with drugs to treat HIV in people—particularly in persons who have not experienced good results with other forms of treatment," says Baltimore.

The technique can also potentially be used for other diseases when a specific gene needs to be knocked out, such as the malfunctioning genes associated with cancer, Chen says. "We can easily make siRNAs and use the carrier to deliver them into different cell types to turn off a gene malfunction," he says.

In addition, the technique could be used to prevent certain microorganisms from invading the body, Baltimore adds.

The research is supported by the National Institute of Allergy and Infectious Diseases and the Damon Runyon-Walter Winchell Fellowship.

[Note to editors: UCLA is also issuing a news release on this research. Contact Elaine Schmidt at (310) 794-2272; elaines@support.ucla.edu]

Writer: 
Robert Tindol

Clouds discovered on Saturn's moon Titan

Teams of astronomers at the California Institute of Technology and at the University of California, Berkeley, have discovered methane clouds near the south pole of Titan, resolving a fierce debate about whether clouds exist amid the haze of the moon's atmosphere.

The new observations were made using the W. M. Keck II 10-meter and the Gemini North 8-meter telescopes atop Hawaii's Mauna Kea volcano in December 2001. Both telescopes are outfitted with adaptive optics that provide unprecedented detail of features not seen even by the Voyager spacecraft during its flyby of Saturn and Titan.

The results are being published by the Caltech team in the December 19 issue of Nature and by the UC Berkeley and NASA Ames team in the December 20 issue of the Astrophysical Journal.

Titan is Saturn's largest moon, larger than the planet Mercury, and is the only moon in our solar system with a thick atmosphere. Like Earth's atmosphere, the atmosphere on Titan is mostly nitrogen. Unlike Earth, Titan is inhospitable to life due to the lack of atmospheric oxygen and its extremely cold surface temperatures (-183 degrees Celsius, or -297 degrees Fahrenheit). Along with nitrogen, Titan's atmosphere contains a significant amount of methane.

Earlier spectroscopic observations hinted at the existence of clouds on Titan, but gave no clue as to their location. These early data were hotly debated, since Voyager spacecraft measurements of Titan appeared to show a calm and cloud-free atmosphere. Furthermore, previous images of Titan had failed to reveal clouds, finding only unchanging surface markings and very gradual seasonal changes in the haziness of the atmosphere.

Improvements in the resolution and sensitivity achievable with ground-based telescopes led to the present discovery. The observations used adaptive optics, in which a flexible mirror rapidly compensates for the distortions caused by turbulence in Earth's atmosphere, the same distortions that cause the well-known twinkling of the stars. With adaptive optics, details as small as 300 kilometers across can be distinguished at Titan's enormous distance of 1.3 billion kilometers, the equivalent of reading an automobile license plate from 100 kilometers away.
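
The license-plate comparison follows from simple angular-size arithmetic (a rough consistency check using the figures quoted above and the small-angle approximation):

$$ \theta \approx \frac{300\ \mathrm{km}}{1.3\times10^{9}\ \mathrm{km}} \approx 2.3\times10^{-7}\ \mathrm{rad}, \qquad 2.3\times10^{-7} \times 100\ \mathrm{km} \approx 2\ \mathrm{cm}, $$

which is roughly the size of the fine detail in a license plate's characters.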

The images presented by the two teams clearly show bright clouds near Titan's south pole.

"We see the intensity of the clouds varying over as little as a few hours," said post-doctoral fellow Henry Roe, lead author for the UC Berkeley group. "The clouds are constantly changing, although some persist for as long as a few days."

Titan experiences seasons much like Earth, though its year is 30 times longer due to Saturn's distant orbit from the sun. Titan is currently in the midst of southern summer, and the south pole has been in continuous sunlight for over six Earth years. The researchers believe that this fact may explain the location of the large clouds.

"These clouds appear to be similar to summer thunderstorms on Earth, but formed of methane rather than water. This is the first time we have found such a close analogy to the Earth's atmospheric water cycle in the solar system," says Antonin Bouchez, one of the Caltech researchers.

In addition to the clouds above Titan's south pole, the Keck images, like previous data, reveal the bright continent-sized feature that may be a large icy highland on Titan's surface, surrounded by linked dark regions that are possibly ethane seas or tar-covered lowlands.

"These are the most spectacular images of Titan's surface which we've seen to date," says Michael Brown, associate professor of planetary astronomy and lead author of the Caltech paper. "They are so detailed that we can almost begin to speculate about Titan's geology, if only we knew for certain what the bright and dark regions represented."

In 2004, Titan will be visited by NASA's Cassini spacecraft, which will look for clouds on Titan during its multiyear mission around Saturn. "Changes in the spatial distribution of these clouds over the next Titan season will help pin down their detailed formation process," says Imke de Pater, professor of astronomy at UC Berkeley. The Cassini mission includes a probe named Huygens that will descend by parachute into Titan's atmosphere and land on the surface near the edge of the bright continent.

The team conducting the Gemini observations consists of Roe and de Pater from UC Berkeley, Bruce A. Macintosh of Lawrence Livermore National Laboratory, and Christopher P. McKay of the NASA Ames Research Center. The team reporting results from the Keck telescope consists of Brown and Bouchez of Caltech and Caitlin A. Griffith of the University of Arizona.

The Gemini observatory is operated by the Association of Universities for Research in Astronomy under a cooperative agreement with the National Science Foundation, involving NOAO/AURA/NSF as the U.S. partner. The W.M. Keck Observatory is operated by the California Association for Research in Astronomy, a scientific partnership between the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. This research has been funded in part by grants from NSF and NASA.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

New Theory Accounts for Existence of Binaries in Kuiper Belt

PASADENA, Calif.--In the last few years, researchers have discovered more than 500 objects in the Kuiper belt, a gigantic outer ring in the outskirts of the solar system, beyond the orbit of Neptune. Of these, seven so far have turned out to be binaries--two objects that orbit each other. The surprise is that these binaries all seem to be pairs of widely separated objects of similar size. This is surprising because more familiar pairings, such as the Earth/moon system, tend to be unequal in size and/or rather close together.

To account for these oddities, scientists from the California Institute of Technology have devised a theory of Kuiper belt binary formation. Their work is published in the December 12 issue of the journal Nature.

According to Re'em Sari, a senior research fellow at Caltech, the theory will be tested in the near future as additional observations of Kuiper belt objects are obtained and additional binaries are discovered. The other authors of the paper are Peter Goldreich, DuBridge Professor of Astrophysics and Planetary Physics at Caltech; and Yoram Lithwick, now a postdoc at UC Berkeley.

"The binaries we are more familiar with, like the Earth/moon system, resulted from collisions that ejected material," says Sari. "That material coalesced to form the smaller body. Then the interaction between the spin of the larger body and the orbit of the smaller body caused them to move farther and farther apart."

"This doesn't work for the Kuiper belt binaries," Sari says. "They are too far away from each other to have ever had enough spin for this effect to take place." The members of the seven binaries are about 100 kilometers in radius, but 10,000 to 100,000 kilometers from each other. Thus their separations are 100 to 1,000 times their radii. By contrast, Earth is about 400,000 kilometers from the moon, and about 6,000 kilometers in radius. Even at a distance of 60 times the radius of Earth, the tidal mechanism works only because the moon is so much less massive than Earth.

Sari and his colleagues think the explanation is that the Kuiper belt bodies tend to get closer together as time goes on -- exactly the reverse of the situation with the planets and their satellites, where the separations tend to increase. "The Earth/moon system evolves 'inside-out', but the Kuiper belt binaries evolved 'outside-in,'" explains Sari.

Individual objects in the Kuiper belt are thought to have formed in the early solar system by accretion of smaller objects. The region where the gravitational influence of a body dominates over the tidal forces of the sun is known as its Hill sphere. For a 100-kilometer body located in the Kuiper belt, this extends to about a million kilometers. Large bodies can accidentally pass through one another's Hill spheres. Such encounters last a couple of centuries and, if no additional process is involved, the "transient binary" dissolves, and the two objects continue on separate orbits around the sun. The transient binary must lose energy to become bound. The researchers estimate that in about 1 in 300 encounters, a third large body would have absorbed some of the energy and left a bound binary. An additional mechanism for energy loss is gravitational interaction with the sea of small bodies from which the large bodies were accreting. This interaction slows down the large bodies. Once in every 30 encounters, they slowed down sufficiently to become bound.
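
As a rough illustration of the Hill-sphere scale quoted above, the sketch below evaluates the standard expression r_H = a (m / 3 M_sun)^(1/3) for a body about 100 kilometers in radius; the assumed density and orbital distance are illustrative values, not numbers taken from the paper.

    # Rough Hill-sphere radius for a Kuiper belt object:
    #     r_H = a * (m / (3 * M_sun))**(1/3)
    # The density and orbital distance below are illustrative assumptions.
    from math import pi

    AU = 1.496e11          # meters
    M_SUN = 1.989e30       # kilograms
    a = 43 * AU            # assumed Kuiper belt orbital distance
    body_radius = 100e3    # meters (a body about 100 kilometers in radius)
    density = 1000.0       # kg/m^3, ice-like (assumed)

    mass = (4.0 / 3.0) * pi * body_radius**3 * density
    r_hill = a * (mass / (3.0 * M_SUN)) ** (1.0 / 3.0)
    print(f"Hill radius: {r_hill / 1e3:,.0f} km")
    # Several hundred thousand kilometers with these assumptions, the same
    # order as the "about a million kilometers" quoted above; a larger or
    # denser body pushes the number higher.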

Starting with a widely separated binary, roughly a million kilometers apart, continued interaction with the sea of small objects would have led to additional loss of energy, tightening the binary. The time required for the formation of individual objects is sufficient for a binary orbit to shrink all the way to contact. Indeed, the research predicts that most binaries coalesced in this manner or at least became very tight. But if the binary system formed relatively late, close to the time that accretion in the Kuiper belt ceased, a widely separated binary would survive. These are the objects we observe today. By this mechanism it can be predicted that about 5 percent of objects retain a large enough separation to be observed as binaries. The prediction is in agreement with recent surveys conducted by Caltech associate professor of planetary astronomy Mike Brown. The majority of objects ended up as tighter binaries. Their images cannot be distinguished from those of isolated objects when observed from Earth using existing instruments.

These ideas will be more thoroughly tested as additional objects are discovered and further data is collected. Further theoretical work could predict how the inclination of a binary orbit, relative to the plane of the solar system, evolves as the orbit shrinks. If it increases, this would suggest that the Pluto/Charon system, although tight, was also formed by the 'outside-in' mechanism, since it is known to have large inclination.

Writer: 
Robert Tindol

Earthbound experiment confirms theory accounting for sun's scarcity of neutrinos

PASADENA, Calif.- In the subatomic particle family, the neutrino is a bit like a wayward red-haired stepson. Neutrinos were detected long ago, and predicted to exist even longer ago, but everything physicists know about nuclear processes says there should be a certain number of neutrinos streaming from the sun, yet there are nowhere near enough.

This week, an international team has revealed that the sun's lack of neutrinos is a real phenomenon, probably explainable by conventional theories of quantum mechanics, and not merely an observational quirk or something unknown about the sun's interior. The team, which includes experimental particle physicist Robert McKeown of the California Institute of Technology, bases its observations on experiments involving nuclear power plants in Japan.

The project is referred to as KamLAND because the neutrino detector is located at the Kamioka mine in Japan. Properly shielded from radiation from background and cosmic sources, the detector is optimized for measuring the neutrinos from all 17 nuclear power plants in the country.

Neutrinos are produced in the nuclear fusion process, when two protons fuse together to form deuterium, a positron (in other words, the positively charged antimatter equivalent of an electron), and a neutrino. The deuterium nucleus remains nearby, while the positron eventually annihilates with an electron, destroying both. The neutrino, being very unlikely to interact with matter, streams away into space.

Therefore, physicists would normally expect neutrinos to flow from the sun in much the same way that photons flow from a light bulb. In the case of the light bulb, the photons (or bundles of light energy) are thrown out radially and evenly, as if the surface of a surrounding sphere were being illuminated. And because the surface area of a sphere increases by the square of the distance, an observer standing 20 feet away sees only one-fourth the photons of an observer standing at 10 feet.
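
Written out, this is just the inverse-square law for the flux F seen at a distance r from a source of total output L:

$$ F(r) = \frac{L}{4\pi r^{2}}, \qquad \frac{F(20\ \mathrm{ft})}{F(10\ \mathrm{ft})} = \left(\frac{10}{20}\right)^{2} = \frac{1}{4}. $$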

Thus, observers on Earth expect to see a given number of neutrinos coming from the sun (assuming they know how many nuclear reactions are going on inside it), just as they expect to know the luminosity of a light bulb at a given distance if they know the bulb's wattage. But such has not been the case. Carefully constructed experiments for detecting the elusive neutrinos have shown that there are far fewer neutrinos than there should be.

A theoretical explanation for this neutrino deficit is that the neutrino "flavor" oscillates between the detectable "electron" neutrino type, and the much heavier "muon" neutrino and maybe even the "tau" neutrino, neither of which can be detected. Utilizing quantum mechanics, physicists estimate that the number of detectable electron neutrinos is constantly changing in a steady rhythm from 100 percent down to a small percentage and back again.

Therefore, the theory says that the reason we see only about half as many neutrinos from the sun as we should be seeing is because, outside the sun, about half the electron neutrinos are at that moment one of the undetectable flavors.
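
In the simplest two-flavor picture (a textbook expression, not the specific analysis used by the KamLAND team), the probability that an electron neutrino of energy E is still detectable as an electron neutrino after traveling a distance L is

$$ P(\nu_e \rightarrow \nu_e) = 1 - \sin^{2}(2\theta)\,\sin^{2}\!\left(\frac{1.27\,\Delta m^{2}\,[\mathrm{eV}^{2}]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right), $$

where θ is the mixing angle between the flavors and Δm² is the difference of their squared masses. The oscillating second term is what drives the detectable fraction down from 100 percent and back again in the steady rhythm described above.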

The triumph of the KamLAND experiment is that physicists for the first time can observe neutrino oscillations without making assumptions about the properties of the source of neutrinos. Because the nuclear power plants have a very precisely known amount of material generating the particles, it is much easier to determine with certainty whether the oscillations are real or not.

Actually, the fission process of the nuclear plants is different from the process in the sun in that the nuclear material breaks apart to form two smaller atoms, plus an electron and an antineutrino (the antimatter equivalent of a neutrino). But matter and antimatter are thought to be mirror-images of each other, so the study of antineutrinos from the beta-decays of the nuclear power plants should be exactly the same as a study of neutrinos.

"This is really a clear demonstration of neutrino disappearance," says McKeown. "Granted, the laboratory is pretty big (it's Japan), but at least the experiment doesn't require the observer to puzzle over the composition of astrophysical sources.

"Willy Fowler [the late Nobel Prize-winning Caltech physicist] always said it's better to know the physics to explain the astrophysics, rather than vice versa," McKeown says. "This experiment allows us to study the neutrino in a controlled experiment."

The results announced this week are taken from 145 days of data. The researchers detected 54 events during that time (an event being a collision of an antineutrino with a proton to form a neutron and a positron, ultimately resulting in a flash of light that could be measured with photon detectors). Theory predicted that about 87 events would have been seen during that time if no oscillations occurred, but only about 54, given an average source distance of 175 kilometers, if the oscillation is a real phenomenon.

According to McKeown, the experiment will run about three to five years, with experimentalists ultimately collecting data for several hundred events. The additional information should provide very accurate measurements of the energy spectrum predicted by theory when the neutrinos oscillate.

The experiment may also catch neutrinos if any supernovae occur in our galaxy, as well as neutrinos from natural events in Earth's interior.

In addition to McKeown's team at Caltech's Kellogg Radiation Lab, other partners in the study include the Research Center for Neutrino Science at Tohoku University in Japan, the University of Alabama, the University of California at Berkeley and the Lawrence Berkeley National Laboratory, Drexel University, the University of Hawaii, the University of New Mexico, Louisiana State University, Stanford University, the University of Tennessee, Triangle Universities Nuclear Laboratory, and the Institute of High Energy Physics in Beijing.

The project is supported in part by the U.S. Department of Energy.

 

 

Writer: 
Jill Perry

Caltech Professor to Explore Abrupt Climate Changes

PASADENA, Calif.—By analyzing stalagmites from caves in Sarawak, which is the Malaysian section of Borneo and the location of one of the world's oldest rain forests, and by studying deep-sea corals from the North Atlantic Ocean, California Institute of Technology researcher Jess Adkins will explore the vital link between the deep ocean, the atmosphere, and abrupt changes in global climates.

The project, "Linking the Atmosphere and the Deep Ocean during Abrupt Climate Changes," is funded by the Comer Science and Educational Foundation.

Because the Sarawak stalagmites and the deep-sea corals are uranium rich and can be dated precisely, and because they both accumulate continuously, uninterrupted by "bioturbation," the biological process that mixes the upper several centimeters of ocean sediments, they provide unique archives of climate history. By utilizing these archives, Adkins and his research group will be able to chart and link major climate variables, and thereby provide critical insight into understanding rapid climate changes that could impact the earth.

Adkins, an assistant professor of geochemistry and global environmental science, joined Caltech in 2000. He received his PhD in 1998 from the Massachusetts Institute of Technology/Woods Hole Oceanographic Institution joint program.

The Comer Science and Education Foundation was established to promote education and discovery through scientific exploration.

Contact: Deborah Williams-Hedges (626) 395-3227 debwms@caltech.edu

Visit the Caltech Media Relations Web site at: http://pr.caltech.edu/media

###

Writer: 
DWH

New study describes workings of deep ocean during the Last Glacial Maximum

Scientists know quite a bit about surface conditions during the Last Glacial Maximum (LGM), a period that peaked about 18,000 years ago, when ice covered significant portions of Canada and northern Europe.

But to really understand the mechanisms involved in climate change, scientists need to have detailed knowledge of the interaction between the ocean and the atmosphere. And until now, a key component of that knowledge has been lacking for the LGM because of limited understanding of the glacial deep ocean.

In a paper published in the November 29 issue of the journal Science, researchers from the California Institute of Technology and Harvard University report the first measurements of the temperature-salinity distribution of the glacial deep ocean. The results unexpectedly show that the basic mechanism setting that distribution was different during glacial times.

"You can think of the global ocean as a big bathtub, with the densest water at bottom and the lightest at top," explains Jess Adkins, an assistant professor of geochemistry and global environmental science at Caltech and lead author of the paper. Because water that is cold or salty--or both--is dense, it tends to flow downward in a vertical circulation pattern, much like water falling down the sides of the bathtub, until it finds its correct density level. In the ocean today, this circulation mechanism tends to be dominated by the temperature of the water.

In studying chloride data from four Ocean Drilling Program sites, the researchers found that the glacial deep ocean's circulation was set by the salinity of the water. A person walking on the ocean bottom from north to south 18,000 years ago would have found that the water tended to get saltier as he proceeded (within an acceptable margin of error, the northern and southern waters were the same temperature), so the water in the north would have been less dense. The exact reverse is true today: the waters at high southern latitudes are very cold and relatively fresh, while those at high northern latitudes are warmer and saltier.

Adkins says there is a good explanation for the change. The seawater "equation of state" dictates that, near the freezing point, the density of water is about two to three times more sensitive to changes in salinity, relative to changes in temperature, than it is in today's warmer deep waters.
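
In a linearized form of the equation of state (a schematic relation; the coefficients themselves depend on temperature, salinity, and pressure), the fractional density change is

$$ \frac{\Delta\rho}{\rho_{0}} \;\approx\; -\,\alpha\,\Delta T \;+\; \beta\,\Delta S, $$

where α is the thermal-expansion coefficient and β is the haline-contraction coefficient. Near the freezing point α becomes small while β stays roughly constant, so the salinity term dominates; that is the two-to-three-fold shift in sensitivity described above.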

So the equation demands that the density layering of the ocean "bathtub" was set by the water's salt content at the Last Glacial Maximum. Temperature is still crucial, in that colder waters are more sensitive to salinity changes than warmer waters, but Adkins's results show that the deep-water circulation mechanism must have operated in a fundamentally different manner in the past.

"This observation of the deep ocean seems like a strange place to go to study Earth's climate, but this is where you find most of the mass and thermal inertia of the climate system," Adkins says.

The ocean's water temperature enters into the complex machinery of climate, with water moving around the globe as the ocean redistributes heat. The water and air also interact, further complicating the picture.

Thus, the results from the glacial deep ocean show that the climate in those days operated in a very different way, Adkins says. "Basically, the purpose of this study is to understand the mechanisms of climate change."

In addition to Adkins, the other authors are Katherine McIntyre, a postdoctoral scholar in geochemistry at Caltech; and Daniel P. Schrag of the Department of Earth and Planetary Sciences at Harvard University.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT
