Earth's water probably didn't come from comets, Caltech researchers say

PASADENA—A new Caltech study of comet Hale-Bopp suggests that comets did not give Earth its water, a finding that buttresses other recent studies but runs contrary to the longstanding belief of many planetary scientists.

In the March 18 issue of Nature, cosmochemist Geoff Blake and his team show that Hale-Bopp contains sizable amounts of "heavy water," which contains a heavier isotope of hydrogen called deuterium.

Thus, if Hale-Bopp is a typical comet, and if comets indeed gave Earth its water supply billions of years ago, then the oceans should have roughly the same proportion of deuterium as comets. In fact, the oceans have significantly less.
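
To get a rough sense of the comparison, the deuterium-to-hydrogen (D/H) ratio measured in comets such as Hale-Bopp is about twice the value found in seawater. The numbers in the minimal Python sketch below are round literature values, not figures taken from the Nature paper itself:

    # Illustrative comparison of deuterium-to-hydrogen (D/H) ratios.
    # Values are round literature numbers, not taken from the paper itself.
    COMET_D_H = 3.0e-4    # typical cometary value (Halley, Hyakutake, Hale-Bopp)
    OCEAN_D_H = 1.56e-4   # Earth's oceans (the VSMOW standard)

    ratio = COMET_D_H / OCEAN_D_H
    print(f"Comets are about {ratio:.1f}x richer in deuterium than seawater")
    # -> roughly 2x: if comets had delivered most of Earth's water,
    #    the oceans should not be this deuterium-poor.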

"An important question has been whether comets provided most of the water in Earth's oceans," says Blake, professor of cosmochemistry and planetary science at Caltech. "From the lunar cratering record, we know that, shortly after they were made, both the moon and Earth were bombarded by large numbers of asteroids or comets.

"Did one or the other dominate?"

The answer lies in the Blake team's measurement of a form of heavy water called HDO, which can be measured both in Earth's oceans using mass spectrometers and in comets with Caltech's Owens Valley Radio Observatory (OVRO) Millimeter Array. Just as radio waves go through clouds, millimeter waves easily penetrate the coma of a comet.

This is where cosmochemists can get a view of the makings of the comet billions of years ago, before the sun had even coalesced from an interstellar cloud. In fact, the millimeter-wave study of deuterium in water and in organic molecules in the jets emitted from the surface of the nucleus shows that Hale-Bopp is composed of 15 to 40 percent primordial material that existed before the sun formed.

The jets are quite small in extent, so the image clarity provided by the OVRO Millimeter Array was crucial in the current study. "Hale-Bopp came along at just the right time for our work," Blake says. "We didn't have all six telescopes in the array when Halley's comet passed by, and Hyakutake was a very small comet. Hale-Bopp was quite large, and so it was the first comet that could be imaged at high spatial and spectral resolution at millimeter wavelengths."

One other question that the current study indirectly addresses is the possibility that comets supplied Earth with the organic materials that contributed to the origin of life. While the study does not resolve the issue, neither does it eliminate the possibility.

Also involved in the Nature study are Charlie Qi, a graduate student in planetary science at Caltech; Michiel Hogerheijde of the UC Berkeley department of astronomy; Mark Gurwell of the Harvard-Smithsonian Center for Astrophysics; and Duane Muhleman, professor emeritus of planetary science at Caltech.

Writer: 
Robert Tindol

Caltech discovers genetic process for controlling plant characteristics

PASADENA—Caltech biologists have harnessed a gene communication network that controls the size and shape of a flowering land plant.

The discovery is a fundamental advancement in understanding the processes that make plants what they are. The knowledge could also lead to greater control over certain characteristics of plants such as fruit size and stem durability.

In the March 19 issue of the journal Science, Professor of Biology Elliot Meyerowitz and his colleagues explain how they have managed to control three genes found in the "shoot apical meristem." This structure is the source of all cells creating a plant's leaves, stems, and flowers, and is somewhat analogous to the stem cells in animals.

The shoot apical meristem, also known as SAM, begins as a portion of the seed comprising just a few hundred cells. Like stem cells, these cells are undifferentiated at first, but as the young organism develops, they diversify to create the cells that make up all the recognizable features. "These divide in highly specific patterns to make leaves and stems and flowers," says Meyerowitz, who specializes in the molecular biology of plants. "Everything you see above ground arises from these cells."

Working with the nondescript flowering plant known as Arabidopsis thaliana, the Meyerowitz team first cloned the genes that give the plant its appearance. These genes, known as CLV1 and CLV3, turned out to reveal a communication network that the plant uses to make its various parts.

Meyerowitz and his team discovered that the Arabidopsis plant tends to grow differently when the genes are disrupted. For example, the normal plant is about six inches in height with a thin, fragile stem and a few white flowers at the top.

But when the genes are knocked out, the plant grows a much thicker stem and mutant flowers with extra organs of all types, especially stamens and carpels.

In effect, this means that the researchers are in control of the genetic mechanism that governs various characteristics of a plant. And since the effect is genetic, the mutated characteristics are passed along to future generations.

Meyerowitz says the discovery could be used to give certain plants of human benefit more favorable traits. For example, wheat might be altered so that the stem would be stouter and more resistant to being blown over.

But many of these effects have been accomplished for centuries with selective breeding, he says.

"The difference between a cherry tomato and a big beefsteak tomato is just like the difference between a normal Arabidopsis plant and those mutant for CLV1 or CLV3," he says. "We're not sure if it's exactly the same gene because we haven't yet looked.

"So there are ways to make fruit bigger, for example, without understanding the process," he says. "But what we're trying to do is understand the process."

Also involved in the research are Jennifer Fletcher, a research fellow in biology at Caltech; Mark Running, a graduate of Caltech who is now at UC Berkeley; Rüdiger Simon of the Institut für Entwicklungsbiologie in Cologne, Germany; and Ulrike Brand, a graduate student in Simon's lab.

Writer: 
Robert Tindol

Caltech Question of the Month: What do the laws of physics, and the Heisenberg uncertainty principle in particular, say about whether free will exists?

Submitted by Robert R. Belliveau.

Answered by John Preskill, professor of theoretical physics, Caltech.

This is a deep question and there is no simple answer. I am not a philosopher; nor can I speak for all physicists. I can only state my personal views.

The question of free will implicitly relates to the issue of consciousness. Free will usually means the ability of conscious beings to influence their own future behavior. Its existence would seem to imply that different physical laws govern conscious systems and inanimate systems. I know of no persuasive evidence to support this viewpoint, and so I am inclined to reject it. It seems likely to me that it is possible in principle to predict the behavior of a person in the same sense that we can predict the behavior of an electron; it is just tremendously more difficult in practice.

That said, I feel that it would be too facile to completely dismiss the concept of free will. As the questioner rightly indicates, the deterministic worldview spawned by Newtonian physics has been overturned by quantum mechanics. Even in the case of a simple electron, I can have "complete" knowledge of the state of the electron, and yet I am still unable to predict with certainty where the electron will be found the next time I record its position. So it is with the universe. Even if I knew "everything" that could possibly be known about the universe a moment after the Big Bang, I could not predict everything about today's universe; the details hinge upon the random outcomes of countless tosses of the quantum dice. And so it is with a person.

But randomness is certainly not the same thing as free will. The illusion of free will (if it is an illusion) is sufficiently pervasive that I cherish my own ability to make decisions, while I certainly would not value my "ability" to make random choices! Free will is more than a limitation on predictability; it is the notion that "effects" can be "caused" by conscious beings.

Some scientists hope that a deeper grasp of the concept of free will might emerge from a more complete understanding of quantum reality. An eloquent appraisal of these issues can be found in the recent book The Fabric of Reality by David Deutsch. It's not an easy book, but then it's not an easy question!

Writer: 
RT

New electron states observed by Caltech physicists

PASADENA—Caltech physicists have succeeded in forcing electrons to flow in an unusual way never previously observed in nature or in the lab.

James Eisenstein, professor of physics, says that he and his collaborators have observed electrons that, when confined to a two-dimensional plane and subjected to an intense magnetic field, can apparently tell the difference between "north-south" and "east-west" directions in their otherwise featureless environment. As such, the electrons are in a state very different from that of conventional isotropic solids, liquids, and gases.

"Electrons do bizarre and wonderful things in a magnetic field," says Eisenstein, explaining that electrons are elementary particles that naturally repel each other unless forced together.

By trapping billions of electrons on a flat surface within a semiconductor crystal wafer—and thus limiting them to two dimensions—Eisenstein's team is able to study what the electrons do at temperatures close to absolute zero and in the presence of large perpendicular magnetic fields.

Research on exotic states of electrons is relatively new, but its theoretical history goes back to the 1930s, when Eugene Wigner speculated that electrons in certain circumstances could actually form a sort of crystallized solid. It turns out that forcing electrons to lie in a two-dimensional plane increases the chances for such exotic configurations.

"They cannot get out of one another's way into the third dimension, and this actually increases the likelihood of unusual 'correlated' phases," Eisenstein says. Adding a magnetic field has a similar effect by forcing the electrons to move in tiny circular orbits rather than running unimpeded across the plane.

One of the best examples of the strange behavior of two-dimensional electron systems is the fractional quantum Hall effect, for which three scientists working in the United States won the Nobel Prize in Physics last year. Electrons in such a system are essentially a liquid, and since the quantum effects of the subatomic world become a factor at such scales, the entire group takes on some unusual electrical properties.

Eisenstein's new findings are very different from the fractional quantum Hall effect. Most importantly, his group has found that a current sent one way through the flat plane of electrons tends to encounter much greater resistance than an equal current sent at a right angle to it. Normally, one would expect the electrons to disperse more or less evenly across the flat plane, which would mean the same resistance for currents flowing in any direction.

Strikingly, this "anisotropy" sets in only when the temperature of the electrons is reduced to within one-tenth of one degree above absolute zero, the lowest temperature a system can attain.

Owing to the laws of quantum mechanics, the circular orbits of the electrons exist only at discrete energies, called Landau levels. For the fractional quantum Hall effect, all of the electrons are in the lowest such level. Eisenstein's new results appear when the higher energy levels are also populated with electrons. While it appears that a minimum of three levels must be occupied, Eisenstein has seen the effects in many higher Landau levels.
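
As a rough illustration of the arithmetic involved (using assumed, representative values for a gallium arsenide sample rather than the experiment's actual parameters), the spacing between Landau levels and the number of occupied levels follow from textbook formulas:

    import math

    # Textbook Landau-level arithmetic with representative values
    # (illustrative only; not the actual parameters of the experiment).
    e     = 1.602e-19             # electron charge, C
    hbar  = 1.055e-34             # reduced Planck constant, J*s
    h     = 2 * math.pi * hbar
    m_eff = 0.067 * 9.109e-31     # effective electron mass in GaAs, kg

    B   = 2.5                     # magnetic field in tesla (assumed)
    n_e = 2.7e15                  # 2D electron density per m^2 (assumed)

    # Cyclotron energy: the spacing between Landau levels, E_n = hbar*omega_c*(n + 1/2)
    omega_c = e * B / m_eff
    print(f"Landau level spacing: {hbar * omega_c / e * 1000:.2f} meV")

    # Filling factor: how many spin-resolved levels the electrons occupy
    # (each level holds eB/h electrons per unit area, per spin)
    nu = n_e * h / (e * B)
    print(f"Filling factor nu = {nu:.1f}")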

"This generic aspect makes the new findings all the more important," comments Eisenstein.

One scheme that might explain the new results is that the electrons are accumulated into long ribbons. Physically, the system would somewhat resemble lines of billiard balls lying in parallel rows on a pool table. If this is what is happening, the Coulomb repulsion of the electrons is overwhelmed within the ribbons so that the electrons can cram more closely together, while in the spaces between the ribbons the number of electrons is reduced.

"There's not a good theoretical understanding of what's going on," Eisenstein says. "Some think such a 'charge-density wave' is at the heart; others think a more appropriate analogy might be the liquid crystal displays in a digital watch."

Another interesting question that could have deep underpinnings is how and why the system "chooses" its particular alignments. The alignment could have to do with the crystal substrate in the wafer, but Eisenstein says this is not clear.

Eisenstein and his collaborators are proceeding with their work, and have recently published results in the January 11 issue of the journal Physical Review Letters.

Central to the work are Mike Lilly, a Caltech postdoctoral scholar, and Ken Cooper, a Caltech graduate student in physics. Loren Pfeiffer and Ken West—both of Bell Laboratories, Lucent Technologies in Murray Hill, New Jersey—contribute the essential high-purity semiconductor wafers used in the experiments.

Writer: 
Robert Tindol

Caltech Question of the Month: If a lightbulb were one light-year away, how many watts would it have to be for us to see it with the naked eye?

Submitted by R. Anderson of Pomona, California, and answered by Dr. George Djorgovski, Professor of Astronomy.

Star brightness is measured on a magnitude scale. The higher the magnitude, the dimmer the object. For example, Jupiter shines at about magnitude -2.5 in the night sky. The dimmest naked-eye object we can see in the night sky (assuming we are looking someplace where it is dark, i.e., not Los Angeles) is 6th magnitude. Therefore, for the light from a lightbulb one light-year away to be 6th magnitude when it reaches Earth, the bulb would have to emit 10^27 watts of power. That is a billion, billion, billion watts.
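
The magnitude scale is logarithmic: a difference of five magnitudes corresponds to a factor of 100 in brightness, so each magnitude is a factor of about 2.512. A minimal sketch of that arithmetic, using the magnitudes quoted above:

    # Pogson's relation: each magnitude step is a factor of 100**(1/5) in flux.
    def flux_ratio(m_bright, m_faint):
        """How many times brighter the first object is than the second."""
        return 10 ** ((m_faint - m_bright) / 2.5)

    # Jupiter (magnitude -2.5) versus the naked-eye limit (magnitude +6):
    print(f"Jupiter is {flux_ratio(-2.5, 6.0):.0f}x brighter "
          "than the faintest star visible to the naked eye")
    # -> about 2500x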

Meanwhile, the faintest objects we can see with the Hubble Space Telescope or the 10-meter Keck Telescopes are a few billion times fainter than what an unaided human eye (with good vision) can see. While even these telescopes would not allow us to see a regular lightbulb placed one light-year away, they could easily detect a lightbulb on the Moon.


SCE Joins Caltech in Seismic Program to Improve Quake Response

SCE Contacts: Steve Conroy/Tom Boyd
(626) 302-2255
World Wide Web Address: http://www.sce.com
Caltech Contact: Max Benavidez
(626) 395-3226
World Wide Web Address: http://www.caltech.edu/~media
mb@caltech.edu

ROSEMEAD, Calif., Jan. 15, 1999—On the eve of the fifth anniversary of the devastating Northridge earthquake, Southern California Edison and the California Institute of Technology today announced the utility's participation in a state-of-the-art seismic measuring network that will expedite power restoration and emergency response after a major temblor in the Southland.

As a participant in the TriNet Project, SCE will use a portion of its system of nearly 900 electrical substations to augment TriNet's growing network. Seismic sensing devices, installed at selected substations, will be linked directly to TriNet through SCE's extensive communications network, which is built to withstand severe earthquakes.

When complete, TriNet will consist of nearly 600 monitoring stations in Southern California with the capability to provide faster information on where the most damaging shaking has occurred when earthquakes strike. SCE will be able to use that information to prioritize the dispatch of repair crews and accelerate service restoration efforts to areas suffering the most damage.

"Following an earthquake, good, accurate information is a precious commodity," said Stephen E. Frank, SCE president and chief operating officer, at a press conference today. "Good information can save time, money, and—most importantly—lives. We're excited about the potential benefits of TriNet, and as the largest electric utility in the region, we feel Edison is in a unique position to add value to the TriNet effort." Within 10 minutes of an event, TriNet will produce preliminary map information. Within 30 minutes, more detailed maps showing shaking intensity will be produced. The "shake maps" will give authorities an accurate indication of where utilities and authorities should concentrate recovery efforts.

Dick Rosenblum, SCE senior vice president for transmission & distribution, said TriNet will help the utility assess problems more quickly at the utility's nearly 900 electrical substations spread over a 50,000-square-mile area.

"By getting useful information in a matter of minutes, we can dispatch crews to where we know the greatest shaking and damage has occurred," said Rosenblum. "We knew fairly quickly where the Northridge earthquake was centered, but it was hours before we knew the degree of damage that—miles away and outside the San Fernando Valley—Santa Monica had experienced."

Paul Jennings, Caltech's acting vice president for business and finance, and a professor of civil engineering and applied mechanics, said, "The TriNet Project is a wonderful example of a public/private partnership, where different organizations come together, leverage their resources, and together create a product no one organization could create alone. Edison's investment will significantly move this project forward and help provide Southern California with a state-of-the-art seismic network."

SCE has already installed TriNet monitoring units at substations in Rosemead, Palmdale, Hesperia, Mira Loma, and White Water. Another 25 substations will have the monitoring equipment installed within the next 18 months.

SCE also announced today it will provide $250,000 over five years for TriNet, with each dollar matched by a $3 contribution from the Federal Emergency Management Agency (FEMA) and the California Office of Emergency Services.

FEMA is funding 75 percent of the nearly $17-million TriNet Project. Caltech's commitment to the effort is being funded by SCE, GTE, Pacific Bell, the Times Mirror Foundation, and others. The U.S. Geological Survey has provided more than $4 million. The California Division of Mines and Geology is another participant.

An Edison International company, Southern California Edison is the nation's second largest investor-owned electric utility, serving more than 11 million people in a 50,000-square-mile area within central, coastal and Southern California.


Caltech Question of the Month: Is January 1, 2000, the first day of the last year of the 20th century, or the first day of the 21st century?

Submitted by Eileen Wise, Pasadena, California, and answered by Dr. Kevin C. Knox, Ahmanson Postdoctoral Instructor in History at Caltech.

According to such august authorities as the U.S. Naval Observatory, the final day of the 20th century is December 31, 2000. Those who argue that January 1, 2001, must be the beginning of the third millennium do so on the grounds that there was no such thing as A.D. 0. The monk Dionysius Exiguus, who devised the Christian calendar in the sixth century A.D. (Anno Domini), went directly from 1 B.C. to A.D. 1. The probable reason that Dionysius did so is that the number zero had yet to be introduced into the Western world from India: at the time, astronomers and the like suffered through calculations using Roman numerals.

For this reason, advocates of "2001" contend that since the calendar began at A.D. 1, and since a millennium is 1,000 years, all millennia begin with a year one.
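
The counting is simple arithmetic: with no year zero, the Nth millennium runs from year 1000(N-1)+1 through year 1000N, as this short sketch makes explicit:

    # With the calendar starting at A.D. 1 (no year zero), millennium N
    # runs from year 1000*(N-1) + 1 through year 1000*N.
    for n in (1, 2, 3):
        print(f"Millennium {n}: A.D. {1000*(n-1) + 1} through A.D. {1000*n}")
    # -> the second millennium ends Dec. 31, 2000; the third begins Jan. 1, 2001.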

Yet this declaration can be challenged. Some maintain that the true millennium has already come to pass, arguing that we now know that early Christian mathematicians miscalculated the birth of Jesus. Since Christ was most likely born around 4 B.C., the second millennium should have ended in 1997.

The decision of when to celebrate the new millennium is perhaps best described as an aesthetic choice. The length of one year, that is, the time it takes the earth to complete its orbit around the sun, can be measured with extreme astronomical precision. But deciding from when to count those years is, ultimately, arbitrary.

It seems most people will celebrate the advent of the new millennium on December 31, 1999. If you insist on adhering to the guidelines of the U.S. Naval Observatory, you will probably be in the minority. However, given the predicted shortage of champagne for the end of this year, if you do wait until 2001 you will probably find it easier to secure sufficient quantities of bubbly to make it a festive affair.

Domesticated wolves may have given humans a leg up in conquering the early world

PASADENA—When early humans first encountered wolves after leaving Africa 140,000 years ago, the two species may have established a partnership that allowed Homo sapiens to eventually dominate the entire world, a Caltech biologist says in a new book.

According to John Allman, Hixon Professor of Psychobiology and professor of biology, recent DNA evidence from both modern dogs and humans suggests that the human departure from Africa occurred at roughly the same time as the domestication of wolves. Though his evidence is circumstantial, Allman writes in his new book Evolving Brains that the early partnership could have allowed Homo sapiens to displace the other competing hominids—the Neanderthals of Europe and Homo erectus of Southeast Asia—and proliferate throughout the habitable areas of the world.

"Several things came together," says Allman, who specializes in evolutionary biology. "Recently, Robert Wayne at UCLA has shown through mitochondrial DNA that dogs are basically domesticated wolves, and that their domestication occurred much earlier than previously thought—as much as 135,000 years ago.

"Other DNA evidence also shows that Homo sapiens first left Africa about 140,000 years ago," Allman continues. "And since there were no wolves in Africa and no modern humans in Eurasia before this time, I conjecture that the two species got together soon afterward and became remarkably successful hunting partners."

Allman notes that much of Europe was populated by the bigger, hardier Neanderthals when modern humans first left East Africa. The ancestors of the Neanderthals also originated in Africa but migrated at a much earlier time, more than a million years ago.

But Homo sapiens and the Neanderthals apparently remained isolated from each other during the next few hundred thousand years, until the former arrived from Africa.

The Neanderthals in the meantime had evolved into hardier creatures to deal with the harsher climate of Europe, but there is no evidence to suggest that they ever domesticated wolves. Nor is there evidence that Neanderthals ever bred with Homo sapiens.

Migrating even earlier from Africa were the hominids known as Homo erectus. These people departed from Africa about 2 million years ago, and like their close relatives the Neanderthals, continued to evolve when they reached their new habitats. But Homo erectus didn't do particularly well outside Africa, and by 140,000 B.C. was confined to Southeast Asia. And the possibility of Homo erectus domesticating wolves is a moot point, for wolves have never inhabited Southeast Asia.

Allman doesn't go so far as to suggest that the Homo sapiens–wolf partnership directly caused the extinction of Neanderthals and Homo erectus, but he nonetheless says that such a hunting collaboration would have made the two highly developed species an unbeatable combination. Thus, it could be that the partnership was a significant factor in making life more difficult for the other hominids, regardless of whether direct conflict occurred.

"Wolves and humans are two of the most geographically widespread and successful of all mammals," Allman says. "And wolves have a lot in common with early humans, especially in their tendency to prey on ungulates—that is, big meaty creatures with hooves—the stuff we dogs and humans still like to eat."

Moreover, wolves and early humans were virtually unique in their tendency to live in extended families, Allman says. In other words, all adult members of the social group participated in caring for offspring.

Even in the modern world, humans and wolves are two of the very few types of mammals that live in extended families in which the impetus exists to look out for the other fellow's welfare. Thus, it was easy for humans and domesticated wolves to accept each other as family/pack members.

As for the partnership itself, Allman says that humans got a good deal in that they were able to contend with the harsh climates of Eurasia after eons of balmy weather in Africa. Being a successful hunter of ungulates meant that humans had access to furs and skins for protection against whatever environments they found in their new habitats. And later, when humans took up agriculture, they again found they had a ready and willing ally to watch over the crops and domesticated livestock.

Allman thinks the DNA evidence for his hypothesis is persuasive, even though the notion of the collaboration could be falsified in several ways. For one, additional work on the DNA of modern dogs might show that the domestication of wolves occurred much earlier or much later than the human migration from Africa into Asia.

But new DNA work could also strengthen the hypothesis if it shows a more detailed timeline for domestication. As for the archaeological evidence, any results showing that Neanderthals indeed domesticated dogs would be troublesome. But no such evidence has been uncovered so far.

On the other hand, Allman thinks the best endorsement of the hypothesis would come from new archaeological work in remote regions such as Siberia. The hypothesis would predict that the human alliance with dogs enabled humans to expand into these inhospitable areas and ultimately invade the New World. If evidence of domesticated wolves and dogs were found in Homo sapiens living sites some 20,000 to 50,000 years old, then the argument would be stronger that humans indeed proliferated throughout the world with the cooperation of wolves.

Allman's book Evolving Brains is being published this week by Scientific American Library/W.H. Freeman. The book will be available in bookstores in time for Christmas.

Writer: 
Robert Tindol

Caltech Question of the Month: When a plane flies from New York to San Francisco, why can't it just idle in midair and wait for the earth to spin San Francisco around underneath it?

Submitted by Norman Arce, San Marino.

Answered by Dr. Andrew Ingersoll, Professor of Planetary Science, Caltech.

We don't feel it, but the planet is rotating eastward, carrying its surface along at a rate of about 1,000 miles per hour. Thus, it might seem that you could rise into the air, stay in one spot, and wait for the West Coast to rotate underneath you in about three hours.
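
Both figures are easy to check: the rotation speed is Earth's circumference divided by one day, and the three hours is the New York-San Francisco longitude gap divided by the rotation rate. The longitudes in this sketch are approximate:

    # Back-of-the-envelope check of the rotation speed and the "three hours".
    circumference_mi = 24901           # Earth's equatorial circumference, miles
    speed_mph = circumference_mi / 24  # surface speed at the equator
    print(f"Equatorial rotation speed: {speed_mph:.0f} mph")   # ~1,038 mph

    # San Francisco lies about 48 degrees of longitude west of New York,
    # so hovering over New York, you would wait this long for SF to arrive:
    lon_gap_deg = 122.4 - 74.0         # approximate longitudes west
    hours = lon_gap_deg / 360 * 24
    print(f"Wait time: {hours:.1f} hours")                     # ~3.2 hours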

But practically speaking, this is what supersonic jet planes already do. If you were hovering in a plane, you would still have to contend with winds hitting you in the face at 1,000 miles per hour, because friction causes the atmosphere to be dragged around by the solid surface of the planet. Bucking that sort of headwind is difficult, which is why you need the Concorde to do it.

But even if you got above the atmosphere, you'd still be carried eastward at 1,000 miles per hour by your own inertia. In effect, this would be just like jumping straight up while inside a moving train. If you've ever tried this, you know that you land in the same spot rather than several feet rearward. Thus, to go westward, you'd still have to fire your rockets to undo the eastward motion.

So there's no easy way to do it. Either way you have to burn a lot of jet fuel or rocket fuel to get there.

Writer: 
RT

New study explains motions of the Emerson fault in the years following the Landers earthquake

PASADENA—For geophysicists, the magnitude-7.3 Landers earthquake of June 28, 1992, has yielded much insight into the basic mechanisms of seismic events. A new study appearing in this week's Science provides a new model to explain why the ground near the fault gradually shifted in the first few years after the main shock. The work could be used in the future for earthquake hazard analysis.

In the Science article, Jishu Deng, a postdoctoral researcher at the California Institute of Technology, and his coauthors attribute the postseismic deformation to viscous flow in the lower crust. Experts have known for some time that such slow motions around faults can occur, and were well aware of the effect near the Emerson fault, on which the Landers earthquake was centered. But no one knew whether the ground was moving in small, quirky steps or slowly flowing like a viscous liquid.

Analyzing existing data from various satellites, Deng argues that viscous flow must be at work, even though the "afterslip model" has for some time been the preferred explanation. Deng believes the "viscoelastic model" is preferable because the satellite data show both a horizontal motion along the Emerson fault over about three or four years and a vertical motion. While the viscoelastic model is not completely new, previous studies have been unable to distinguish between the viscoelastic and afterslip models. The Landers earthquake, however, provides the first opportunity to determine which mechanism is indeed at work.

Specifically, the area just west of the north–south fault has continued to move northward since the initial rupture. On the day of the earthquake, the fault slippage was measured at about five to six meters along the fault line. But GPS satellites show that the displacement has since grown by another 10 centimeters or so.

This continued slippage can be explained by the prevailing theory of postseismic slippage, but an additional result calls for a new theory: according to interferometric synthetic aperture radar data from the ERS-1 satellite, the ground to the west of the fault has also sunk by about 28 millimeters, while ground east of the fault has risen slightly. Because the afterslip model cannot explain this motion, Deng shows that the effect must be the result of viscous flow.

"So we think the fault is not slipping," says Deng, who came to Caltech after earning his doctorate at Columbia University. "It must be in a flow." Deng further says the new information could be used in the future to assess the seismic hazard in specific locales. "Our new calculations will lead to a new generation of stress evolution models and help people understand how stress builds up and releases in seismic areas."

The other authors of the paper are Michael Gurnis and Hiroo Kanamori, both professors of geophysics at Caltech; and Egill Hauksson, senior research associate in geophysics at Caltech.

Writer: 
Robert Tindol
