New electron states observed by Caltech physicists

PASADENA—Caltech physicists have succeeded in forcing electrons to flow in an unusual way never previously observed in nature or in the lab.

According to James Eisenstein, professor of physics, he and his collaborators have observed electrons that, when confined to a two-dimensional plane and subjected to an intense magnetic field, can apparently tell the difference between "north-south" and "east-west" directions in their otherwise featureless environment. As such, the electrons are in a state very different from that of conventional isotropic solids, liquids, and gases.

"Electrons do bizarre and wonderful things in a magnetic field," says Eisenstein, explaining that electrons are elementary particles that naturally repel each other unless forced together.

By trapping billions of electrons on a flat surface within a semiconductor crystal wafer—and thus limiting them to two dimensions—Eisenstein's team is able to study what the electrons do at temperatures close to absolute zero and in the presence of large perpendicular magnetic fields.

Research on exotic states of electrons is relatively new, but its theoretical history goes back to the 1930s, when Eugene Wigner speculated that electrons in certain circumstances could actually form a sort of crystallized solid. It turns out that forcing electrons to lie in a two-dimensional plane increases the chances for such exotic configurations.

"They cannot get out of one another's way into the third dimension, and this actually increases the likelihood of unusual 'correlated' phases," Eisenstein says. Adding a magnetic field has a similar effect by forcing the electrons to move in tiny circular orbits rather than running unimpeded across the plane.

One of the best examples of the strange behavior of two-dimensional electron systems is the fractional quantum Hall effect, for which three scientists won the Nobel Prize in physics last year. Electrons in such a system are essentially a liquid, and since the quantum effects of the subatomic world become a factor at such scales, the entire group takes on some unusual electrical properties.

Eisenstein's new findings are very different from the fractional quantum Hall effect. Most importantly, his group has found that a current sent one way through the flat plane of electrons encounters much greater resistance than an equal current sent in the perpendicular direction. Normally, one would expect the electrons to disperse more or less evenly across the flat plane, which would mean the same resistance for a current no matter which way it flows.

Strikingly, this "anisotropy" sets in only when the temperature of the electrons is reduced to within one-tenth of one degree of absolute zero, the lowest temperature a system can attain.

Owing to the laws of quantum mechanics, the circular orbits of the electrons exist only at discrete energies, called Landau levels. For the fractional quantum Hall effect, all of the electrons are in the lowest such level. Eisenstein's new results appear when the higher energy levels are also populated with electrons. While it appears that a minimum of three levels must be occupied, Eisenstein has seen the effects in many higher Landau levels.
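
For reference, the textbook physics behind this (standard quantum mechanics, not specific to the new experiment) is that electrons of effective mass $m^*$ in a perpendicular magnetic field $B$ can occupy only the quantized orbit energies

\[ E_n = \hbar\,\omega_c \left( n + \tfrac{1}{2} \right), \qquad \omega_c = \frac{eB}{m^*}, \qquad n = 0, 1, 2, \ldots \]

so the field strength sets the spacing of the Landau ladder, and the field and electron density together determine how many rungs are filled.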

"This generic aspect makes the new findings all the more important," comments Eisenstein.

One scheme that might explain the new results is that the electrons are accumulated into long ribbons. Physically, the system would somewhat resemble lines of billiard balls lying in parallel rows on a pool table. If this is what is happening, the Coulomb repulsion of the electrons is overwhelmed within the ribbons so that the electrons can cram more closely together, while in the spaces between the ribbons the number of electrons is reduced.

"There's not a good theoretical understanding of what's going on," Eisenstein says. "Some think such a 'charge-density wave' is at the heart; others think a more appropriate analogy might be the liquid crystal displays in a digital watch."

Another interesting question that could have deep underpinnings is how and why the system "chooses" its particular alignments. The alignment could have to do with the crystal substrate in the wafer, but Eisenstein says this is not clear.

Eisenstein and his collaborators are proceeding with their work, and have recently published results in the January 11 issue of the journal Physical Review Letters.

Heavily involved in the work are Mike Lilly, a Caltech postdoctoral scholar; and Ken Cooper, a Caltech graduate student in physics. Loren Pfeiffer and Ken West—both of Bell Laboratories, Lucent Technologies in Murray Hill, New Jersey—contribute the essential high-purity semiconductor wafers used in the experiments.

Writer: 
Robert Tindol

Caltech Question of the Month: If a lightbulb were one light-year away, how many watts would it have to be for us to see it with the naked eye?

Submitted by R. Anderson of Pomona, California, and answered by Dr. George Djorgovski, Professor of Astronomy

Star brightness is measured on a magnitude scale. The higher the magnitude, the less bright the object is. For example, Jupiter shines at about -2.5 in the night sky. The dimmest naked-eye object that we can see in the night sky (assuming we are looking someplace where it is dark, i.e., not Los Angeles) is 6th magnitude. Therefore, for the light from a lightbulb one light-year away to be 6th magnitude when it reaches Earth, the bulb would have to emit 10^27 watts of power. That is a billion, billion, billion watts.
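
That figure comes from combining the inverse-square law with the logarithmic magnitude scale. As a sketch of the arithmetic, with the Sun (apparent magnitude about $-26.7$, luminosity about $3.8\times10^{26}$ watts) as the reference,

\[ m_{\mathrm{bulb}} - m_{\odot} = -2.5\,\log_{10}\frac{F_{\mathrm{bulb}}}{F_{\odot}}, \qquad F = \frac{L}{4\pi d^{2}}, \]

one sets $m_{\mathrm{bulb}} = 6$ and $d$ to one light-year (about 63,000 times the Earth-Sun distance) and solves for the bulb's power $L$. The exact wattage that emerges depends on assumptions such as what fraction of the bulb's power comes out as visible light, which is why the answer is best quoted to the nearest power of ten.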

Meanwhile, the faintest objects we can see with the Hubble Space Telescope or the 10-meter Keck Telescopes are a few billion times fainter than what an unaided human eye (with good vision) can see. While even these telescopes would not allow us to see a regular lightbulb placed one light-year away, they could easily detect a lightbulb on the Moon.


SCE Joins Caltech in Seismic Program to Improve Quake Response

SCE Contacts: Steve Conroy/Tom Boyd
(626) 302-2255
World Wide Web Address: http://www.sce.com
Caltech Contact: Max Benavidez
(626) 395-3226
World Wide Web Address: http://www.caltech.edu/~media
mb@caltech.edu

ROSEMEAD, Calif., Jan. 15, 1999—On the eve of the fifth anniversary of the devastating Northridge earthquake, Southern California Edison and the California Institute of Technology today announced the utility's participation in a state-of-the-art seismic measuring network that will expedite power restoration and emergency response after a major temblor in the Southland.

As a participant in the TriNet Project, SCE will use a portion of its system of nearly 900 electrical substations to augment TriNet's growing network. Seismic sensing devices, installed at selected substations, will be linked directly to TriNet through SCE's extensive communications network, which is built to withstand severe earthquakes.

When complete, TriNet will consist of nearly 600 monitoring stations in Southern California with the capability to provide faster information on where the most damaging shaking has occurred when earthquakes strike. SCE will be able to use that information to prioritize the dispatch of repair crews and accelerate service restoration efforts to areas suffering the most damage.

"Following an earthquake, good, accurate information is a precious commodity," said Stephen E. Frank, SCE president and chief operating officer, at a press conference today. "Good information can save time, money, and—most importantly—lives. We're excited about the potential benefits of TriNet, and as the largest electric utility in the region, we feel Edison is in a unique position to add value to the TriNet effort." Within 10 minutes of an event, TriNet will produce preliminary map information. Within 30 minutes, more detailed maps showing shaking intensity will be produced. The "shake maps" will give authorities an accurate indication of where utilities and authorities should concentrate recovery efforts.

Dick Rosenblum, SCE senior vice president for transmission & distribution, said TriNet will help the utility assess problems more quickly at the utility's nearly 900 electrical substations spread over a 50,000-square-mile area.

"By getting useful information in a matter of minutes, we can dispatch crews to where we know the greatest shaking and damage has occurred," said Rosenblum. "We knew fairly quickly where the Northridge earthquake was centered, but it was hours before we knew the degree of damage that—miles away and outside the San Fernando Valley—Santa Monica had experienced."

Paul Jennings, Caltech's acting vice president for business and finance, and a professor of civil engineering and applied mechanics, said, "The TriNet Project is a wonderful example of a public/private partnership, where different organizations come together, leverage their resources, and together create a product no one organization could create alone. Edison's investment will significantly move this project forward and help provide Southern California with a state-of-the-art seismic network."

SCE has already installed TriNet monitoring units at substations in Rosemead, Palmdale, Hesperia, Mira Loma, and White Water. Another 25 substations will have the monitoring equipment installed within the next 18 months.

SCE also announced today it will provide $250,000 over five years for TriNet, with each dollar matched by a $3 contribution from the Federal Emergency Management Agency (FEMA) and the California Office of Emergency Services.

FEMA is funding 75 percent of the nearly $17-million TriNet Project. Caltech's commitment to the effort is being funded by SCE, GTE, Pacific Bell, the Times Mirror Foundation, and others. The U.S. Geological Survey has provided more than $4 million. The California Division of Mines and Geology is another participant.

An Edison International company, Southern California Edison is the nation's second largest investor-owned electric utility, serving more than 11 million people in a 50,000-square-mile area within central, coastal and Southern California.


Caltech Question of the Month: Is January 1, 2000, the first day of the last year of the 20th century, or the first day of the 21st century?

Submitted by Eileen Wise, Pasadena, California, and answered by Dr. Kevin C. Knox, Ahmanson Postdoctoral Instructor in History at Caltech.

According to such august authorities as the U.S. Naval Observatory, the final day of the 20th century is December 31, 2000. Those who argue that January 1, 2001, must be the beginning of the third millennium do so on the grounds that there was no such thing as A.D. 0. The monk Dionysius Exiguus, who devised the Christian calendar in the sixth century A.D. (Anno Domini), went directly from 1 B.C. to A.D. 1. The probable reason that Dionysius did so is that the number zero had yet to be introduced into the Western world from India: at the time, astronomers and the like suffered through calculations using Roman numerals.

For this reason, advocates of "2001" contend that since the calendar began at A.D. 1, and since a millennium is 1,000 years, all millennia begin with a year one.

Yet this declaration can be challenged. Some maintain that the true millennium has already come to pass, arguing that we now know that early Christian mathematicians miscalculated the birth of Jesus. Since Christ was most likely born around 4 B.C., the second millennium should have ended in 1997.

The decision of when to celebrate the new millennium is perhaps best described as an aesthetic choice. The length of one year, that is, the time it takes the earth to complete its orbit around the sun, is subject to extremely precise astronomical measurement. But deciding from when to count these years is, ultimately, arbitrary.

It seems most people will celebrate the advent of the new millennium on December 31, 1999. If you insist on adhering to the guidelines of the U.S. Naval Observatory, you will probably be in the minority. However, given the predicted shortage of champagne for the end of this year, if you do wait until 2001 you will probably find it easier to secure sufficient quantities of bubbly to make it a festive affair.

Domesticated wolves may have given humans a leg up in conquering the early world

PASADENA—When early humans first encountered wolves after leaving Africa 140,000 years ago, the two species may have established a partnership that allowed Homo sapiens to eventually dominate the entire world, a Caltech biologist says in a new book.

According to John Allman, Hixon Professor of Psychobiology and professor of biology, recent DNA evidence from both modern dogs and humans suggests that the human departure from Africa occurred at roughly the same time as the domestication of wolves. Though his evidence is circumstantial, Allman writes in his new book Evolving Brains that the early partnership could have allowed Homo sapiens to displace the other competing hominids—the Neanderthals of Europe and Homo erectus of Southeast Asia—and proliferate throughout the habitable areas of the world.

"Several things came together," says Allman, who specializes in evolutionary biology. "Recently, Robert Wayne at UCLA has shown through mitochondrial DNA that dogs are basically domesticated wolves, and that their domestication occurred much earlier than previously thought—as much as 135,000 years ago.

"Other DNA evidence also shows that Homo sapiens first left Africa about 140,000 years ago," Allman continues. "And since there were no wolves in Africa and no modern humans in Eurasia before this time, I conjecture that the two species got together soon afterward and became remarkably successful hunting partners."

Allman notes that much of Europe was populated by the bigger, hardier Neanderthals when modern humans first left East Africa. The ancestors of Neanderthals also originated in Africa but migrated at a much earlier time, more than a million years ago.

But Homo sapiens and Neanderthals apparently remained isolated from each other for the next few hundred thousand years, until the former arrived from Africa.

Neanderthals in the meantime had evolved into hardier creatures to deal with the harsher climate of Europe, but there is no evidence to suggest that they ever domesticated wolves. Nor is there evidence that Neanderthals ever bred with Homo sapiens.

Migrating even earlier from Africa were the hominids known as Homo erectus. These people departed from Africa about 2 million years ago, and like their close relatives the Neanderthals, continued to evolve when they reached their new habitats. But Homo erectus didn't do particularly well outside Africa, and by 140,000 B.C. was confined to Southeast Asia. And the possibility of Homo erectus domesticating wolves is a moot point, for wolves have never inhabited Southeast Asia.

Allman doesn't go so far as to suggest that the Homo sapiens–wolf partnership directly caused the extinction of Neanderthals and Homo erectus, but he nonetheless says that such a hunting collaboration would have made the two highly developed species an unbeatable combination. Thus, it could be that the partnership was a significant factor in making life more difficult for the other hominids, regardless of whether direct conflict occurred.

"Wolves and humans are two of the most geographically widespread and successful of all mammals," Allman says. "And wolves have a lot in common with early humans, especially in their tendency to prey on ungulates—that is, big meaty creatures with hooves—the stuff we dogs and humans still like to eat."

Wolves and early humans were also virtually unique in their tendency to live in extended families, Allman says. In other words, all adult members of the social group participated in caring for offspring.

Even in the modern world, humans and wolves are two of the very few types of mammals that live in extended families in which the impetus exists to look out for the other fellow's welfare. Thus, it was easy for humans and domesticated wolves to accept each other as family/pack members.

As for the partnership itself, Allman says that humans got a good deal in that they were able to contend with the harsh climates of Eurasia after eons of balmy weather in Africa. Being a successful hunter of ungulates meant that humans had access to furs and skins for protection against whatever environments they found in their new habitats. And later, when humans took up agriculture, they again found they had a ready and willing ally to watch over the crops and domesticated livestock.

Allman thinks the DNA evidence for his hypothesis is persuasive, even though the notion of the collaboration could be falsified in several ways. For one, additional work on the DNA of modern dogs might show that the domestication of wolves occurred much earlier or much later than the human migration from Africa into Asia.

But new DNA work could also strengthen the hypothesis if it shows a more detailed timeline for domestication. As for the archaeological evidence, any results showing that Neanderthals indeed domesticated dogs would be troublesome. But no such evidence has been uncovered so far.

On the other hand, Allman thinks the best endorsement of the hypothesis would come from new archaeological work in remote regions such as Siberia. The hypothesis would predict that the human alliance with dogs enabled humans to expand into these inhospitable areas and ultimately invade the New World. If evidence of domesticated wolves and dogs were found in Homo sapiens living sites some 20 to 50 thousand years old, then the argument would be stronger that humans indeed proliferated throughout the world with the cooperation of wolves.

Allman's book Evolving Brains is being published this week by Scientific American Library/W.H. Freeman. The book will be available in bookstores in time for Christmas.

Writer: 
Robert Tindol

Caltech Question of the Month: When a plane flies from New York to San Francisco, why can't it just idle in midair and wait for the earth to spin San Francisco around underneath it?

Submitted by Norman Arce, San Marino.

Answered by Dr. Andrew Ingersoll, Professor of Planetary Science, Caltech.

We don't feel it, but the planet is rotating eastward at a rate of about 1,000 miles per hour. Thus, it might make sense that you could rise in the air, stay in one spot, and wait for the West Coast to rotate underneath you in about three hours.
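
That round number is simply the Earth's circumference divided by the length of a day,

\[ v = \frac{2\pi R_{\oplus}}{24\ \mathrm{h}} \approx \frac{40{,}000\ \mathrm{km}}{24\ \mathrm{h}} \approx 1{,}670\ \mathrm{km/h} \approx 1{,}000\ \mathrm{mph}, \]

at the equator; at the latitude of New York or San Francisco the circle of rotation is smaller by a factor of $\cos(\mathrm{latitude})$, making the local speed closer to 800 miles per hour.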

In a sense, this is what supersonic jet planes already do. Even if you could hover in one spot, you would have to contend with winds hitting you in the face at 1,000 miles per hour, because friction causes the atmosphere to be dragged around with the solid surface of the planet. Bucking that sort of headwind is difficult, which is why you need something like the Concorde to do it.

But even if you got above the atmosphere, you'd still be carried eastward at 1,000 miles per hour by your own inertia. In effect, this would be just like jumping straight up while inside a moving train. If you've ever tried this, you know that you land in the same spot rather than several feet rearward. Thus, to go westward, you'd still have to fire your rockets to undo the eastward motion.

So there's no easy way to do it. Either way you have to burn a lot of jet fuel or rocket fuel to get there.

Writer: 
RT

New study explains motions of the Emerson fault in the years following the Landers earthquake

PASADENA—For geophysicists, the magnitude 7.3 Landers earthquake of June 28, 1992, has yielded much in terms of understanding the basic mechanisms of seismic events. A new study appearing in this week's Science provides a model to explain why the ground near the fault continued to shift gradually for the first few years after the main shock. The work could be used in the future for the analysis of earthquake hazard.

In the Science article, Jishu Deng, a postdoctoral researcher at the California Institute of Technology, and his coauthors attribute the postseismic deformation to a viscous flow in the lower crust. Experts have known for some time that such slow motions around faults can occur, and in fact were quite aware of the effect near the Emerson fault on which the Landers earthquake was centered. But no one knew whether the ground was moving in small, quirky steps or slowly flowing like a viscous liquid.

Analyzing existing data from several satellites, Deng argues that viscous flow must be at work, even though the "afterslip model" has for some time been the preferred explanation. Deng believes the "viscoelastic model" is preferable because the satellite data show both a horizontal motion along the Emerson fault over about three or four years and a vertical motion. While the viscoelastic model is not completely new, previous studies have been unable to distinguish between the viscoelastic and afterslip models. The Landers earthquake, however, provides the first opportunity to determine which mechanism is indeed at work.

Specifically, the area just west of the north–south fault has continued to move northward since the initial rupture. On the day of the earthquake, the slip was measured at about five to six meters along the fault line. But the GPS satellites show that the displacement has since grown by another 10 centimeters or so.

This continued slippage can be explained by the prevailing theory of postseismic slippage, but an additional result calls for a new theory: according to information gained from interferometric synthetic aperture radar aboard the ERS-1 satellite, the ground to the west of the fault has also sunk by about 28 millimeters, while ground east of the fault has risen slightly. And because the afterslip model cannot explain this motion, Deng shows that the effect must be the result of viscous flow.
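
A rough way to see why such flow should play out over years rather than seconds or centuries is the Maxwell relaxation time of a viscoelastic layer. Using illustrative round numbers (assumed here for scale, not the paper's fitted values) of lower-crust viscosity $\eta \sim 10^{18}$ Pa s and shear modulus $\mu \sim 3\times10^{10}$ Pa,

\[ \tau = \frac{\eta}{\mu} \sim \frac{10^{18}\ \mathrm{Pa\,s}}{3\times10^{10}\ \mathrm{Pa}} \sim 3\times10^{7}\ \mathrm{s} \approx 1\ \mathrm{year}, \]

which is the same order as the several-year transient seen in the satellite data.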

"So we think the fault is not slipping," says Deng, who came to Caltech after earning his doctorate at Columbia University. "It must be in a flow." Deng further says the new information could be used in the future to assess the seismic hazard in specific locales. "Our new calculations will lead to a new generation of stress evolution models and help people understand how stress builds up and releases in seismic areas."

The other authors of the paper are Michael Gurnis and Hiroo Kanamori, both professors of geophysics at Caltech; and Egill Hauksson, senior research associate in geophysics at Caltech.

Writer: 
Robert Tindol

Caltech physicists achieve first bona fide quantum teleportation

PASADENA—Physicists at the California Institute of Technology, joined by an international collaboration, have succeeded in the first true teleportation of a quantum state.

In the October 23 issue of the journal Science, Caltech physics professor H. Jeff Kimble and his colleagues write of their success in transporting a quantum state of light from one side of an optical bench to the other without it traversing any physical medium in between.

In this sense, quantum teleportation is similar to the far-fetched "transporter" technology used in the television series Star Trek. In place of the actual propagation of a light beam, teleportation makes use of a delicate quantum mechanical phenomenon known as "quantum entanglement," the quintessential ingredient in the emerging field of quantum information science.

"In our case the distance was only a meter, but the scheme would work just as well over much larger distances," says Professor Samuel Braunstein, a coauthor from the University of Wales in Bangor, United Kingdom, who, with Kimble, conceived the scheme. "Our work is an important step toward the realization of networks for distributing quantum information—a kind of 'quantum Internet.'"

Teleportation of this kind was first proposed theoretically by IBM scientist Charles H. Bennett and colleagues in 1993. The Caltech experiment represents the first time quantum teleportation has actually been performed with a high degree of "fidelity." The fidelity describes how well a receiver, "Bob," can reproduce quantum states from a sender, "Alice."

Although quantum teleportation was recently announced by two independent labs in Europe, neither experiment achieved a fidelity that unambiguously required the use of quantum entanglement between Alice and Bob.

"True quantum teleportation involves an unknown quantum state entering Alice's apparatus and a similar unknown state emerging from Bob's remote station," says Kimble. "Moreover, the similarity of input and output, as quantified by the fidelity, must exceed that which would be possible if Alice and Bob only communicated by classical means—for instance, by normal telephone wiring.

"Although there has been wonderful progress in the field, until now there has not been an actual demonstration of teleportation that meets these criteria."

In the experiment, the Caltech team generated exotic forms of light known as "squeezed vacua," which are split in such a way that Alice and Bob each receive a beam that is the quantum mechanical "twin" of the other. These EPR beams, named after the historic Einstein-Podolsky-Rosen (EPR) paradox of 1935, are among the strangest of the predictions of quantum mechanics. It was their theoretical possibility that led Einstein to doubt that quantum mechanics could be a complete physical theory.

A trademark of quantum mechanics is that the very act of measurement limits the controllability of light in ways not observed in the macroscopic world: even the most delicate measurements can cause uncontrollable disturbances. Nevertheless, in certain circumstances, these restrictions can be exploited to do things that were unimaginable in classical physics.

Here, photons from the EPR beams delivered to Alice and Bob can share information that has no independent existence in either beam alone. Through this "entanglement," the act of measurement in one place can influence the quantum state of light in another.

Once Alice and Bob have received their spatially separate but entangled components of the EPR beams, Alice performs certain joint measurements on the light beam she wishes to teleport together with her half of the EPR "twins." This destroys the input beam, but she then sends her measurement outcomes to Bob via a "classical" communication channel. Bob uses this classical information to transform his component of the EPR beam into an output beam that closely mimics the input to Alice, resurrecting at a distance the original unknown quantum state.
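
For readers who want the bookkeeping of the protocol spelled out, the following is a minimal Monte Carlo sketch, in Python, of continuous-variable teleportation of this general type. It models ideal Gaussian light beams by their quadrature amplitudes; the squeezing strength r, the trial count, and all variable names are illustrative assumptions, not parameters of the actual experiment.

import numpy as np

rng = np.random.default_rng(0)
N = 200_000                # teleportation trials
r = 1.0                    # squeezing strength; r = 0 means no entanglement
v0 = 0.5                   # vacuum quadrature variance in these units (hbar = 1)
x_bar, p_bar = 1.5, -0.7   # quadrature means of Victor's input coherent state

# Two single-mode squeezed vacua combined on a 50:50 beamsplitter form the
# "EPR beams": mode a is squeezed in p, mode b is squeezed in x.
x_a = rng.normal(0.0, np.sqrt(v0 * np.exp(+2 * r)), N)
p_a = rng.normal(0.0, np.sqrt(v0 * np.exp(-2 * r)), N)
x_b = rng.normal(0.0, np.sqrt(v0 * np.exp(-2 * r)), N)
p_b = rng.normal(0.0, np.sqrt(v0 * np.exp(+2 * r)), N)
x_A, p_A = (x_a + x_b) / np.sqrt(2), (p_a + p_b) / np.sqrt(2)  # Alice's half
x_B, p_B = (x_a - x_b) / np.sqrt(2), (p_a - p_b) / np.sqrt(2)  # Bob's half

# Victor's unknown input: a coherent state (fixed mean plus vacuum noise).
x_in = x_bar + rng.normal(0.0, np.sqrt(v0), N)
p_in = p_bar + rng.normal(0.0, np.sqrt(v0), N)

# Alice mixes the input with her EPR half on a beamsplitter and measures two
# commuting combinations; the input beam is destroyed in the process.
u = (x_in - x_A) / np.sqrt(2)
w = (p_in + p_A) / np.sqrt(2)

# Bob displaces his EPR half using Alice's classically transmitted results.
x_out = x_B + np.sqrt(2) * u      # = x_in - (x_A - x_B)
p_out = p_B + np.sqrt(2) * w      # = p_in + (p_A + p_B)

# Each output quadrature carries the input plus excess noise exp(-2r):
# infinite squeezing reproduces the input exactly, while r = 0 adds two
# vacuum units, the hallmark of a classical measure-and-resend strategy.
print("output means:   ", x_out.mean(), p_out.mean())
print("excess variance:", x_out.var() - v0, "expected:", np.exp(-2 * r))
# Coherent-state fidelity at unit gain: F = 1/(1 + exp(-2r)); r = 0 gives
# the classical boundary F = 1/2, exceeded only with real entanglement.
print("fidelity:       ", 1.0 / (1.0 + np.exp(-2 * r)))

Setting r = 0 in the sketch reproduces the classical bound discussed above, while increasing r drives the output toward a perfect copy of the input; that dependence on shared squeezing is the signature of genuine entanglement.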

A unique feature of Kimble's experiment is a third party called "Victor," who "verifies" various aspects of the protocol performed by Alice and Bob. It is Victor who generates and sends an input to Alice for teleportation, and who afterward inspects the output from Bob to judge its fidelity with the original input.

"The situation is akin to having a sort of 'quantum' telephone company managed by Alice and Bob," says Kimble. "Having opened an account with an agreed upon protocol, a customer (here Victor) utilizes the services of Alice and Bob unconditionally for the teleportation of quantum states without revealing these states to the company. Victor can further perform an independent assessment of the 'quality' of the service provided by Alice and Bob."

The experiment by the Kimble group shows that the strange "connections" between entities in the quantum realm can be gainfully employed for tasks that have no counterpart in the classical world known to our senses.

"Taking quantum teleportation from a purely theoretical concept to an actual experiment brings the quantum world a little closer to our everyday lives," says Christopher Fuchs, a Prize Postdoctoral Scholar at Caltech and a coauthor. "Since the earliest days of the theory, physicists have treated the quantum world as a great mystery. Maybe making it part of our everyday business is just what's been needed for making a little sense of it."

This demonstration of teleportation follows other work the Kimble group has done in recent years, including the first results showing that individual photons can strongly interact to form a quantum logic gate. Kimble's work suggests that the quantum nature of light may someday be exploited for building a quantum computer, a machine that would in certain applications have computational power vastly superior to that of present-day "classical" computers.

Writer: 
Robert Tindol

Multilayered silicon could be a breakthrough for electronic technology

PASADENA—Researchers at the California Institute of Technology have found a way to stack silicon layers on chips in a way that could lead to significant new advances in silicon-based electronic devices.

In the October issue of the Journal of Vacuum Science and Technology B, Caltech's Fletcher Jones Professor of Applied Physics Thomas McGill and his colleagues report on their work growing a novel silicon structure through a process known as molecular beam epitaxy.

The process begins with an existing silicon wafer, onto which an insulating layer of cerium dioxide just a few atoms thick is grown. A single crystal of silicon is then grown back on top of the cerium dioxide.

The end result is a three-dimensional device using cerium dioxide as an insulator with crystalline silicon on top. With this new top layer of silicon in place, the wafer is ready for the process to be repeated. In this manner, layer upon layer of devices may be grown, one after another, on the same chip.

"The implications are very significant," says McGill. "For years there have been predictions that progress will eventually stop in silicon electronics because the devices will have been shrunk as much as they can.

"But this new technology could allow you to get the functionality increase by stacking instead of shrinking," he says.

McGill says the group has stacked only a single extra layer of silicon so far. However, the key is the demonstration that the cerium oxide is indeed acting as an insulator, and that the silicon on top is single crystalline and suitable for further growth.

"In principle, you can stack forever," McGill says.

According to Caltech graduate student Joel Jones, another member of the team, the technique is especially interesting because it also allows the fabrication of a group of novel silicon devices.

"We've already fabricated a primitive tunnel switched diode from the multilayered chips," Jones explains. "This is a single device that exhibits memory. At a given voltage, you can have two different stable currents depending on how you've switched the device."

This phenomenon, called negative differential resistance, allows two stable current states of different amperage to exist at the same voltage.
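
To make that concrete, here is a small numerical sketch, in Python, of a generic device with negative differential resistance biased through a series resistor. The current-voltage model and every parameter in it are illustrative assumptions, not values from the Caltech devices; the point is only to show how two stable operating currents can coexist at one supply voltage.

import numpy as np

# Toy current-voltage curve: a tunneling peak plus an ordinary diode term.
# Between the peak and the valley the current falls as the voltage rises,
# which is the negative-differential-resistance region.
def device_current(v, a=5e-3, v0=0.08, b=1e-12, vt=0.026):
    return a * v * np.exp(-v / v0) + b * np.expm1(v / vt)

v = np.linspace(0.0, 0.6, 60001)
i_dev = device_current(v)
slope = np.gradient(i_dev, v)

# Bias the device through a series resistor: the load line I = (Vs - V) / R.
Vs, R = 0.5, 4000.0
i_load = (Vs - v) / R

# Operating points sit where the device curve crosses the load line. A
# crossing is stable when the device slope exceeds the load-line slope -1/R;
# the crossing on the falling (NDR) branch fails that test and is unstable,
# leaving two stable currents at one supply voltage: a one-device memory.
crossings = np.where(np.diff(np.sign(i_dev - i_load)) != 0)[0]
for k in crossings:
    kind = "stable" if slope[k] > -1.0 / R else "unstable"
    print(f"V = {v[k]:.3f} V, I = {i_dev[k] * 1e3:.4f} mA ({kind})")

Run as written, the sketch finds three crossings, of which the first and last are stable; which of the two the device actually sits on depends on its history, which is the switching-with-memory behavior Jones describes.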

Similar effects can be found in other devices enabled by this new technique, including resonant tunneling diodes. These devices can be exploited for novel memory storage, as well as used to enhance the performance of numerous other microelectronic circuits.

"The silicon industry is a $100 billion industry," says McGill. "This could be a major contributor in 10 to 15 years."

In addition to McGill and Jones, the authors of the paper are Edward Timothy Croke, Carol M. Garland, and Ogden Marsh.

Writer: 
Robert Tindol

Galileo data shows Jupiter's lightning associated with low-pressure regions

MADISON, Wisconsin—Images of Jupiter's night side taken by the Galileo spacecraft reveal that the planet's lightning is controlled by the large-scale atmospheric circulation and is associated with low-pressure regions.

The new findings were reported October 13, 1998, by Andrew Ingersoll at the 30th annual meeting of the American Astronomical Society's Division for Planetary Sciences.

"Lightning is an indicator of convection and precipitation," says Ingersoll, a professor of planetary science at the California Institute of Technology and member of the Galileo Imaging Team. "These processes are the main sources of atmospheric energy, both on Earth and on Jupiter."

In a terrestrial hurricane, Ingersoll explains, the low pressure at the center draws air in along the ocean surface, where it picks up moisture. Energy is released when the moisture condenses and falls out as rain.

On Jupiter, energy is transferred from the warm interior of the planet to the visible atmosphere in a similar process. The new findings show that lightning occurs in the low-pressure regions on Jupiter, too.

"On both planets, the air spins counterclockwise around a low in the northern hemisphere and clockwise around a low in the southern hemisphere," Ingersoll says. "The lows are called cyclones and the highs are called anticyclones."

On Jupiter the cyclones are amorphous, turbulent regions that are spread out in the east-west direction. In the Voyager movies they spawn rapidly expanding bright clouds that look like huge thunderstorms. The Galileo lightning data confirm that convection is occurring there.

"We even caught one of these bright clouds on the day side and saw it flashing away on the night side less than two hours later," says Ingersoll.

In contrast, the Jovian anticyclones tend to be long-lived, stable, and oval-shaped. The Great Red Spot is the best example (it is three times the size of Earth and has been around for at least 100 years), but it has many smaller cousins. No lightning was seen coming from the anticyclones.

"That probably means that the anticyclones are not drawing energy from below by convection," says Ingersoll. "They are not acting like Jovian hurricanes."

Instead, the anticyclones maintain themselves by merging with the smaller structures that get spun out of the cyclones. "That's what we see in the Voyager movies, and the Galileo lightning data bear it out. Whether the precipitation is rain or snow is uncertain," says Ingersoll.

"Models of terrestrial lightning suggest that to build up electrical charge, both liquid water and ice have to be present. Rain requires a relatively wet Jupiter, and that's a controversial subject.

"Water is hard to detect from the outside because it is hidden below the ammonia clouds. And the Galileo probe hit a dry spot where we didn't expect much water."

Fortunately, the Galileo imaging system caught glimpses of a cloud so deep it has to be water, according to findings reported at the conference by Dr. Don Banfield of Cornell University, an imaging team affiliate. Banfield showed images of the water cloud near the convective centers in the cyclonic regions.

These results appear in the September issue of Icarus, the International Journal of Solar System Studies.

"We know the water is there, and we know where it's raining," says Ingersoll. "This is a big step toward understanding how Jupiter's weather gets its energy."


Writer: 
Robert Tindol
