Posthumous Paper by Gene Shoemaker Details Evidence of Comet Shower That Pummeled Earth 36 Million Years Ago

PASADENA—Geochemical evidence from a rock quarry in northern Italy indicates that a shower of comets hit Earth about 36 million years ago.

The findings not only account for the huge craters at Popigai in Siberia and at Chesapeake Bay in Maryland, but suggest that the objects that carved them were only a tiny fraction of the comets active during a two-to-three-million-year span in the late Eocene epoch. The work provides indirect evidence that a gravitational perturbation of the Oort comet cloud, beyond the orbit of Pluto, was responsible for sending a wave of comets swarming toward the center of the solar system.

In a paper published today in the journal Science, a group from the California Institute of Technology, the U.S. Geological Survey Flagstaff office, and the Coldigioco Geological Observatory in Italy reports evidence of a very large increase in the amount of extraterrestrial dust hitting Earth in the late Eocene epoch. The authors include the husband-and-wife team of Gene and Carolyn Shoemaker. Gene Shoemaker died in a car crash last year while the research was in progress.

According to lead author Ken Farley, a geochemist at Caltech, Shoemaker's contribution was crucial to the breakthrough.

"Basically, Gene saw my earlier work and recognized it as a new way to test an important question: are large impact craters on Earth produced by collisions with comets or asteroids?" Farley says.

"He suggested we study a quarry near Massignano, Italy, where seafloor deposits record debris related to the large impact events 36 million years ago. He said that if there had been a comet shower, the technique I've been working on might show it clearly in these sediments."

Carolyn Shoemaker said that she and her husband went to Italy last year to perform field work in support of the paper.

"Gene was pretty excited about the work Ken was doing," she said. "He was glad Ken was taking it on. It's exciting work, and it's a rather new type of work."

The work hinged on detecting the helium isotope known as 3He, which is rare on Earth but common in extraterrestrial materials. 3He is very abundant in the sun, and some of it streams outward through the solar system as the solar wind. This helium is readily picked up and carried along by extraterrestrial objects such as asteroids and comets and their associated dust particles.

Thus, the arrival of extraterrestrial matter at Earth's surface can be detected by measuring its associated 3He. Large objects such as asteroids and comets, however, leave little trace this way. Because these heavy, solid bodies plunge into the atmosphere at high velocity, they melt or vaporize, giving their helium up to the atmosphere. That 3He never settles below very high altitudes, and soon escapes back into space.

But tiny particles entering the atmosphere are another story. They pass through the atmosphere at low temperatures, and so retain their helium. These particles settle onto the seafloor, and seafloor sediments thus provide an archive of extraterrestrial dust going back hundreds of millions of years.

Elevated levels of 3He would suggest an unusually dusty inner solar system, possibly because of enhanced abundances of active comets. Such an elevated abundance of comets might arise when a passing star or other gravity anomaly kicks a huge number of comets from the Oort cloud into elliptical, sun-approaching orbits.

When Farley took Shoemaker's suggestion and traveled to the Italian quarry, he found that there was indeed an elevated flux of 3He-laced material in a sedimentary layer some 50 feet beneath the surface. Because this region of Italy was submerged until about 10 million years ago, microscopic debris from the comet impacts had accumulated on the ocean bed, where the remains of dying organisms steadily buried and preserved it over the eons.

The depth of the sedimentary layer suggested to the researchers that the 3He had been deposited about 36 million years ago. This corresponds to the dating of the craters at Popigai and Chesapeake Bay.

More precisely, the 3He measurements show enhanced solar system dustiness associated with the impacts 36 million years ago, but with the dustiness beginning 0.5 million years before the impacts and continuing for about 1.5 million years after. The conclusion is that there were a large number of Earth-crossing comets and much dust from their tails for a period of about 2.5 million years.

In addition to Gene and Carolyn Shoemaker and Ken Farley, the paper was coauthored by Alessandro Montanari, who holds joint appointments at the Coldigioco Geological Observatory in Apiro, Italy, and the School of Mines in Paris.

Robert Tindol

Gamma-ray Burst Found To Be Most Energetic Event in Universe

PASADENA—A team of astronomers from the California Institute of Technology announced today that a recently detected cosmic gamma-ray burst was as bright as the rest of the universe, releasing a hundred times more energy than previously theorized.

The team has measured the distance to a faint galaxy from which the burst, designated GRB 971214, originated. It is about 12 billion light-years from Earth (one light-year is approximately 5.9 trillion miles). Combined with the observed brightness of the burst, this large distance implies an enormous energy release. The team's findings appear in the May 7 issue of the scientific journal Nature.

"The energy released by this burst in its first few seconds staggers the imagination," said Caltech professor Shrinivas Kulkarni, one of the two principal investigators on the team. The burst appears to have released several hundred times more energy than an exploding star, called a supernova, until now the most energetic known phenomenon in the universe.

"For about one or two seconds, this burst was as luminous as all the rest of the entire universe," said Caltech professor George Djorgovski, the other principal investigator on the team.

Finding such a large energy release over such a brief period of time is unprecedented in astronomy, except for the Big Bang itself.

"In a region about a hundred miles across, the burst created conditions like those in the early universe, about one millisecond [1/1,000 of a second] after the Big Bang," said Djorgovski.

Gamma-ray bursts have long been hard to study because of the difficulty of pinpointing their direction in the sky: unlike visible light, gamma rays are exceedingly difficult to observe with a telescope, and the bursts' short duration exacerbates the problem. The Italian/Dutch satellite BeppoSAX, launched in 1996, had the ability to localize the bursts on the celestial sphere with sufficient precision to permit follow-up observations with the world's most powerful ground-based telescopes.


This breakthrough led to the discovery of long-lived "afterglows" of bursts in X-rays, visible and infrared light, and radio waves. While gamma-ray bursts last only a few seconds, their afterglows can be studied for several months. This, in turn, led to the discovery that the bursts do not originate within our own galaxy, the Milky Way, but rather are associated with high-redshift, extremely distant galaxies in the universe.

The gamma-ray burst was detected on December 14, 1997, by the BeppoSAX and CGRO satellites. BeppoSAX and NASA's Rossi X-ray Timing Explorer spacecraft detected an X-ray afterglow. BeppoSAX's precise localization led to the detection of a visible-light afterglow, found by a team from Columbia University and Dartmouth College, including Professors Jules Halpern, David Helfand, John Thorstensen, and their collaborators, using a 2.4-meter telescope at Kitt Peak, Ariz., but no distance could be measured from these observations.

As the visible light from the burst afterglow faded, the Caltech team detected an extremely faint galaxy at its location, using one of the world's largest telescopes, the 10-meter Keck II telescope of the W. M. Keck Observatory on Mauna Kea, Hawaii. The galaxy is about as faint as an ordinary 100-watt light bulb would appear from a distance of a million miles.

Subsequent images taken with the Hubble Space Telescope confirmed the association of the burst afterglow with this faint galaxy.

The Caltech team succeeded in measuring the distance to this galaxy, using the light-gathering power of the Keck II telescope. The galaxy is at a redshift of z=3.4, or about 12 billion light-years distant (assuming the universe to be about 14 billion years old).
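The article's parenthetical conversion from redshift to distance can be roughly reproduced with a back-of-the-envelope calculation. The sketch below uses a simple matter-dominated (Einstein-de Sitter) universe, in which light emitted at redshift z left its source at time t0/(1+z)^1.5; the model choice and the 14-billion-year age are assumptions for illustration, not details taken from the research paper.

```python
# Rough light-travel-time estimate for GRB 971214 (z = 3.4), assuming a
# simple matter-dominated (Einstein-de Sitter) universe, where the scale
# factor grows as t^(2/3) and so emission time t_emit = t0 / (1+z)**1.5.

def light_travel_time_gyr(z, t0_gyr=14.0):
    """Time the light has traveled, in billions of years (EdS approximation)."""
    t_emit = t0_gyr / (1.0 + z) ** 1.5   # when the light was emitted
    return t0_gyr - t_emit               # how long it has been traveling

print(round(light_travel_time_gyr(3.4), 1))  # ≈ 12.5, i.e., ~12 billion light-years
```

The result, about 12.5 billion years of light travel, matches the "about 12 billion light-years" quoted in the article under its stated assumption of a 14-billion-year-old universe.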

From the distance and the observed brightness of the burst, astronomers derived the amount of energy released in the flash. Although the burst only lasted a few seconds, the energy released was hundreds of times larger than the energy given out in supernova explosions, and it is about equal to the amount of energy radiated by our entire galaxy over a period of a couple of centuries.
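The two comparisons above (hundreds of supernovae's worth of energy, and a couple of centuries of our galaxy's radiated output) can be checked for mutual consistency with round numbers. The values below for the Milky Way's luminosity and a supernova's total energy are standard textbook figures assumed for illustration; they are not taken from the paper.

```python
# Order-of-magnitude cross-check of the article's two energy comparisons.
# Assumed round numbers: Milky Way luminosity ~1e10 solar luminosities;
# a supernova releases ~1e51 erg in total.

L_SUN_ERG_S = 3.8e33                  # solar luminosity, erg/s
MILKY_WAY_L = 1e10 * L_SUN_ERG_S      # galaxy luminosity, erg/s
SECONDS_PER_YEAR = 3.15e7
SUPERNOVA_ERG = 1e51                  # typical supernova energy, erg

# Energy the galaxy radiates over two centuries:
galaxy_two_centuries = MILKY_WAY_L * 200 * SECONDS_PER_YEAR

print(f"{galaxy_two_centuries:.1e} erg")        # roughly 2e53 erg
print(galaxy_two_centuries / SUPERNOVA_ERG)     # ~240 supernovae's worth
```

The two comparisons agree: a couple of centuries of the galaxy's output is indeed a few hundred supernovae's worth of energy, a few times 10^53 erg.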

This is only the energy seen in the gamma rays. It is possible that other forms of radiation, such as neutrinos or gravitational waves, which are extremely difficult to detect, carried a hundred times more energy still.

While the origin of the bursts remains a mystery, what happens to the burst's glowing remnant appears to be reasonably well understood, within the so-called cosmic-fireball model. The observations of the burst afterglow by the Caltech team helped determine in some detail its physical parameters.

"It is gratifying to see that we do have some theoretical understanding of this remarkable phenomenon," said Kulkarni.

In addition to Professors Kulkarni and Djorgovski, the team includes Dr. Dale Frail from the National Radio Astronomy Observatory in Socorro, New Mexico; Drs. A. N. Ramaprakash, Tom Kundic, Stephen Odewahn, and Lori Lubin from Caltech; Dr. Mark Dickinson from the Johns Hopkins University, in Baltimore, Maryland; Dr. Robert Goodrich from the W. M. Keck Observatory in Hawaii; graduate students Joshua Bloom and Kurt Adelberger from Caltech; and many others.

Full scale images of the GRB 971214 field are available.

Robert Tindol

Geophysicists model the Cretaceous motions of Australia

PASADENA--The theory of plate tectonics says that Earth's crust has moved horizontally by thousands of miles over millions of years. For visual evidence, one need look no further than a map showing how nicely South America and Africa fit together.

But plate tectonics also literally has its ups and downs. In addition to the horizontal motions that long ago tore South America and Africa asunder, plate tectonics can also cause entire land masses to rise and fall steadily by thousands of feet. These vertical motions are driven by the same internal planetary heat engine that makes the plates slide around horizontally.

One of the best examples of this vertical motion is Australia, and researchers have now completed a three-dimensional model of Earth's internal heat engine that provides the best-ever description of the process. The research is published in the current issue of Science.

According to Caltech geophysicist Michael Gurnis, the lead author of the paper, the research is aimed at improving the understanding of how heat convection in the mantle relates to the motions of the crustal plates. More specifically, the work answers nagging questions about oddities in the Australian plate.

"Normally, the ridges you see in the ocean floor are associated with upwellings of hot material from the mantle," Gurnis says. But in the ocean between Australia and Antarctica, researchers have long noted a downwelling at this mid-ocean ridge.

According to Gurnis, "We don't know why there's a downwelling, but it's at least 20 million years old and probably a few hundred million years old."

Also, Gurnis explains, there is longstanding evidence that Australia's seacoast significantly retreated during the Cretaceous period (about 60 to 125 million years ago). This was caused by the aforementioned vertical tectonic motions, which bowed the entire continent upward.

The result was a continent with terrain that at times was well above sea level—and thus dry—but at other times relatively low and thus inundated with water.

For example, the maximum flooding of Australia occurred about 120 million years ago and covered more than half the present land mass with water. But what has long perplexed geologists was that 70 million years ago, while nearly half the surface area of other continents (including the Americas, Europe, and Russia) was covered by shallow inland seas, Australia was high and dry!

The achievement of Gurnis and his colleagues is to model these risings and fallings of continents very precisely, so that a vertical level can be "predicted" for a certain time period.

Also, their dynamic models show that the entire effect was caused by a plate that subducted (or passed under) Australia, later stagnated in the mantle hundreds of miles below Earth's surface, and is now being drawn back up by the Southeast Indian Ridge.

"This is one of the best examples of this process, which is called 'continental epeirogeny,'" Gurnis says. "As such, it is an ideal place for tying vertical motion to how Earth's heat engine works."

The dynamic models published by Gurnis and his colleagues predict a downwelling between Australia and Antarctica in precisely the position it is observed. "The correspondence between sea-floor topography, chemistry, and crustal thickness is impressive," says Gurnis.

The work represents a breakthrough because of the ability to predict the timing and geography of geologic events separated widely in space and time. Gurnis suggests that this work opens up a huge new frontier in which the motions of plates can be predicted from computer models of the Earth's internal heat engine, in much the same way that scientists use sophisticated global circulation models to study climate change.

Says Gurnis, "Such models can be tested against the vast library of Earth's history locked in the geologic record within continents and on the sea floor."

The other authors of the paper are R. Dietmar Mueller of the University of Sydney in Australia and Louis Moresi of the Commonwealth Scientific and Industrial Research Organization in Perth, Australia.

Robert Tindol

Caltech Question of the Month: Is it true that photosynthesis occurs in the ocean and supplies much of the oxygen we breathe on Earth?

Submitted by Hope McCallister, West Covina, California.

Answered by Jenn Fletcher, research fellow in biology, Caltech.

Photosynthesis occurs in the ocean in tiny, single-celled plants called phytoplankton. Phytoplankton lie at the bottom of the oceanic food chain. They are eaten by tiny marine animals such as krill, a shrimplike organism, which are in turn eaten by other sea animals. There are two main types of phytoplankton, diatoms and dinoflagellates, which lend a green, yellow, or even reddish tint to the water. There are literally billions of phytoplankton living in the oceans—one milliliter of seawater contains hundreds of thousands or even millions of phytoplankton. A single phytoplankton is so small that one hundred of them could be lined up end to end across a single blade of grass.

Although the ocean is, on average, thousands of meters deep, almost all phytoplankton live in the top 10 meters, floating in the sunlit waters near the surface like a thin skin spread over the seas. Fueled by the energy of sunlight, phytoplankton use their chlorophyll to convert carbon dioxide and water into a simple, sugary food on which they survive. Like other plants, phytoplankton give off oxygen as a byproduct of photosynthesis. It has been estimated that phytoplankton drifting in the ocean currents indeed account for much of the world's oxygen.


Neuroscientists locate area of brain responsible for 3-D vision

PASADENA--Researchers have found the brain circuitry that allows us to see the world in three dimensions even when one eye is closed.

In the current issue of the journal Nature, a team of neuroscientists at the California Institute of Technology reports that the middle temporal area (MT) of the brain renders certain visual motion cues into 3-D perceptions. Area MT is a small cortical area in each cerebral hemisphere located approximately an inch or two above the ears in both humans and non-human primates.

"We see the world in three dimensions even though the image in our retina is flat and in two dimensions," says Richard Andersen, who is the James G. Boswell Professor of Neuroscience at Caltech and principal investigator of the study, which was also conducted with postdoctoral fellow David Bradley and graduate student Grace Chang.

"So to derive the third dimension of depth, our nervous system has to process certain visual motion cues."

Andersen says that many people may assume that 3-D vision is explained solely by our having two eyes that provide a stereoscopic view of the world. But stereoscopic vision is fairly new in evolution, he says, while the depth-from-motion process is much more fundamental and primitive.

"In certain contrived situations, you can actually be better off by closing one eye," he says. For example, in a video animation his team created for the research project, an image of a cylinder is constructed entirely with points of light on a black field. When the cylinder is frozen, the viewer normally sees only a rectangular flat plane of light dots. But when the image is rotated, the viewer perceives a three-dimensional object.

"In this case, your stereoscopic vision may tell you the image is flat," he explains. "But area MT still overrides what you see with two eyes and gives you depth."

What's actually happening in the brain at such a time is a processing of the motions the eye perceives on the screen. While the spinning cylinder appears to have three dimensions, it actually comprises a series of dots that move horizontally across the screen at varying speeds. The dots near the edge of the cylinder image move more slowly across the screen, while the dots at the center move more quickly.

The brain interprets these varying speeds as natural motions in the world. The right and left edges of a cylinder naturally appear to move more slowly than the portion directly in front, because at the edges the surface is moving mostly toward or away from the viewer. And although the display contains no stereoscopic cues, the brain can still reconstruct the perception of depth from the motions of the dots alone.
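The geometry behind this speed gradient is easy to sketch. In the toy model below (an illustration of the projection, not the actual Caltech stimulus), a dot at angle theta on a spinning cylinder of radius R projects to screen position x = R*sin(theta), so its horizontal screen speed is R*omega*cos(theta): maximal at the center of the image and vanishing at the edges.

```python
import math

# Projected speeds of dots on a rotating cylinder, as seen on a flat screen.
# A dot at angle theta projects to x = R*sin(theta); differentiating with
# theta = omega*t gives a horizontal screen speed of R*omega*cos(theta).

R, omega = 1.0, 1.0          # cylinder radius and spin rate (arbitrary units)
speeds = []
for theta_deg in (0, 45, 85):
    x = R * math.sin(math.radians(theta_deg))              # screen position
    speed = R * omega * math.cos(math.radians(theta_deg))  # screen speed
    speeds.append(speed)
    print(f"screen position {x:+.2f}: speed {speed:.2f}")
```

A dot at the center of the image (theta = 0) moves at full speed, while a dot near the edge (theta near 90 degrees) barely moves, which is exactly the gradient the viewer's brain reads as depth.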

An especially important aspect of the study is the fact that viewers have a bias as to which direction they perceive the image to rotate, which changes spontaneously every few seconds. Because the cylinder is merely a group of dots moving at varying speeds, the image can appear to be rotating either clockwise or counterclockwise (see QuickTime movies below). Both human viewers and rhesus monkeys tend to perceive the cylinder as moving left to right (or counterclockwise), and then with time see it reverse.

"The beauty of this illusion is that the stimulus is always the same, but at different instances in time the perception is completely different," Andersen says.

"If the MT neurons were only coding the direction of motion of the dots in two dimensions, the cells would not change, since the physical stimulus never actually changes," he adds.

"So what we're actually watching is brain activity changing when an interpretation of the three-dimensional world changes," Andersen says.

Andersen says the research is aimed primarily at a fundamental scientific understanding of the biology of perception. However, the research could also eventually impact the treatment of human patients with vision deficits. In the very far future, the research could also perhaps be exploited for a technology leading to artificial vision.


smaller QuickTime movie (2.3MB)


larger QuickTime movie (5.9MB)

Robert Tindol

Geologists find more evidence for an active fault beneath downtown and east Los Angeles

LONG BEACH--Geologists report new evidence for a fault beneath Los Angeles that could cause damaging earthquakes in the future.

According to Michael Oskin, a graduate student at the California Institute of Technology (Caltech), the new study concerns an 11-mile-long, previously known geologic fold that runs through the hills north and east of Downtown Los Angeles. This fold provides indirect evidence for an underlying fault.

"Our evidence from the surface is that the fold is still growing," says Oskin. "This indicates that the fault that lies beneath it must also be active."

The fold, first associated with earthquakes at the time of the Whittier Narrows Earthquake, in 1987, is formally known as the Elysian Park Anticlinorium and runs northwest-southeast from Hollywood to Whittier Narrows. Three smaller "wrinkles" formed upon the southwest-facing flank of this fold have been investigated in detail by the Caltech scientists.

Their studies of sediment deposited by the Los Angeles River and its tributaries indicate that these small folds have been active during the past 60,000 years. During that time, the area has been contracting north-south at a rate of at least a half-millimeter per year.

"Our evidence that this structure is active does not increase the overall hazard in the metropolitan region," says coauthor Kerry Sieh, a professor of geology at Caltech. "Rather, it allows us to be more specific about how, where, and how fast deformation is occurring in the area."

The length of the surface features suggests that the underlying fault is about 11 miles in length and may extend 10 or so miles into the earth. Such a fault, if it ruptured all at once, could produce a 6.5- to 6.8-magnitude earthquake.
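The quoted magnitude range can be roughly reproduced from these fault dimensions with a standard empirical scaling relation. The particular regression used below, Wells and Coppersmith's (1994) fit of magnitude to rupture area, is a common choice and an assumption here, not a method the article attributes to the researchers.

```python
import math

# Rough check of the quoted magnitude using the empirical rupture-area
# scaling of Wells & Coppersmith (1994): M = 4.07 + 0.98 * log10(A),
# with A in square kilometers. Fault dimensions are the article's
# figures converted from miles.

KM_PER_MILE = 1.609
length_km = 11 * KM_PER_MILE   # ~17.7 km along strike
depth_km = 10 * KM_PER_MILE    # ~16.1 km down dip

area = length_km * depth_km                 # rupture area, km^2
magnitude = 4.07 + 0.98 * math.log10(area)
print(round(magnitude, 1))  # ≈ 6.5, at the low end of the quoted 6.5-6.8 range
```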

The rate of deformation suggests that such events might occur, on average, about once every one to three thousand years.

Also contributing data and resources to the study were the Southern California Earthquake Center, Metropolitan Transit Authority (MTA), Engineering Management Consultant, and Earth Technology Corporation.

Oskin and Sieh reported on their work at the Geological Society of America meeting in Long Beach April 7, 1998.

Robert Tindol

Yucca Mountain Is Possibly More Seismically Active Than Once Believed, Geologists Discover

PASADENA—Recent geodetic measurements using Global Positioning System (GPS) satellites show that the Yucca Mountain area in southern Nevada is straining roughly 10 to 100 times faster than expected on the basis of the geologic history of the area. And for the moment at least, geologists are at a loss to explain the anomaly.

In the March 28 issue of the journal Science, Brian Wernicke of the California Institute of Technology (Caltech) and his colleagues at the Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, report on Global Positioning System surveys they conducted from 1991 to 1997. Those surveys show that the Yucca Mountain area is stretching apart at about one millimeter per year east-southeastward.

"The question is, why are the predicted geological rates of stretching so much lower than what we are measuring?" asks Wernicke. "That's something we need to think through and understand."

The answer is likely to be of interest to quite a few people, because Yucca Mountain has been proposed as a site for the permanent disposal of high-level radioactive waste. Experts believe that the waste-disposal site can accommodate a certain amount of seismic activity, but they nonetheless would like for any site to have a certain amount of stability over the next 10,000 to 100,000 years.

Yucca Mountain was already known to have both seismic and volcanic activity, Wernicke says. An example of the former is the 5.4-magnitude "Little Skull Mountain" earthquake that occurred in 1992. And an example of the latter is the 80,000-year-old volcano to the south of the mountain. The volcano is inactive, but still must be studied according to Department of Energy regulations.

The problem the new study poses is that the strain is building up in the crust at a rate about one-fourth that of the most rapidly straining areas of the earth's crust, such as near the San Andreas fault, Wernicke says. But there could be other factors at work.

"There are three possibilities that we outline in the paper as to why the satellite data doesn't agree with the average predicted by the geological record," he says. "Either the average is wrong, or we are wrong, or there's some kind of pulse of activity going on and we just happened to take our data during the pulse."

The latter scenario, Wernicke believes, could turn out to be the case. But if Yucca Mountain is really as seismically active as the current data indicate at face value, the likelihood of magmatic and tectonic events could be 10 times higher than once believed.


Robert Tindol

Physicists create first nanometer-scale mechanical charge detector

PASADENA—Wristwatch cellular phones and space probes the size of baseballs would certainly have some eager customers, but both are still the stuff of science fiction.

Nonetheless, physicists are making strides these days in the sort of miniaturization that could someday make tiny electromechanical devices a reality. One such milestone, the first nanometer-scale mechanical charge detector, is reported in the current issue of Nature.

According to Michael Roukes, professor of physics at Caltech and coinventor of the device, the new electrometer is among the most sensitive charge detectors in existence, and definitely the first based upon nanomechanical principles.

"One compelling reason for doing this sort of thing is to explore entirely new avenues for making small, ultralow power electronic devices," says Roukes.

"Making new types of electronic devices that involve moving elements, which we call nanoelectromechanical systems, will open up a huge variety of new technological applications in areas such as telecommunications, computation, magnetic resonance imaging, and space exploration. And the physics is exciting, besides."

The device fabricated at Caltech by Roukes and his former postdoctoral collaborator, Andrew Cleland (now an assistant professor at UC Santa Barbara), is a good example of the type of advances in solid-state devices loosely gathered these days under the rubric "nanotechnology." Roukes says he generally avoids using the term. "Rather, this is the kind of science that is building the foundation for real nanotechnology, not the stuff of fiction. Right now Mother Nature is really the only true nanotechnologist."

A nanometer is one-billionth of a meter, which is about a hundred-thousandth the width of a human hair. A few atoms stacked side-by-side span about a nanometer.

To give an idea of the scale, Roukes points out that the devices are far smaller than cellular organisms; a clear picture of the device's inner workings can only be taken with an electron microscope.

The scale is especially noteworthy when one considers that the electrometer is actually a mechanical device, in the same manner as an old-fashioned clock. In other words, there are moving parts at its heart. In the Caltech devices, movement is induced by tiny wires that exert forces on the nanomechanical elements when a minute external electrical current is applied to them.

"The simplest kinds of mechanical structures are resonators, for example, cantilevers—in other words, a structure like a diving board—or thin clamped beams, something like a thick guitar string attached at both ends," Roukes explains. "They really are mechanical structures—you 'pluck' them to get them to vibrate."

"What's fascinating is that, if you can get these things small enough, they'll vibrate billions of times per second—which gives them the same frequency as the microwaves used in telecommunications," he says. "That's because their mass is very small, which means there's less inertia for internal forces to overcome."

There is a second important aspect to nanomechanical systems, Roukes adds. "Because the distances involved are very small, the amplitudes of their vibrations are very small. For this reason, the amount of energy you would have to put into such devices to get them going is extremely minute.

"This means that for certain critical applications—like small communicators and miniaturized satellites—you would not have to carry along nearly as much energy to run the device."

The latter would be a boon in any circumstance where carrying along power is difficult. Transistors in the best receiving devices today run on a few thousandths of a watt; with nanotechnology, they could run on a few billionths of a watt, or less. Thus, planetary space probes (which rely heavily on such devices) could be much smaller, since they could get by with a much smaller energy source.

At the center of the Caltech nanoelectromechanical charge detection device are small rods that vibrate something like a nanoscale tuning fork. In their ultimate incarnation, which Roukes believes his lab can achieve in the next few years, these rods will be about 100 nanometers long, 10 nanometers wide, and 10 nanometers thick.

Roukes indicates that a silicon beam of such small dimensions would vibrate at about 7 gigahertz (or 7 billion times per second) if it is clamped down at both ends. When one considers that a top-of-the-line personal computer these days has a clock speed about twenty times slower, the advantages become apparent.
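The roughly 7-gigahertz figure follows from standard beam mechanics. The sketch below uses the textbook formula for the fundamental flexural resonance of a doubly clamped beam of rectangular cross-section; the silicon material constants are assumed values (Young's modulus depends on crystal orientation), since the article does not give Roukes's exact parameters.

```python
import math

# Back-of-envelope check of the ~7 GHz figure, using the standard result
# for the fundamental flexural mode of a doubly clamped rectangular beam:
#   f ≈ 1.03 * (t / L**2) * sqrt(E / rho)
# where t is thickness, L is length, E is Young's modulus, rho is density.

E = 130e9        # Young's modulus of silicon along <100>, Pa (assumed)
rho = 2330.0     # density of silicon, kg/m^3
L = 100e-9       # beam length, m  (the article's 100 nm)
t = 10e-9        # beam thickness, m (the article's 10 nm)

f = 1.03 * (t / L**2) * math.sqrt(E / rho)
print(f"{f/1e9:.1f} GHz")  # ≈ 7.7 GHz, consistent with the quoted ~7 GHz
```

Note the scaling in the formula: shrinking the beam raises the frequency as t/L², which is why nanometer-scale structures reach microwave frequencies.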

But it's not necessarily the replacement of conventional computer components that Roukes is after. It turns out that the small resonators his group is currently able to manufacture on campus—if cooled to temperatures a few tenths of a degree above absolute zero—sit right at the border where the quantum effects governing individual atoms and particles take over.

Working with these quantum effects is a daunting technological challenge, but success could lead to devices such as quantum computers.

"There is a natural dividing line that depends on the temperature and the frequency. Basically, if you can get the temperature low enough and the frequency high enough, then you can operate at the quantum level.

"We could do this today," Roukes says. "In my laboratories we can get to temperatures a few thousandths of a degree above absolute zero. We also have the sizes small enough to give us sufficiently high frequencies.

"But what we don't yet know how to do is to probe these structures optimally."

In fact, one of the main themes of work in Roukes's group on nanoscale electromechanical devices is pretty much "how to talk to the devices and how to listen to them," he says. To measure a system is to probe it somehow, and to probe it is to interact with it.

The problem is that interacting with the system inevitably alters its properties. In the worst case, which is all too easy to bring about, the probe could heat the device enough to raise its energy above the point at which it ceases to function as a quantum-limited mechanical device.

"But there are lots of different physical processes on which we can base signal transducers. We are looking for the right approach that will allow us to listen to and hear from these devices at the scale of the quantum limit," he says.

"There's lots of interesting physics, and practical applications that we are learning about in the process."

As far as the device reported in Nature is concerned, Roukes says that the scales involved set a milestone—that of submicron mechanical structures—that is encouraging for scientists and technologists in the field. In addition to possibilities for telecommunications, techniques on which the experimental prototype is based should also lead to significant improvements in magnetic resonance detection.

These, in turn, could lead to imaging with a thousand times better resolution than that currently available.

Roukes's group, in close collaboration with P. Chris Hammel's group at Los Alamos National Laboratory, is already hard at work on these possibilities.

Robert Tindol

Mars Global Surveyor already bringing in scientific payoff

PASADENA—Despite a 12-month delay in aerobraking into a circular orbit, the Mars Global Surveyor is already returning a wealth of data about the atmosphere and surface of the Red Planet.

According to mission scientist Arden Albee of the California Institute of Technology, all scientific instruments on the Mars probe are fully functioning and providing good data. Early results from data collected during the 18 elliptical orbits in October and November are being reported in this week's issue of Science.

"For the first time, a spacecraft has captured the start of a major dust storm on Mars and has followed it through its development and demise," Albee says. "Also, we've received a number of narrow-angle high-resolution images that are enough to put any planetary geologist into a state of ecstasy."

These accomplishments are especially noteworthy when considering that the probe developed a glitch when it first began tightening up its orbit last September. For various reasons having to do with design and cost, Global Surveyor was set on a course that took it initially into a huge sweeping elliptical orbit of Mars.

On its near approach in each orbit, the probe was to dip into the upper atmosphere of Mars in a maneuver known as "aerobraking," which would effectively slow the probe down and eventually place it into a near-circular orbit.

But a solar-panel damper failed early in the mission, and damage to the solar panel forced the team to slow down the aerobraking. At the current rate of aerobraking, Mars Global Surveyor will enter its circular mapping orbit in March 1999.

This has delayed the systematic mapping of Mars, but Albee says that the new mission plan nonetheless permits the collection of science data in a 12-hour elliptical orbit, from March to September of this year.

"Another exciting discovery is that the Martian crust exhibits layering to much greater depths than would have been expected," Albee says. "Steep walls of canyons, valleys, and craters show the Martian crust to be stratified at scales of a few tens of meters.

"At this point, it is simply not known whether these layers represent piles of volcanic flows or sedimentary rocks that might have formed in a standing body of water," he adds.

The Mars Global Surveyor team has previously announced that the on-board magnetometer shows Mars to have a more complex magnetic field than once thought. Of particular interest is the fact that the magnetic field was apparently once about the same strength as that of present-day Earth.

Many experts think that a strong magnetic field may be crucial for the evolution of life on a planet. Without a magnetic field, a planet tends to have any existing atmospheric particles blasted away by the solar wind in a process known as "sputtering."

And finally, the Mars Orbiter Laser Altimeter (MOLA) has already sent back 18 very good topographic profiles of the northern hemisphere. "Characterization of these features is leading to a new understanding of the surface processes of Mars," Albee says.

Robert Tindol

Researchers develop new plastic recording material that can be used to see through tissue without X rays

PASADENA—Researchers have recently achieved a certain amount of success in using laser light to see through scattering media such as human tissue. The new technology could eventually have medical applications in situations where X rays are ineffective or downright dangerous.

According to Seth Marder of the California Institute of Technology, his team and a group of collaborators from the University of Arizona have developed a photorefractive polymer that is highly sensitive to near infrared light.

Using this polymer, the group is now able to see through about half an inch of 2 percent milk. They have achieved this by custom-designing a polymer for a state-of-the-art laser holography setup.

Marder, a chemist at Caltech's Beckman Institute, says the work capitalizes on the fact that certain low-energy wavelengths can indeed penetrate solid substances to a certain extent. Visible light, radio and TV waves, infrared heat from an oven, and X rays are all manifestations of electromagnetic radiation that differ only in wavelength and energy.

"If you hold your hand up to a bright light source with your fingers closed tightly, you can actually see light coming through the skin itself," he explains. "In your body there are various things that absorb light such as amino acids, DNA, hemoglobin—which means that you can't see through them with ultraviolet or visible light.

"But many of these tissues stop absorbing light at a low enough energy," he says. "It turns out that, in the near infrared, your body is relatively transparent."

The goal, then, is to analyze the light that has penetrated the tissue, and this is where Bernard Kippelen and Nasser Peyghambarian and their team at the Optical Sciences Center of the University of Arizona come in. Using extremely fast lasers, the Arizona team can isolate the photons known as "ballistic photons" while filtering out the scattered photons that arrive an instant later. The scattered photons cause tremendous distortions of the image, rendering it useless; by filtering them out, leaving only the ballistic photons, it is possible to recapture the original image.

The filtering technique is known as holographic time gating; it relies on the ability of a femtosecond laser (one whose light pulses last a millionth of a billionth of a second) to isolate the information carried by the ballistic photons.

First, a laser pulse is shot at the tissue to be imaged while a reference beam originating from the same laser source is also introduced into the optical system from another angle. The ballistic photons then interact with laser pulses from the reference beam and record a hologram in the photorefractive polymer developed by Caltech and the University of Arizona.

That hologram contains only the image from the ballistic photons. The delayed scattered photons do not form an interference pattern with the reference pulses because they arrive later and consequently do not write a hologram.

The ballistic photon hologram can then be reconstructed by sending a third laser beam from the opposite side (parallel to the original beam hitting the tissue). This beam is altered by the hologram recorded in the polymer. This altered probe beam then is isolated by the optical system to reconstruct a nearly undistorted image of the object.
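The gating step described above can be sketched as a toy numerical model. The arrival times and gate width below are hypothetical illustrations, not parameters of the actual Arizona apparatus; only photons arriving within the short window set by the reference pulse contribute to the hologram:

```python
# Toy model of holographic time gating: photons arriving within the
# reference-pulse window are kept (ballistic); later, multiply-scattered
# photons fall outside the window and are rejected.
def time_gate(arrival_times_fs, gate_open_fs=0.0, gate_width_fs=100.0):
    """Return only the arrival times (femtoseconds) inside the gate window."""
    return [t for t in arrival_times_fs
            if gate_open_fs <= t <= gate_open_fs + gate_width_fs]

# Ballistic photons arrive first; scattered photons trickle in later.
arrivals = [20.0, 45.0, 90.0, 350.0, 1200.0, 5000.0]  # femtoseconds
ballistic = time_gate(arrivals)
print(ballistic)  # [20.0, 45.0, 90.0]
```

In the real apparatus the "gate" is not a software filter but the interference condition itself: only photons overlapping the reference pulse in time can write the hologram.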

The result is an image built of near-infrared light that has penetrated a short distance into the object, obtained without the use of high-energy radiation.

An application that Kippelen sees for the future is the imaging of human retinas. The onset of glaucoma can be detected quite early if certain structural changes just beneath the surface of the retina can be imaged.

Marder cautions that the techniques are still rudimentary, and even in their final form may not see through nearly as much tissue as X rays. But the object is not to replace X rays in all applications, he says—just certain ones in which the conditions are appropriate.

A recent technical report on the research can be found in the January 2 issue of the journal Science. In addition to Kippelen, Marder, and Peyghambarian, the other authors of the paper are Eric Hendrickx, Jose-Luis Maldonado, Gregoire Guillemet, Boris L. Volodin, Derek D. Steele, Yasufumi Enami, Sandalphon, Yonj-Jing Yao, Jiafu F. Wang, Harald Roeckel, and Lael Erskine.


Robert Tindol

