Researchers make plasma jets in the lab that closely resemble astrophysical jets

Astrophysical jets are one of the truly exotic sights in the universe. They are usually associated with accretion disks, which are disks of matter spiraling into a central massive object such as a star or a black hole. The jets are very narrow and shoot out along the disk axis for huge distances at incredibly high speeds.

Jets and accretion disks have been observed to accompany widely varying types of astrophysical objects, ranging from proto-star systems to binary stars to galactic nuclei. While the mechanism for jet formation is the subject of much debate, many of the proposed theoretical models predict that jets form as the result of magnetic forces.

Now, a team of applied physicists at the California Institute of Technology have brought this seemingly remote phenomenon into the lab. By using technology originally developed for creating a magnetic fusion configuration called a spheromak, they have produced plasmas that incorporate the essential physics of astrophysical jets. (Plasmas are ionized gases and are excellent electrical conductors; everyday examples of plasmas are lightning, the northern lights, and the glowing gas in neon signs.)

Reporting in an upcoming issue of the Monthly Notices of the Royal Astronomical Society, Caltech professor of applied physics Paul Bellan and postdoctoral scholar Scott Hsu describe how their work helps explain the magnetic dynamics of these jets. By placing two concentric copper electrodes and a coaxial coil in a large vacuum vessel and driving huge electric currents through hydrogen plasma, these scientists have succeeded in producing jet-like structures that not only resemble those in astronomical images, but also develop remarkable helical instabilities that could help explain the "wiggled" structure observed in some astrophysical jets.

"Photographs clearly show that the jet-like structures in the experiment form spontaneously," says Bellan, who studies laboratory plasma physics but chanced upon the astrophysical application when he was looking at how plasmas with large internal currents can self-organize. "We originally built this experiment to study spheromak formation, but it also dawned on us that the combination of electrode structure, applied magnetic field, and applied voltage is similar to theoretical descriptions of accretion disks, and so might produce jet-like plasmas."

The theory Bellan refers to states that jets can be formed when magnetic fields are twisted up by the rotation of accretion disks. Magnetic field lines in plasma are like elastic bands frozen into jello. The electric currents flowing in the plasma (jello) can change the shape of the magnetic field lines (elastic bands) and thus change the shape of the plasma as well. Magnetic forces associated with these currents squeeze both the plasma and its embedded magnetic field into a narrow jet that shoots out along the axis of the disk.

By applying a voltage differential across the gap between the two concentric electrodes, Bellan and Hsu effectively simulate an accretion disk spinning in the presence of a magnetic field. The coil produces magnetic field lines linking the two concentric electrodes in a manner similar to the magnetic field linking the central object and the accretion disk.

In the experiment an electric current of about 100 kiloamperes is driven through the tenuous plasma, resulting in two-foot-long jet-like structures traveling at approximately 90 thousand miles per hour. More intense currents cause a jet to become unstable so that it deforms into a theoretically predicted helical shape known as a kink. Even greater currents cause the kinked jets to break off and form a spheromak. The jets last about 5 to 10 millionths of a second, and are photographed with a special high-speed camera.

"These things are very scalable, which is why we're arguing that the work applies to astrophysics," Bellan explains. "If you made the experiment the size of Pasadena, for example, the jets might last one second; or if it were the size of the earth, they would last about 10 minutes. But obviously, that's impractical."

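To make Bellan's scaling argument concrete, here is a rough back-of-the-envelope sketch, not taken from the paper: if a jet's lifetime is roughly the time it takes material moving at the quoted 90,000 miles per hour (about 40 kilometers per second) to cross the apparatus, then the lifetime grows linearly with the size of the device. The Pasadena and Earth dimensions below are approximate values assumed for illustration.

```python
# Crude crossing-time estimate (an illustration, not the authors' calculation):
# lifetime ~ length / jet speed, so the lifetime scales linearly with size.

JET_SPEED_M_PER_S = 4.0e4  # roughly 90,000 miles per hour, as quoted in the article

def jet_lifetime_s(length_m):
    """Time for the jet to cross a device of the given size."""
    return length_m / JET_SPEED_M_PER_S

for label, length_m in [
    ("lab experiment (~2 ft)", 0.6),
    ("size of Pasadena (~20 km, assumed)", 2.0e4),
    ("size of Earth (~12,700 km diameter)", 1.27e7),
]:
    print(f"{label}: ~{jet_lifetime_s(length_m):.1e} s")
```

The three estimates come out near ten microseconds, half a second, and a few hundred seconds, the same orders of magnitude as the figures Bellan quotes.
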
The importance of the study, Bellan and Hsu say, is that it provides compelling evidence in support of the idea that astrophysical jets are formed by magnetic forces associated with rotating accretion disks, and it also provides quantitative information on the stability properties of these jets.

The work was supported by a grant from the U.S. Department of Energy.

Contact: Robert Tindol (626) 395-3631

Writer: RT

Astronomers discover the strongest known magnet in the universe

Astrophysicists at the California Institute of Technology, using the Palomar 200-inch telescope, have uncovered evidence that a special type of pulsar has the strongest magnetic field in the universe.

Reporting in the May 30 issue of the journal Nature, Caltech graduate student Brian Kern and his advisor Chris Martin report on the nature of pulses emanating from a faint object in the constellation Cassiopeia. Using a specially designed camera and the Palomar 200-inch telescope, the team discovered that a quarter of the visible light from the pulsar known as 4U0142+61 is pulsed, while only 3 percent of the X rays emanating from the object are pulsed, meaning that the pulsar must be an object known as a magnetar.

"We were amazed to see how strongly the object pulsed in optical light compared with X rays," said Martin, who is a professor of physics at Caltech. "The light had to be coming from a strong, rotating magnetic field rather than a disk of infalling gas."

To explain the precise chain of reasoning that led the team to their conclusion, a certain amount of explanation of the nature of stars and pulsars is in order. Normal stars are powered by nuclear fusion in their hot cores. When a massive star exhausts its nuclear fuel, its core collapses, causing a titanic "supernova" explosion.

The collapsing core forms a "neutron star" which is as dense as an atomic nucleus and the size of Los Angeles. The very weak magnetism of the original star is greatly amplified (a billion- to a trillion-fold) during the collapse. The slow rotation of the original star grows as well, just as an ice skater spins much faster when her arms are drawn in.

The combination of a strong magnetic field and rapid spin often produces a "pulsar," an object that sweeps its beam of light around like a lighthouse, but usually in the radio band of the electromagnetic spectrum. Pulsars have been discovered that rotate almost one thousand times every second. In the conventional pulsars that have been studied since their discovery in the 1960s, the source of the energy that produces this pulsing light is the rotation itself.

In the last decade, a new type of pulsar has been discovered that is very different from the conventional radio pulsar. This type of object, dubbed an "anomalous X-ray pulsar," has a very lazy rotation (one turn every 6 to 12 seconds) and pulses at X-ray frequencies but is invisible in radio waves. However, its X-ray power is hundreds of times the power provided by its slow rotation. Its source of energy is unknown, and therefore "anomalous." One of the brightest of these pulsars is 4U0142+61, named for its sky coordinates and detection by the Uhuru X-ray mission in the 1970s.

Two sources of energy for the X rays are possible. In the first model, bits of gas blown off in the supernova explosion fall back onto the resulting neutron star, whose magnetic field is no stronger than an ordinary pulsar's. As the gas slowly falls (accretes) onto the surface, it becomes hot and emits X rays.

A second model, proposed by Robert Duncan (University of Texas) and Christopher Thompson (Canadian Institute for Theoretical Astrophysics), holds that anomalous X-ray pulsars are magnetars, or neutron stars with ultra-strong magnetic fields. The magnetic field is so strong that it can power the neutron star by itself, generating X rays and optical light. Magnetic fields power solar flares in our own sun, but with only a tiny fraction of the power of nuclear fusion. Magnetars would be the only objects in the universe powered mainly by magnetism.

"Scientists would be thrilled to investigate these enormous magnetic fields, if they exist," says Kern. "Identifying 4U0142+61 as a magnetar is the essential first step in these studies."

The missing observational clue to distinguish between these very different power sources was provided by a novel camera designed to look at optical light coming from very faint pulsars. While most of the light appears at X-ray frequencies, anomalous X-ray pulsars emit a small amount of optical light. In pulsars powered by disks of gas, optical pulsations would be a diluted byproduct of X-ray pulsations, which are weak in this pulsar. A magnetar, on the other hand, would be expected to pulse as much in optical light as at X-ray frequencies, or more.

The problem is that the optical light from the object is extremely faint, about the brightness of a candle sitting on the moon. Astronomical cameras designed to look at very faint stars and galaxies must take very long exposures, as long as many hours, in order to detect the faint light, even with a 200-inch telescope. But in order to detect pulsations that repeat every eight seconds, the rotation period of 4U0142+61, exposure times must be very short, less than a second.

Martin and Kern invented a camera to solve this problem. The camera takes 10 separate pictures of the sky during a single rotation of the pulsar, each picture for less than one second. The camera then shuffles the pictures back to their starting point, and re-exposes the same 10 pictures for the next pulsar rotation. This exposure cycle is repeated hundreds of times before the camera data is recorded. The final image shows the pulsar at 10 different points in its repetitive cycle. During the cycle, part of the image is bright while part is dim. The large optical pulsations seen in 4U0142+61 show that it must be a magnetar.
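
The phase-binning trick the camera performs in hardware can be sketched in software. The following minimal example is written for illustration rather than taken from the instrument: it folds a long series of short exposures on the pulsar's 8-second period into 10 phase bins, so that a weak pulse accumulates over hundreds of rotations.

```python
import numpy as np

# Minimal software sketch (not the actual camera hardware) of the phase-binning
# idea: accumulate light into 10 bins keyed to the pulsar's rotational phase,
# so hundreds of 8-second rotations build up one 10-point pulse profile.

PERIOD_S = 8.0     # approximate rotation period of 4U0142+61
N_BINS = 10        # number of phase bins, as in the Kern/Martin camera

def fold(times_s, counts, period=PERIOD_S, n_bins=N_BINS):
    """Sum `counts` measured at `times_s` into phase bins of the period."""
    phases = (np.asarray(times_s) % period) / period   # phase in the range 0..1
    bins = (phases * n_bins).astype(int)               # which of the 10 bins each exposure hits
    profile = np.zeros(n_bins)
    np.add.at(profile, bins, counts)                   # accumulate counts per bin
    return profile

# Toy usage: a weak sinusoidal pulse buried in noise emerges after many cycles.
rng = np.random.default_rng(0)
t = np.arange(0, 3600, 0.8)                            # one hour of 0.8-second exposures
signal = 100 + 5 * np.sin(2 * np.pi * t / PERIOD_S)    # ~5 percent pulsed fraction
profile = fold(t, rng.poisson(signal))
print(np.round(profile / profile.mean(), 3))           # modulation across the 10 bins
```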

How strong is the magnetic field of this magnetar? It is as much as a quadrillion times the strength of the earth's magnetic field, and ten billion times as strong as the strongest laboratory magnet ever made. A toy bar magnet placed near the pulsar would feel a force of a trillion pounds pulling its ends into alignment with the pulsar's magnetic poles.

A magnetar would be an unsafe place for humans to go. Because the pulsar acts as a colossal electromagnetic generator, a person in a spacecraft floating above the pulsar as it rotated would feel 100 trillion volts between his head and feet.

The magnetism is so strong that it has bizarre effects even on a perfect vacuum, polarizing the light traveling through it. Kern and Martin hope to measure this polarization with their camera in the near future in order to measure directly the effects of this ultra-strong magnetism, and to study the behavior of matter in extreme conditions that will never be reproduced in the laboratory.

Additional information available at http://www.astro.caltech.edu/palomar/

Writer: Robert Tindol

Astrophysicists announce surprising discovery of extremely rare molecule in interstellar space

A rare type of ammonia that includes three atoms of deuterium has been found in a molecular cloud about 1,000 light-years from Earth. The comparative ease of detecting the molecules means there are more of them than previously thought.

In a study appearing in the May 20 issue of the Astrophysical Journal Letters, an international team of astronomers reports on the contents of a molecular cloud in the direction of the constellation Perseus. The observations were done with the Caltech Submillimeter Observatory atop Mauna Kea in Hawaii.

The molecule in question is called "triply deuterated ammonia," meaning that each molecule is composed of a nitrogen atom and three deuterium atoms (heavy hydrogen), rather than the nitrogen atom and three ordinary hydrogen atoms found in the typical bottle of household ammonia. While not unknown on Earth, the molecules, until recently, were thought by experts to be quite rare—so rare, in fact, that the substance was considered too sparse even to be detectable from Earth.

But now that scientists have detected triply deuterated ammonia in the interstellar medium, they're still wondering why they were able to do so at all, says Tom Phillips, a physics professor at the California Institute of Technology, director of the Caltech Submillimeter Observatory, and leader of the Caltech team. No other molecules containing three deuterium atoms have ever been detected in interstellar space.

"From simple statistics alone, the chances for all three hydrogen atoms in an ammonia molecule to be replaced by the very rare deuterium atoms are one in a million billion," Phillips explains. "This is like buying a $1 state lottery ticket two weeks in a row and winning a $30 million jackpot both weeks. Astronomical odds indeed!"

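Phillips's "simple statistics" can be reproduced with a couple of lines of arithmetic. The deuterium-to-hydrogen ratio used below, about 1.5 parts in 100,000, is a commonly quoted interstellar value assumed here for illustration; it is not taken from the paper.

```python
# If each of ammonia's three hydrogen atoms is independently replaced by
# deuterium with probability ~D/H, the chance that all three are replaced
# is (D/H) cubed.  The D/H value is an assumed, approximate figure.

D_TO_H = 1.5e-5  # rough interstellar deuterium-to-hydrogen ratio (assumed)

p_nd3 = D_TO_H ** 3
print(f"chance of ND3 by blind substitution: ~{p_nd3:.1e}")   # about 3e-15
print(f"that is, about one in {1 / p_nd3:.0e}")
```

The result is within a factor of a few of the one-in-a-million-billion odds Phillips quotes; the actual abundance of triply deuterated ammonia in these clouds is far higher, which is the puzzle the cold-cloud chemistry described next addresses.
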
As for the reasons the molecules would exist in the first place, says Dariusz Lis, a senior research associate in physics at Caltech and lead author of the paper, the frigid conditions of the dense interstellar medium allow the deuterium replacement of the hydrogen atoms to take place. At higher temperatures, there would be a back-and-forth exchange of the deuterium atoms between the ammonia molecules and the hydrogen molecules also present in the interstellar medium. But at the frosty 10-to-20 degrees above absolute zero that prevails in the clouds, the deuterium atoms prefer to settle into the ammonia molecules and stay there.

The study is important because it furthers the understanding of the chemistry of the cold, dense interstellar medium and the way molecules transfer from grains of dust to the gas phase, Phillips explains. The researchers think the triply deuterated ammonia was probably kicked off the dust grains by the energy of a young star forming nearby, thus returning to the gas state, where it could be detected by the Caltech Submillimeter Observatory.

The study was made possible because of the special capabilities of the Caltech Submillimeter Observatory, a 10.4-meter telescope constructed and operated by Caltech with funding from the National Science Foundation. The telescope is fitted with the world's most sensitive submillimeter detectors, making it ideal for seeking out the diffuse gases and molecules crucial to understanding star formation.

In addition to the Caltech observers, the team also included international members from France led by Evelyne Roueff and Maryvonne Gerin of the Observatoire de Paris, funded by the French CNRS, and astronomers from the Max-Planck-Institut für Radioastronomie in Germany.

The main Web site for the Caltech Submillimeter Observatory is at http://www.submm.caltech.edu/cso.


Writer: Robert Tindol

Cosmic Background Imager uncovers fine details of early universe

Cosmologists from the California Institute of Technology using a special instrument high in the Chilean Andes have uncovered the finest detail seen so far in the cosmic microwave background radiation (CMB), which originates from the era just 300,000 years after the Big Bang. The new images, in essence, are photographs of the cosmos before stars and galaxies existed, and they reveal, for the first time, the seeds from which clusters of galaxies grew.

The observations were made with the Cosmic Background Imager (CBI), which was designed especially to make fine-detailed high-precision pictures in order to measure the geometry of space-time and other fundamental cosmological quantities.

The CMB originated about 300,000 years after the Big Bang, and because matter at that remote epoch had not yet formed galaxies and stars, it provides a crucial experimental laboratory for cosmologists seeking to understand the origin and eventual fate of the universe. Tiny density fluctuations at that time grew under the influence of gravity to produce all the structures we see in the universe today, from clusters of galaxies down to galaxies, stars, and planets. These density fluctuations give rise to the temperature fluctuations seen in the microwave background.

First predicted soon after World War II and first detected in 1965, the CMB arose when matter got cool enough for the electrons and protons to combine to form atoms, at which point the universe became transparent. Before this time the universe was an opaque fog because light couldn't travel very far before hitting an electron.

The CBI results released today provide independent confirmation that the universe is "flat." The data also yield a good measurement of the amount of mysterious non-baryonic "dark matter" in the universe, matter that differs from the stuff everyday objects are made of. The results further confirm that "dark energy" plays an important role in the evolution of the universe.

According to Anthony Readhead, the Rawn Professor of Astronomy at Caltech and principal investigator on the CBI project, "These unique high-resolution observations give a powerful confirmation of the standard cosmological model. Moreover this is the first direct detection of the seeds of clusters of galaxies in the early universe."

The flat universe and the existence of "dark energy" lend additional empirical credence to the theory of "inflation," a popular theory to account for troubling details about the Big Bang and its aftermath, which states that the universe grew from a tiny subatomic region during a period of violent expansion a split second after the Big Bang.

Because it sees finer details in the CMB sky, the CBI goes beyond the recent successes of the BOOMERANG and MAXIMA balloon-borne experiments and the DASI experiment at the South Pole.

The previous findings relied on a simple model which the higher resolution CBI observations have verified. If the interpretation were incorrect, it would require nature to be doubly mischievous to be giving the same wrong answers from observations on both large and small angular scales.

"We have been fortunate at CITA to work closely with Caltech as members of both the CBI and BOOMERANG teams to help analyze the cosmological implications of these exquisite high precision experiments," says Richard Bond, director of the Canadian Institute for Theoretical Astrophysics. "It is hard to imagine a more satisfying marriage of theory and experiment."

Given the radical nature of the results coming from cosmological observations, it is crucial that all aspects of cosmological theory be thoroughly tested. The fact that the CBI and the other experiments observe at very different resolutions, with widely differing techniques, at different frequencies, and on different parts of the sky, yet agree so well, gives great confidence in the findings.

The CBI hardware was designed primarily by Steven Padin, chief scientist on the project, while the software was designed and implemented by senior research associate Timothy Pearson and staff scientist Martin Shepherd. Postdoctoral scholar Brian Mason and three graduate students, John Cartwright, Jonathan Sievers, and Patricia Udomprasert, all played critical roles in the project.

The photons we see today with instruments like the CBI, the earlier COBE satellite, and the BOOMERANG, MAXIMA, and DASI experiments have been traveling through the universe since they were first emitted from matter about 14 billion years ago.

The temperature differences observed in the CMB are so slight, only about one part in 100,000, that it has taken 37 years to get images with details as fine as those presented today. Though the cosmic microwave background was first detected with a ground-based antenna in 1965, it appeared quite smooth to earlier experimentalists because of the limitations of the instruments available to them. It was the COBE satellite in the early 1990s that first demonstrated slight variations in the cosmic microwave background. The celebrated COBE images covered the entire sky, but the finest details they could resolve were many times larger than any known structures in the present universe.

The CBI and the DASI instrument of the University of Chicago, which is operating at the South Pole, are sister projects that share much commonality of design, both making interferometry measurements of extremely high precision.

The BOOMERANG experiment, led by Caltech's Goldberger Professor of Physics Andrew Lange, demonstrated the flatness of the universe two years ago. The BOOMERANG observations, together with observations from the MAXIMA and DASI experiments, not only indicated the geometry of the universe, but also bolstered the inflation theory via accurate measurements of many of the fundamental cosmological parameters. The combination of these previous results with those announced today covers a range of angular scales from about one-tenth of a moon diameter to about one hundred moon diameters, and this gives great confidence in the combined results.

The CBI is a microwave telescope array comprising 13 separate antennas, each about three feet in diameter, set up in concert so that the entire machine acts as an interferometer. The detector is located at Llano de Chajnantor, a high plateau in Chile at 16,700 feet, making it by far the most sophisticated scientific instrument ever used at such high altitudes. The telescope is so high, in fact, that members of the scientific team must each carry bottled oxygen to do the work.

In five separate papers submitted today to the Astrophysical Journal, Readhead and his colleagues at Caltech, together with collaborators from the Canadian Institute for Theoretical Astrophysics, the National Radio Astronomy Observatory, the University of Chicago, the Universidad de Chile, the University of Alberta, the University of California at Berkeley, and the Marshall Space Flight Center, report on observations of the cosmic microwave background they have obtained since the CBI began operation in January 2000. The images obtained cover three patches of sky, each about 70 times the size of the moon, but showing fine details down to only one percent the size of the moon.

The next step for Readhead and his CBI team is to look for polarization in the photons of the cosmic microwave background. This will be a two-pronged attack involving both the CBI and DASI instruments and teams in complementary observations, which will enable them to tie down the value of these fundamental parameters with significantly higher precision. Funds for the upgrade of the CBI to polarization capability have been generously provided by the Kavli Institute.

The CBI is supported by the National Science Foundation, the California Institute of Technology, and the Canadian Institute for Advanced Research, and has also received generous support from Maxine and Ronald Linde, Cecil and Sally Drinkward, Stanley and Barbara Rawn, Jr., and the Kavli Institute.

Contact: Robert Tindol (626) 395-3631

Writer: RT

Gamma-ray bursts are caused by explosive death of massive stars, new study reveals

In two papers appearing in an upcoming issue of the Astrophysical Journal, an international team of astrophysicists led by Shri Kulkarni of the California Institute of Technology reveals that new data show that supernovae are the source of gamma-ray bursts.

The new information was obtained from a gamma-ray burst that was detected in November and studied by the Hubble Space Telescope, the Australia Telescope Compact Array, the Anglo-Australian Telescope, and optical telescopes in Chile.

For the last few years astronomers have been chasing clues linking the mysterious gamma-ray bursts to their favored suspect: massive stars. Previous observations hinted at debris from an exploding star, but the observations were inconclusive.

Careful observations of gamma-ray burst GRB 011121 have uncovered remnants of the exploded star, whose signature was buried in the bright, fading embers. Now, for the first time, two compelling tell-tale signatures of the massive star were observed.

As explained by Kulkarni, who is the MacArthur Professor of Astronomy and Planetary Sciences at Caltech and the head of the international team that made this discovery, "With these observations we have tied this gamma-ray burst to an exploding star. I am absolutely delighted that nature provided us with such a clean answer."

At the core of the observations, the data show that a supernova accompanied the burst. Supernovae are a natural consequence of exploding stars and are difficult to produce by other means. Joshua Bloom, a Caltech graduate student and lead author on the supernova paper, said, "It is not often that a graduate student gets the chance to make a major discovery. I am very fortunate to be involved in this one."

The astronomers were also able to deduce that the explosion took place in a cocoon of gas fed by a "wind" of matter emanating from the progenitor star. Paul Price, a graduate student at the Australian National University and a lead author on the second paper, said he was "intensely excited. Once it became clear that we had not only seen the supernova but also the cocoon, I was very happy; I couldn't sleep for days."

The gamma-ray burst in question was detected on November 21, 2001, by the Italian-Dutch satellite BeppoSAX in the southern-sky constellation of Chamaeleon. The position was quickly refined by a network of satellites. Astronomers from Poland and Chile, as well as another U.S. team from Harvard, used optical telescopes in Chile to rapidly identify the "afterglow," or glowing embers, of the gamma-ray burst and determined that the galaxy in which the burst was located was quite near, a paltry five billion light-years from Earth.

The sensitive optical and infrared observations were in part possible because of the relatively small distance to the burst. Given the proximity, the Caltech team decided to dedicate a large portion of their allocated Hubble Space Telescope time toward observing any possible supernova component. Kulkarni says of the decision, "We simply went for broke because of the potential payoff."

Kulkarni believes that this is just the beginning of a new era in our understanding of the death of massive stars. The stars die by collapsing, and the collapse both fuels the explosion and leaves a stellar residue of neutron stars and black holes. Indeed, theorists have long speculated that gamma-ray bursts are the birth cry of spinning black holes. New facilities such as the Chandra X-Ray Observatory, and future facilities such as gravitational-wave observatories and neutrino telescopes, will allow astronomers to investigate the dramatic collapse process.

Kulkarni cautions, however, that all is still not known about gamma-ray bursts. It may be that other exotic phenomena, such as two colliding neutron stars, or a neutron star colliding with a black hole, produce some of the events that we see. "Despite extensive efforts, until now we have not seen clear signatures for a cocoon in dozens of other gamma-ray bursts, and there have been only hints of a supernova in a few other bursts," Bloom says.

Price adds, "It means there will be lots more to do in the future. I have a secure thesis now!"

In addition to Kulkarni, Bloom, and Price, members of the team reporting the results are Caltech professors S. George Djorgovski and Fiona Harrison and postdoctoral fellows and scholars Daniel Reichart, Derek Fox, Titus Galama, and Re'em Sari. Edo Berger and Sara Yost are Caltech graduate students also on the team, as are Dale Frail from the National Radio Astronomy Observatory, and many other international collaborators. Separately, P. M. Garnavich of the University of Notre Dame and his collaborators have reached similar conclusions with data taken from the Magellan telescope in Chile.


Writer: Robert Tindol

JPL, Caltech, Smithsonian scientists improve methodology for monitoring HOx radicals

PASADENA, Calif.—Scientists have unraveled a mystery about hydrogen peroxide that may lead to a more accurate way of measuring a gas that contributes to depletion of Earth's ozone layer.

Scientists have long known that the HOx radicals, comprising hydroxyl (OH) and hydroperoxyl (HO2), destroy stratospheric ozone. Too little ozone may lead to unwelcome changes in the climate and to more ultraviolet radiation reaching Earth's surface. The HOx radicals cannot be easily measured in the atmosphere, but a product of their reaction with each other, hydrogen peroxide, is detectable.

The issue is important because atmospheric scientists would like to make global maps of HOx distributions to better understand the health of the atmosphere, and knowing how much peroxide is in the atmosphere is helpful in doing so.

However, there has always been a large, nagging discrepancy between the distribution of hydrogen peroxide as it is modeled and as it is observed. This suggests that complete understanding of the chemistry has been lacking. Now scientists from NASA's Jet Propulsion Laboratory (JPL), the California Institute of Technology, and the Harvard-Smithsonian Center for Astrophysics have resolved much of the disparity.

In an upcoming issue of the journal Geophysical Research Letters, the scientists report a collaborative laboratory study, performed at JPL and funded by NASA, that revealed an error in the calculated rate at which hydrogen peroxide is formed. They showed that improved knowledge of the reaction mechanism largely reconciles measurements of HOx radicals and hydrogen peroxide in the upper atmosphere. Moreover, these results could ultimately allow HOx concentrations to be inferred by monitoring hydrogen peroxide from space or the ground, assuming all the other photochemical reactions involving peroxide are well characterized.
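
The kind of inference the authors have in mind can be illustrated with a deliberately simplified steady-state estimate; this is a sketch of the general idea, not the paper's analysis. Hydrogen peroxide is produced by the self-reaction HO2 + HO2 -> H2O2 + O2 at a rate set by a rate constant k, and if that production is balanced against photolytic loss, the HO2 concentration inferred from a measured peroxide abundance scales as the square root of 1/k, so an error in k feeds directly into the inferred HOx. All numerical values below are placeholders.

```python
import math

# Simplified steady-state sketch (not the paper's retrieval): balance
# production of peroxide by HO2 + HO2 -> H2O2 + O2 (rate k*[HO2]^2) against
# its photolytic loss (rate J*[H2O2]) and solve for [HO2].

def inferred_ho2(h2o2, k_self, j_photolysis):
    """[HO2] implied by the balance k*[HO2]^2 = J*[H2O2]."""
    return math.sqrt(j_photolysis * h2o2 / k_self)

h2o2 = 1.0e9   # molecules per cm^3 (placeholder value)
j = 1.0e-5     # photolysis frequency in 1/s (placeholder value)

# Two hypothetical rate constants, standing in for an "old" and a "revised" k:
for k in (2.0e-12, 1.0e-12):
    print(f"k = {k:.1e} cm^3/s -> inferred [HO2] ~ {inferred_ho2(h2o2, k, j):.2e} per cm^3")
```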

"The importance is not so much the hydrogen peroxide itself, but the fact that it opens the possibility for remotely measuring hydrogen peroxide to infer the HOx radicals," says Mitchio Okumura, an associate professor of chemistry at Caltech and one of the authors of the study.

"The HOx radicals are central to the chemistry of the stratosphere and upper troposphere in understanding ozone depletion," he adds. Atmospheric chemists had puzzled over why models could not correctly predict hydrogen peroxide concentrations. However, they had not suspected that the rate for forming hydrogen peroxide from two hydroperoxyl radicals (HO2), the calculation of which was thought to be well known, could be in error.

Lance Christensen, a Caltech graduate student in chemistry working at JPL and lead author of the paper, showed that at the low temperatures relevant to the stratosphere, the actual reaction rate is slower than had been previously measured. The researchers found that earlier studies had neglected to take into account competing processes that could obscure the results, such as clustering and aggregation of the cold reactants.

"We're trying to improve our understanding of the atmosphere well enough to be able to model ozone depletion and climate change in general," says JPL researcher Stan Sander, one of the authors of the paper. "This work provides a tool for better understanding what's going on in the climate."

In addition to Okumura, Sander, Christensen, and Salawitch, the other authors are Geoffrey Toon, Bhaswar Sen, and Jean-Francois Blavier, all of JPL; and K.W. Jucks of the Harvard-Smithsonian Center for Astrophysics.

Writer: Robert Tindol

New Clues to the Processing of Memories

PASADENA, Calif.- Quick! Memorize this sentence: The temporoammonic (TA) pathway is an entorhinal cortex (EC) input that consists of axons from layer III EC neurons that make synaptic contacts on the distal dendrites of CA1 neurons.

If by chance you can't memorize this, say two researchers from the California Institute of Technology, it may be due to this very TA pathway that is modulating what your brain remembers.

In another clue toward understanding the processing of memories, graduate student Miguel Remondes and Erin M. Schuman, an associate professor of biology at Caltech and an assistant investigator of the Howard Hughes Medical Institute, have now gleaned two possible roles for the TA pathway that until now were not known. The research is reported in the April 18 issue of the journal Nature.

Using rat hippocampal slices, they've found that this pathway may be part of the brain's decision-making process about whether to keep a particular input and form a memory, or reject it.

Input from the senses (an odor, a sight, or a sound, say) is first received by the brain's cortex. Then, via a specific pathway of nerve fibers long known to scientists, the signals are sent on to the hippocampus. That organ processes the signals, then sends them back to the cortex, probably for long-term storage.

Scientists have also known about the TA pathway, but not its function. Now Remondes and Schuman report that the TA pathway may serve as a memory gatekeeper that can either enhance or diminish the signals of the specific set of neurons that form a memory. Further, they've shown that this pathway may also provide the hippocampus with the information it needs to form so-called place-selective cells; that is, cells that help animals to know where they are in their environments.

The hippocampal formation comprises several structures in the brain, including the seahorse-shaped hippocampus and a second structure called the dentate gyrus. The formation is involved in saving and retrieving long-term memories. Scientists divide the hippocampus into four regions, CA1 through CA4; CA1 and CA3 play major roles in processing memory.

In their quest to understand how communication between neurons contributes to memory, scientists have focused on the "trisynaptic circuit." When input from the senses reaches the cortex, it's sent on to the dentate gyrus, then on to the hippocampus. There, the signals are serially processed by synapses in areas CA3 and CA1 of the hippocampus (synapses are gaps between two neurons that function as the site of information transfer from one neuron to another). Finally, the hippocampus sends a signal back to the cortex. That's the trisynaptic circuit.

Remondes and Schuman found that the TA pathway also sends signals. But its input comes from a different part of the cortex and goes directly to the CA1 section of the hippocampus. The TA pathway's effect depends on how close in time its signal arrives relative to the original signal sent through the trisynaptic circuit. If the two are close, within 40 milliseconds, the TA pathway acts as a signal (and thus a memory) enhancer; that is, it allows a stronger synaptic signal from the hippocampus. If they are far apart, more than 400 milliseconds, it inhibits the signal.
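
The timing rule can be summarized in a few lines of code. This is only a cartoon of the finding: the thresholds come from the numbers quoted above, and the real modulation is graded rather than an all-or-nothing switch.

```python
# Cartoon of the timing-dependent gating described above (a simplification).

def ta_effect(delay_ms):
    """How the direct TA input modulates the trisynaptic signal in CA1."""
    if delay_ms <= 40:
        return "enhance"        # nearly coincident inputs: signal is strengthened
    if delay_ms >= 400:
        return "inhibit"        # widely separated inputs: signal is suppressed
    return "intermediate"       # between the two regimes reported

for delay in (20, 200, 600):
    print(f"{delay} ms -> {ta_effect(delay)}")
```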

"So the brain sends the information to the hippocampus," says Remondes, "and instead of just collecting the result of its activity, the hippocampus may very well perform 'quality control' on the potential memory. And it may be doing this by using the direct cortical input from the TA pathway." Perhaps, then, this is a further clue to how memories are stored—or forgotten.

In addition, although the scientists have not done any specific spatial memory experiments, their work may have relevance to how the brain forms place-selective cells. Since other studies previously established that the trisynaptic circuit is not necessary for spatial memory, some of the important information entering the hippocampus may actually be provided by the TA pathway.

"The TA pathway has been briefly described in the past, but not really acknowledged as a 'player' in the memory debate," says Remondes. "Hopefully, these findings will bring new insight into how we form, or don't form, memories."

Writer: MW

Researchers find new clue why Martian water is found at the north pole, not the south

When astronauts finally land on Mars, a safe bet is that they'll head for northern climes if they intend to spend much time there. That's because nearly all the available water is frozen as ice at the north pole. Planetary scientists have been aware of this for some time, but they now have a new clue why it is so.

In the March 21 issue of the journal Nature, California Institute of Technology researcher Mark Richardson and his colleague John Wilson of the National Oceanic and Atmospheric Administration reveal that the higher average elevation of the Red Planet's southern hemisphere ultimately tends to drive water northward.

Their evidence is based on a computer model the two have worked on for years (Wilson since 1992, Richardson since 1996), coupled with data returned by NASA's Mars Global Surveyor.

"We've found a mechanism in the Martian climate that introduces annual average hemispheric asymmetry," explains Richardson, an assistant professor of planetary science at Caltech. "The circulation systems of Mars and Earth are similar in certain ways, but Mars is different in that water is not available everywhere."

The key to understanding the phenomenon is complicated computer modeling of the Hadley circulation, which extends about 40 degrees of latitude on each side of the Martian equator. A topographical bias in the circulation pretty much means there will be a bias in the net pole-to-pole transport of water, Richardson explains.

A plausible explanation is that water ice is found at the north pole and carbon dioxide ice is found at the south for reasons having to do with the way the sun heats the atmosphere. As the Martian orbit changes on time scales of 50,000 years and more, these effects tend to cancel, with no pole claiming the water ice cap over geological time. It has been suggested that topography determines where carbon dioxide forms, and hence, where water ice can form, but the processes controlling carbon dioxide ice caps are poorly understood.

However, the mechanism Richardson and Wilson describe is independent of this occasional realignment of the pole's precession and the planet's eccentric orbit. The mechanism means that, while there is never a time in the past when water ice can be discounted at the south pole, it is much more likely to be found at the north pole.

The importance of the study lies in furthering our understanding of the Martian climate and the Martian water cycle. A better understanding of how water is transported will be particularly important in determining whether life once existed on Mars, and what happened to it if it ever did.

The Web address for the journal Nature is http://www.nature.com.

Contact: Robert Tindol (626) 395-3631

Writer: RT

Caltech and Purdue scientists determine structure of the dengue fever virus

Scientists at the California Institute of Technology and Purdue University have determined the fine-detail structure of the virus that causes dengue fever. This advance could lead to newer and more focused strategies for devising a vaccine to protect the world against a viral illness that causes 20,000 deaths each year.

Reporting in the March 8 issue of the journal Cell, Caltech biology professor James H. Strauss, lead author Richard J. Kuhn of Purdue (a former postdoctoral scholar in Strauss's lab), and Michael G. Rossman and Timothy S. Baker, both of Purdue, describe the structure of the virus, which they obtained with a cryoelectron microscope. The detailed electron-density map shows the inner RNA core of the virus as well as the other spherical layers that cover it. At the surface is the glycoprotein scaffolding thought to allow the virus to interact with the receptor and invade a host cell.

This is the first time the structure of one of the flaviviruses has been described, Strauss says. The flaviviruses are a class of viruses that includes the yellow fever, West Nile, tick-borne encephalitis, and Japanese encephalitis viruses. All are enclosed in a glycoprotein outer layer that includes minor projections out of the lipid layer due to the geometry of the scaffolding.

"Most viruses that cause serious illness are enveloped, including influenza, hantaviruses, West Nile virus, smallpox, and herpes—though not polio," Strauss says.

The surprise for the researchers was the unusual manner in which the glycoproteins are arranged. Details from the Caltech and Purdue computer-generated images show a highly variegated structure of glycoprotein molecules that are evenly dispersed, but with a surprisingly complicated pattern.

"It's symmetrical, but not with the obvious symmetry of most symmetric viruses," Strauss explains. "This was not an expected result."

Strauss says it's still unclear what the odd symmetry will ultimately mean for future research aimed at controlling the disease, because the precise functions of the different structural domains of the glycoproteins are still not known. The domains false-colored blue in the image are thought to be involved in receptor binding—and thus responsible for the virus's entry into a cell. The glycoprotein structures coded yellow are an elongated domain thought to be responsible for holding the scaffolding together; and the ones coded red have a function that is not yet known.

But a more detailed view of these structures is the beginning of a more informed strategy for a focused medical or pharmaceutical attack, Strauss says. "You can think of the protease inhibitors for HIV. Those in large part came from knowing the structure of the HIV enzymes you were trying to interfere with."

Thus, the new work could lead to drugs that will bind to the virus to prevent it from entering the cell, or perhaps from reassembling once it is already inside the cell.

Dengue fever is a mosquito-spread disease that has been known for centuries, but the virus was first isolated in the 1940s after it became a significant health concern for American forces in the Pacific theater. A worldwide problem, the disease is found throughout Latin America, the Caribbean, Southeast Asia, and India, and is currently at epidemic levels in Hawaii.

Especially virulent is the closely related dengue hemorrhagic fever, which is responsible for most of the deaths. The disease is a leading cause of infant mortality in Thailand, where there is an especially vigorous program to find an effective vaccine.

More information can be found on the Centers for Disease Control and Prevention Web site at http://www.cdc.gov/ncidod/dvbid/dengue/index.htm.

Contact: Robert Tindol (626) 395-3631

Writer: RT

Caltech astronomer to search for "hot Jupiters" with off-the-shelf camera lens

In an age when nearly all astronomical work requires really big telescopes, David Charbonneau is something of an anomaly. The Caltech astronomer will soon begin a multiyear survey for extrasolar planets at Palomar Observatory—not with the 200-inch Hale telescope, but with a tiny desktop-sized device he and JPL researcher John Trauger assembled largely from parts bought at a camera shop.

Basing his instrument on a standard 300-millimeter telephoto lens for a 35-millimeter camera, Charbonneau will begin sweeping the skies this spring in hopes of catching a slew of "hot Jupiters" as their fast orbits take them in front of other stars. Admittedly, the charge-coupled device at the camera-end of the lens is a good bit more costly than the lens itself, but the total budget for the project—$100,000—is still a paltry sum when one considers that the next generation of earthbound telescopes will likely cost upwards of $400 million apiece.

Charbonneau, a recent import to the Caltech astronomy staff from the Harvard-Smithsonian Center for Astrophysics, is one of the world's leading authorities on the search for "transiting planets," or planets that should be detectable as they pass into the line of sight between their host star and Earth. In November, Charbonneau and his colleagues made international news when they discovered the first planetary atmosphere outside our own solar system. But that work was done with the Hubble Space Telescope. The yet-to-be-formally-named telescope at Palomar Observatory will certainly be more modest in cost, but it will carry out every bit as ambitious a program for searching out other worlds.

"Basically, the philosophy of this project is that, if we can buy the stuff we need off the shelf, we'll buy it," the Canadian native said recently in his new campus office.

At the front end of the new instrument is a standard 300-millimeter camera lens. Charbonneau settled on a telephoto lens because he reasoned that the optics have been honed to a fine degree of precision over the years. He also assumed that the lens would be robust enough to last the duration of the three-year project.

The charge-coupled device (CCD), a standard imaging tool in astronomy for the last couple of decades, is a $22,000 item that accounts for the largest part of the instrument budget. The CCD will be mounted in a specially constructed camera housing to fit at the back of the lens, and the entire device will be fitted onto an inexpensive equatorial mount—also available at many stores carrying amateur astronomical equipment.

Meanwhile, the Palomar staff has stored away a 20-inch telescope so that Charbonneau will have a small dome for his new instrument, and is also making other preparations to automate the actual observing so that a telescope operator will not have to be on site at all times.

Palomar Observatory engineer Hal Petrie says the mountain crew is currently busy linking the new telescope with an existing weather-monitoring system at the nearby 48-inch dome, where another automated telescope is located. The system monitors the atmospheric conditions to determine whether the dome should be opened.

"The new telescope is a very good use of space," says Petrie. "The potential for results is very exciting."

Charbonneau will be able to photograph a single square of sky, about 5 degrees by 5 degrees. That's a field of view in which about 100 full moons could fit. Or, if one prefers, a field of view in which an entire constellation can be seen at one time.

With special software Charbonneau helped develop during his time at Harvard-Smithsonian and at the National Center for Atmospheric Research, he will compare many pictures of the same patch of sky to see if any of the thousands of stars in each field have slightly changed. If the software turns up a star that has dimmed slightly, the reason could well be that a planet passed in front of the star between exposures.
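
The comparison step can be sketched in a few lines. The code below illustrates the general idea rather than Charbonneau's actual pipeline: given repeated brightness measurements of every star in the field, it flags stars whose light dips by roughly the one percent a transiting hot Jupiter would produce.

```python
import numpy as np

# Minimal sketch of the comparison step (not the actual survey software):
# rows of `flux` are successive images of the same field, columns are stars;
# flag stars that dip noticeably below their typical brightness.

def flag_dimmed_stars(flux, depth=0.01, nsigma=3.0):
    """Return indices of stars whose deepest dip exceeds both ~`depth` of the
    median flux and `nsigma` times the star's own scatter."""
    flux = np.asarray(flux, dtype=float)
    median = np.median(flux, axis=0)        # per-star baseline brightness
    scatter = np.std(flux, axis=0)          # per-star noise level
    dip = median - flux.min(axis=0)         # deepest observed dimming
    return np.where((dip > depth * median) & (dip > nsigma * scatter))[0]

# Toy usage: 200 images of 1,000 stars; star 42 dims by 1.5% in a few frames.
rng = np.random.default_rng(1)
flux = rng.normal(1.0, 0.002, size=(200, 1000))
flux[100:105, 42] -= 0.015
print(flag_dimmed_stars(flux))              # expected output: [42]
```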

Repeated measurements will allow Charbonneau to measure the orbital period and physical size of each planet, and further work with the 10-meter telescopes at the Keck Observatory will allow him and his colleagues to get spectrographic data, and thus, the mass and composition of each planet.

"Once you get the mass and size, you have the density," he says. Weather permitting, Charbonneau will be able to get up to 300 images during an ideal night. Assuming that he can have 20 good nights per month, he should have about 6,000 images each month show up in his computer.

The ideal time will be in the fall and winter, when the Milky Way is in view, and an extremely high number of stars can be squeezed into each photograph. This, too, is an anomaly in astrophysical research, particularly to cosmologists, for whom the Milky Way is pretty much a blocked view of the deep sky.

"It's estimated that about one in three stars in our field of view will be like the sun, and that one percent of sunlike stars will have a hot Jupiter, or a gas giant that is so close to the star that its orbit is about four or five days," he says.

"One-tenth of this 1 percent will be inclined in the right direction so that the planet will pass in front of its star, so that maybe one in 3,000 stars will have a planet we can detect," Charbonneau adds. "Or if you want to be conservative, about one in 6,000."

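Multiplying Charbonneau's fractions together shows where the one-in-3,000 figure comes from. The star count per field used below is an assumption for illustration; the article says only that each field contains thousands of stars.

```python
# Arithmetic behind the quoted odds, using the fractions given in the article.

frac_sunlike = 1 / 3        # stars in the field similar to the sun
frac_hot_jupiter = 0.01     # sunlike stars hosting a hot Jupiter
frac_transiting = 0.10      # hot Jupiters aligned so that they transit

p_detectable = frac_sunlike * frac_hot_jupiter * frac_transiting
print(f"about one star in {1 / p_detectable:,.0f}")        # one in 3,000

stars_per_field = 10_000    # assumed count; the article says "thousands of stars"
print(f"expected transiting hot Jupiters per field: ~{stars_per_field * p_detectable:.1f}")
```
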
Compared to other research programs in astronomy, the search for hot Jupiters is fairly simple and straightforward to explain to the public, Charbonneau says.

"An amateur could do this, except maybe for the debugging of the software, which requires several people working 10 hours a day.

"But it's easy to understand what's going on, and cheap to build the equipment. That's why everyone thinks it's an ideal project—if it works."

The new Palomar telescope is the final instrument in a network of three. Of the other two, one is located in the Canary Islands and operated by the National Center for Atmospheric Research; the other is near Flagstaff, Arizona, and is operated by Lowell Observatory.

The large span in longitude of the three-instrument network will allow Charbonneau and his colleagues to observe a patch of sky with one telescope while the patch is above the horizon in the night sky, and then pass it off to the next westward telescope as the sun comes up.

Contact: Robert Tindol (626) 395-3631

Writer: RT
