New Sky Survey Begins at Palomar Observatory

Photos available at http://www.astro.caltech.edu/palomar/oschin_telescope.htm

PALOMAR Mountain, Calif. — A major new sky survey has begun at the Palomar Observatory. The Palomar-QUEST survey, a collaborative venture between the California Institute of Technology, Yale University, the Jet Propulsion Laboratory, and Indiana University, will explore the universe from our solar system out to the most distant quasars, more than 10 billion light-years away.

The survey will be done using the newly refurbished 48-inch Oschin Telescope, originally used to produce major photographic sky atlases starting in the 1950s. At its new technological heart is a very special, fully digital camera containing 112 digital imaging detectors, known as charge-coupled devices (CCDs); until now, the largest astronomical camera had 30 CCDs. CCDs are used in digital imaging devices ranging from common snapshot cameras to sophisticated scientific instruments.

Designed and built by scientists at Yale and Indiana Universities, the QUEST (Quasar Equatorial Survey Team) camera was recently installed on the Oschin Telescope. "We are excited by the new data we are starting to obtain from the Palomar Observatory with the new QUEST camera," says Charles Baltay, Higgins Professor of Physics and Astronomy at Yale University. Baltay's dream of building a large electronic camera that could capture the entire field of view of a wide-field telescope is now a reality.

The survey will generate astronomical data at an unprecedented rate, about one terabyte per month; a terabyte is a million megabytes, an amount of information approximately equivalent to that contained in two million books. In two years, the survey will generate an amount of information roughly equal to that in the entire Library of Congress.
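The quoted data volumes are easy to check with a back-of-the-envelope calculation. The ~0.5-megabyte-per-book figure below is an illustrative assumption, not a number from the release:

```python
# Back-of-the-envelope check of the survey's quoted data volumes.
MB_PER_TB = 1_000_000      # "a terabyte is a million megabytes"
MB_PER_BOOK = 0.5          # assumption: roughly 500 KB of text per book

books_per_terabyte = MB_PER_TB / MB_PER_BOOK
print(books_per_terabyte)  # about 2 million books, matching the release

# At one terabyte per month, two years of surveying yields:
tb_in_two_years = 1 * 24   # 24 TB, comparable to common estimates of the
print(tb_in_two_years)     # Library of Congress's printed collection
```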

A major new feature of the Palomar-QUEST survey will be many repeated observations of the same portions of the sky, enabling researchers to find not only objects that move (like asteroids or comets), but also objects that vary in brightness, such as supernova explosions, variable stars, quasars, or cosmic gamma-ray bursts--and to do this at an unprecedented scale.

"Previous sky surveys provided essentially digital snapshots of the sky," says S. George Djorgovski, professor of astronomy at Caltech. "Now we are starting to make digital movies of the universe." Djorgovski and his team, in collaboration with the Yale group, are also planning to use the survey to discover large numbers of very distant quasars--highly luminous objects believed to be powered by massive black holes in the centers of young galaxies--and to use them to probe the early stages of the universe.

Richard Ellis, Steele Professor of Astronomy and director of the Caltech Optical Observatories, will use QUEST in the search for exploding stars, known as supernovae. He and his team, in conjunction with the group from Yale, will use their observations of these exploding stars in an attempt to confirm or deny the recent finding that our universe is accelerating as it expands.

Shri Kulkarni, MacArthur Professor of Astronomy and Planetary Science at Caltech, studies gamma-ray bursts, the most energetic stellar explosions in the cosmos. They are short lived and unpredictable. When a gamma-ray burst is detected its exact location in the sky is uncertain. The automated Oschin Telescope, armed with the QUEST camera's wide field of view, is poised and ready to pin down the exact location of these explosions, allowing astronomers to catch and study the fading glows of the gamma-ray bursts as they occur.

Closer to home, Caltech associate professor of planetary astronomy Mike Brown is looking for objects at the edge of our solar system, in the icy swarm known as the Kuiper Belt. Brown is convinced that there are big objects out there, possibly as big as the planet Mars. He, in collaboration with astronomer David Rabinowitz of Yale, will use QUEST to look for them.

Steve Pravdo, project manager for the Jet Propulsion Laboratory's Near-Earth Asteroid Tracking (NEAT) Project, will use QUEST to continue the NEAT search, which began in 2001. The QUEST camera will extend the search for asteroids that might one day approach or even collide with our planet.

The Palomar-QUEST survey will undoubtedly enable many other kinds of scientific investigations in the years to come. The intent is to make all of the copious amounts of data publicly available in due time on the Web, as a part of the nascent National Virtual Observatory. Roy Williams, member of the professional staff of Caltech's Center for Advanced Computing Research, is working on the National Virtual Observatory project, which will greatly increase the scientific impact of the data and ease its use for public and educational outreach as well.

The QUEST team members from Indiana University are Jim Musser, Stu Mufson, Kent Honeycutt, Mark Gebhard, and Brice Adams. Yale University's team includes Charles Baltay, David Rabinowitz, Jeff Snyder, Nick Morgan, Nan Ellman, William Emmet, and Thomas Hurteau. The members from the California Institute of Technology are S. George Djorgovski, Richard Ellis, Ashish Mahabal, and Roy Williams. The Near-Earth Asteroid Tracking team from the Jet Propulsion Laboratory consists of Raymond Bambery, principal investigator, and coinvestigators Michael Hicks, Kenneth Lawrence, Daniel MacDonald, and Steven Pravdo.

Installation of the QUEST camera at the Palomar Observatory was overseen by Robert Brucato, Robert Thicksten, and Hal Petrie.

Related Link: Palomar Observatory http://www.astro.caltech.edu/palomar

MEDIA CONTACT: Scott Kardel, Palomar Public Affairs Director (760) 742-2111 wsk@astro.caltech.edu

Visit the Caltech media relations web site: http://pr.caltech.edu/media

Writer: 
SK

A Detailed Map of Dark Matter in a Galactic Cluster Reveals How Giant Cosmic Structures Formed

Astrophysicists have had an exceedingly difficult time charting the mysterious stuff called dark matter that permeates the universe because it's--well--dark. Now, a unique "mass map" of a cluster of galaxies shows in unprecedented detail how dark matter is distributed with respect to the shining galaxies. The new comparison gives a convincing indication of how dark matter figures into the grand scheme of the cosmos.

Using a technique based on Einstein's theory of general relativity, an international group of astronomers led by Jean-Paul Kneib, Richard Ellis, and Tommaso Treu of the California Institute of Technology mapped the mass distribution of a gigantic cluster of galaxies about 4.5 billion light-years from Earth. They did this by studying the way the cluster bends the light from other galaxies behind it. This technique, known as gravitational lensing, allowed the researchers to infer the mass contribution of the dark matter, even though it is otherwise invisible.

Clusters of galaxies are the largest stable systems in the universe and ideal "laboratories" for studying the relationship between the distributions of dark and visible matter. Caltech's Fritz Zwicky realized in 1937 from studies of the motions of galaxies in the nearby Coma cluster that the visible component of a cluster--the stars in galaxies--represents only a tiny fraction of the total mass. About 80 to 85 percent of the matter is invisible.

In a campaign of over 120 hours of observations using the Hubble Space Telescope, the researchers surveyed a patch of sky almost as large as the full moon, which contained the cluster and thousands of more distant galaxies behind it. The distorted shapes of these distant systems were used to map the dark matter in the foreground cluster. The study achieved a new level of precision, not only for the center of the cluster, as has been done before for many systems, but also for the previously uncharted outlying regions.

The result is the most comprehensive study to date of the distribution of dark matter and its relationship to the shining galaxies. Signals were traced as far out as 15 million light-years from the cluster center, a much larger range than in previous investigations.

Many researchers have tried to perform these types of measurements with ground-based telescopes, but the technique relies heavily on measuring the exact shapes of distant galaxies behind the cluster, and for this the "surgeon's eye" of the Hubble Space Telescope is far superior.

The study, to be published soon in the Astrophysical Journal, reveals that the density of dark matter falls fairly sharply with distance from the cluster center, defining a limit to its distribution and hence the total mass of the cluster. The falloff in density with radius confirms a picture that has emerged from detailed computer simulations over the past years.

Team member Richard Ellis said, "Although theorists have predicted the distribution of dark matter in clusters from numerical simulations based on the effects of gravity, this is the first time we have convincing observations on large scales to back them up.

"Some astronomers had speculated clusters might contain large reservoirs of dark matter in their outermost regions," Ellis added. "Assuming our cluster is representative, this is not the case."

In finer detail, the team noticed that some structure emerged from their map of the dark matter. For example, they found localized concentrations of dark matter associated with galaxies known to be slowly falling into the system. Overall, there is a striking correspondence between features in the dark matter map and those delineated by the cluster galaxies, which is an important result of the new study.

"The close association of dark matter with structure in the galaxy distribution is convincing evidence that clusters like the one studied built up from the merging of smaller groups of galaxies, which were prevented from flying away by the gravitational pull of their dark matter," says Jean-Paul Kneib, the lead author of the publication.

Future investigations will extend this work using Hubble's new camera, the Advanced Camera for Surveys (ACS), which will be trained on a second cluster later this year. ACS is 10 times more efficient than the Wide Field and Planetary Camera 2, which was used for this investigation. With the new instrument, it will be possible to study finer clumps of mass in galaxy clusters in order to investigate how the clusters were originally assembled.

By tracing the distribution of dark matter in the most massive structure in the universe using the powerful trick of gravitational lensing, astronomers are making great progress towards a better understanding of how such systems were assembled, as well as toward defining the key role of dark matter.

In addition to Kneib, Ellis, and Treu, the other team members are Patrick Hudelot of the Observatoire Midi-Pyrénées in France, Graham P. Smith of Caltech, Phil Marshall of the Mullard Radio Observatory in England, Oliver Czoske of the Institut für Astrophysik und Extraterrestrische Forschung in Germany, Ian Smail of the University of Durham in England, and Priya Natarajan of Yale University.

For more information, please contact:

Jean-Paul Kneib Caltech/Observatoire Midi-Pyrénées (currently in Hawaii) Phone: (808) 881-3865 E-mail: jean-paul.kneib@ast.obs-mip.fr

Richard Ellis Caltech Phone: (626) 395-4970 (secretary) (Australia: Cellular: 011-44-7768-923277) E-mail: rse@astro.caltech.edu

Writer: 
RT

International Teams Set New Long-range Speed Record with Next-generation Internet Protocol

Scientists at the California Institute of Technology (Caltech) and the European Organization for Nuclear Research (CERN) have set a new Internet2 land speed record using the next-generation Internet protocol IPv6. The team sustained a single-stream TCP rate of 983 megabits per second for more than one hour between the CERN facility in Geneva and Chicago, a distance of more than 7,000 kilometers. This is equivalent to transferring a full CD in 5.6 seconds.
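The CD comparison follows directly from the sustained rate; the 700-megabyte CD capacity below is an assumed round figure:

```python
# Check the "full CD in 5.6 seconds" comparison.
rate_mbit_s = 983        # sustained single-stream TCP rate, in megabits/s
cd_megabytes = 700       # assumption: capacity of a full CD

# Convert megabytes to megabits (x8), then divide by the rate.
transfer_seconds = cd_megabytes * 8 / rate_mbit_s
print(round(transfer_seconds, 1))  # ~5.7 s, consistent with the quoted 5.6 s
```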

The performance is remarkable because it overcomes two important challenges:

· IPv6 forwarding at gigabit-per-second speeds
· High-speed TCP performance across high bandwidth/latency networks.

This major step towards demonstrating how effectively IPv6 can be used should encourage scientists and engineers in many sectors of society to deploy the next-generation Internet protocol, the Caltech researchers say.

This latest record by Caltech and CERN is a further step in an ongoing research-and-development program to develop high-speed global networks as the foundation of next generation data-intensive grids. Caltech and CERN also hold the current Internet2 land speed record in the IPv4 class, where IPv4 is the traditional Internet protocol that carries 90 percent of the world's network traffic today. In collaboration with the Stanford Linear Accelerator Center (SLAC), Los Alamos National Laboratory, and the companies Cisco Systems, Level 3, and Intel, the team transferred one terabyte of data across 10,037 kilometers in less than one hour, from Sunnyvale, California, to Geneva, Switzerland. This corresponds to a sustained TCP rate of 2.38 gigabits per second for more than one hour.
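The IPv4-record figures quoted above are self-consistent, as a quick calculation shows (using the decimal convention of one terabyte = 10^12 bytes):

```python
# Check that one terabyte in under an hour matches ~2.38 Gbit/s sustained.
terabyte_bits = 1e12 * 8   # one terabyte expressed in bits
rate_bit_s = 2.38e9        # the quoted sustained TCP rate

transfer_seconds = terabyte_bits / rate_bit_s
print(round(transfer_seconds))  # ~3361 s, i.e. about 56 minutes
```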

Multi-gigabit-per-second IPv4 and IPv6 end-to-end network performance will lead to new research and business models. People will be able to form "virtual organizations" of planetary scale, sharing in a flexible way their collective computing and data resources. In particular, this is vital for projects on the frontiers of science and engineering, projects such as particle physics, astronomy, bioinformatics, global climate modeling, and seismology.

Harvey Newman, professor of physics at Caltech, said, "This is a major milestone towards our dynamic vision of globally distributed analysis in data-intensive, next-generation high-energy physics (HEP) experiments. Terabyte-scale data transfers on demand, by hundreds of small groups and thousands of scientists and students spread around the world, is a basic element of this vision; one that our recent records show is realistic. IPv6, with its increased address space and security features, is vital for the future of global networks, and especially for organizations such as ours, where scientists from all world regions are building computing clusters on an increasing scale, and where we use computers, including wireless laptops and mobile devices, in all aspects of our daily work.

"In the future, the use of IPv6 will allow us to avoid network address translations (NAT) that tend to impede the use of video-advanced technologies for real-time collaboration," Newman added. "These developments also will empower the broader research community to use peer-to-peer and other advanced grid architectures in support of their computationally intensive scientific goals."

Olivier Martin, head of external networking at CERN and manager of the DataTAG project said, "These new records clearly demonstrate the maturity of IPv6 protocols and the availability of suitable off-the-shelf commercial products. They also establish the feasibility of transferring very large amounts of data using a single TCP/IP stream rather than multiple streams as has been customarily done until now by most researchers as a quick fix to TCP/IP's congestion avoidance algorithms. I am optimistic that the various research groups working on this issue will now quickly release new TCP/IP stacks having much better resilience to packet losses on long-distance multi-gigabit-per-second paths, thus allowing similar or even better records to be established across shared Internet backbones."

The team used the optical networking capabilities of the LHCnet, DataTAG, and StarLight and gratefully acknowledges support from the DataTAG project sponsored by the European Commission (EU Grant IST-2001-32459), the DOE Office of Science, High Energy and Nuclear Physics Division (DOE Grants DE-FG03-92-ER40701 and DE-FC02-01ER25459), and the National Science Foundation (Grants ANI 9730202, ANI-0230967, and PHY-0122557).

About the California Institute of Technology (Caltech):

With an outstanding faculty, including four Nobel laureates, and such off-campus facilities as Palomar Observatory and the W. M. Keck Observatory, the California Institute of Technology is one of the world's major research centers. The Institute also conducts instruction in science and engineering for a student body of approximately 900 undergraduates and 1,000 graduate students who maintain a high level of scholarship and intellectual achievement. Caltech's 124-acre campus is situated in Pasadena, California, a city of 135,000 at the foot of the San Gabriel Mountains, approximately 30 miles inland from the Pacific Ocean and 10 miles northeast of the Los Angeles Civic Center. Caltech is an independent, privately supported university, and is not affiliated with either the University of California system or the California State Polytechnic universities. More information is available at http://www.caltech.edu.

About CERN:

CERN, the European Organization for Nuclear Research, has its headquarters in Geneva, Switzerland. At present, its member states are Austria, Belgium, Bulgaria, Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland, and the United Kingdom. Israel, Japan, the Russian Federation, the United States of America, Turkey, the European Commission, and UNESCO have observer status. For more information, see http://www.cern.ch.

About the European Union DataTAG project:

DataTAG is a project co-funded by the European Union, the U.S. Department of Energy, and the National Science Foundation. It is led by CERN together with four other partners. The project brings together the following leading European research agencies: Italy's Istituto Nazionale di Fisica Nucleare (INFN), France's Institut National de Recherche en Informatique et en Automatique (INRIA), the UK's Particle Physics and Astronomy Research Council (PPARC), and Holland's University of Amsterdam (UvA). The DataTAG project is very closely associated with the European Union DataGrid project, the largest grid project in Europe, also led by CERN. For more information, see http://www.datatag.org.


Writer: 
Robert Tindol

Hydrogen economy might impact Earth's stratosphere, study shows

According to conventional wisdom, hydrogen-fueled cars are environmentally friendly because they emit only water vapor -- a naturally abundant atmospheric gas. But leakage of the hydrogen gas that can fuel such cars could cause problems for the upper atmosphere, new research shows.

In an article appearing this week in the journal Science, researchers from the California Institute of Technology report that the leaked hydrogen gas that would inevitably result from a hydrogen economy, if it accumulates, could indirectly cause as much as a 10-percent decrease in atmospheric ozone. The researchers are physics research scientist Tracey Tromp, assistant professor of geochemistry John Eiler, planetary science professor Yuk Yung, planetary science research scientist Run-Lie Shia, and Jet Propulsion Laboratory scientist Mark Allen.

If hydrogen were to replace fossil fuel entirely, the researchers estimate that 60 to 120 trillion grams of hydrogen would be released each year into the atmosphere, assuming a 10-to-20-percent loss rate due to leakage. This is four to eight times as much hydrogen as is currently released into the atmosphere by human activity, and would result in doubling or tripling of inputs to the atmosphere from all sources, natural or human.
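Working backwards, the figures above are mutually consistent; the 600-trillion-gram annual throughput below is implied by the numbers in the release, not stated in it:

```python
# Consistency check of the hydrogen-leakage estimates.
leak_low_pct, leak_high_pct = 10, 20     # assumed 10-20% loss rate
leaked_low, leaked_high = 60e12, 120e12  # grams of H2 leaked per year

# Both bounds imply the same total annual hydrogen throughput:
total_low = leaked_low * 100 / leak_low_pct    # 600 trillion grams/year
total_high = leaked_high * 100 / leak_high_pct
print(total_low == total_high)                 # True

# "Four to eight times" the current human release implies ~15 Tg/yr today:
current_low = leaked_low / 4                   # 15 trillion grams/year
current_high = leaked_high / 8
print(current_low == current_high)             # True
```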

Because molecular hydrogen freely moves up and mixes with stratospheric air, the result would be the creation of additional water at high altitudes and, consequently, an increased moistening of the stratosphere. This in turn would result in cooling of the lower stratosphere and disturbance of ozone chemistry, which depends on a chain of chemical reactions involving hydrochloric acid and chlorine nitrate on water ice.

The estimates of potential damage to stratospheric ozone levels are based on an atmospheric modeling program that tests the various scenarios that might result, depending on how much hydrogen ends up in the stratosphere from all sources, both natural and anthropogenic.

Ideally, a hydrogen fuel-cell vehicle has no environmental impact. Energy is produced by combining hydrogen with oxygen pulled from the atmosphere, and the tailpipe emission is water. The hydrogen fuel could come from a number of sources (Iceland recently started pulling it out of the ground). Nuclear power could be used to generate the electricity needed to split water, and in principle, the electricity needed could also be derived from renewable sources such as solar or wind power.

By comparison, the internal combustion engine uses fossil fuels and produces many pollutants, including soot, noxious nitrogen and sulfur gases, and the "greenhouse gas" carbon dioxide. While a hydrogen fuel-cell economy would almost certainly improve urban air quality, it has the potential for unexpected consequences due to the inevitable leakage of hydrogen from cars, from hydrogen production facilities, and from the transportation of the fuel.

Uncertainty remains about the effects on the atmosphere because scientists still have a limited understanding of the hydrogen cycle. At present, it seems likely such emissions could accumulate in the air. Such a build-up would have several consequences, chief of which would be a moistening and cooling of the upper atmosphere and, indirectly, destruction of ozone.

In this respect, hydrogen would be similar to the chlorofluorocarbons (once the standard substance used for air conditioning and refrigeration), which were intended to be contained within their devices, but which in practice leaked into the atmosphere and attacked the stratospheric ozone layer.

The authors of the Science article say that the current situation is unique in that society has the opportunity to understand the potential environmental impact well ahead of the growth of a hydrogen economy. This contrasts with the cases of atmospheric carbon dioxide, methyl bromide, CFCs, and lead, all of which were released into the environment by humans long before their consequences were understood.

"We have an unprecedented opportunity this time to understand what we're getting into before we even switch to the new technology," says Tromp, the lead author. "It won't be like the case with the internal-combustion engine, when we started learning the effects of carbon dioxide decades later."

The question of whether or not hydrogen is bad for the environment hinges on whether the planet has the ability to consume excess anthropogenic hydrogen, explains Eiler. "This man-made hydrogen will either be absorbed in the soil -- a process that is still poorly understood but likely free of environmental consequences -- or react with other compounds in the atmosphere.

"The balance of these two processes will be key to the outcome," says Eiler. "If soils dominate, a hydrogen economy might have little effect on the environment. But if the atmosphere is the big player, the stratospheric cooling and destruction of ozone modeled in this Science paper are more likely to occur.

"Determining which of these two processes dominates should be a solvable problem," states Eiler, whose research group is currently exploring the natural budget of hydrogen using new isotopic techniques.

"Understanding the effects of hydrogen on the environment now should help direct the technologies that will be the basis of a hydrogen economy," Tromp adds. "If hydrogen emissions present an environmental hazard, then recognizing that hazard now can help guide investments in technologies to favor designs that minimize leakage.

"On the other hand, if hydrogen is shown to be environmentally friendly in every respect, then designers could pursue the most cost-effective technologies and potentially save billions in needless safeguards."

"Either way, it's good for society that we have an emission scenario at this stage," says Eiler. "In past cases -- with chlorofluorocarbons, nitrogen oxides, methane, methyl bromide, carbon dioxide, and carbon monoxide -- we always found out that there were problems long after they were in common use. But this time, we have a unique opportunity to study the anthropogenic implications of a new technology before it's even a problem."

If hydrogen indeed turns out to be bad for the ozone layer, should the transition to hydrogen-fueled cars be abandoned? Not necessarily, Tromp and Eiler claim.

"If it's the best way to provide a new energy source for our needs, then we can, and probably should, do it," Tromp says.

Eiler adds, "If we had had perfect foreknowledge of the effects of carbon dioxide a hundred years ago, would we have abandoned the internal combustion engine? Probably not. But we might have begun the process of controlling CO2 emissions earlier."

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Astronomers "weigh" pulsar's planets

For the first time, the planets orbiting a pulsar have been "weighed" by precisely measuring variations in the time it takes them to complete an orbit, according to a team of astronomers from the California Institute of Technology and Pennsylvania State University.

Reporting at the summer meeting of the American Astronomical Society, Caltech postdoctoral researcher Maciej Konacki and Penn State astronomy professor Alex Wolszczan announced today that the masses of two of the three known planets orbiting a rapidly spinning pulsar 1,500 light-years away in the constellation Virgo have been successfully measured. The planets are 4.3 and 3.0 times the mass of Earth, with an error of 5 percent.

The two measured planets are nearly in the same orbital plane. If the third planet is co-planar with the other two, it is about twice the mass of the moon. These results provide compelling evidence that the planets must have evolved from a disk of matter surrounding the pulsar, in a manner similar to that envisioned for planets around sun-like stars, the researchers say.

The three pulsar planets, with their orbits spaced in an almost exact proportion to the spacings between Mercury, Venus, and Earth, comprise a planetary system that is astonishingly similar in appearance to the inner solar system. They are clearly the precursors to any Earth-like planets that might be discovered around nearby sun-like stars by future space interferometers such as the Space Interferometry Mission or the Terrestrial Planet Finder.

"Surprisingly, the planetary system around the pulsar 1257+12 resembles our own solar system more than any extrasolar planetary system discovered around a sun-like star," Konacki said. "This suggests that planet formation is more universal than anticipated."

The first planets orbiting a star other than the sun were discovered by Wolszczan and Frail around an old, rapidly spinning neutron star, PSR B1257+12, during a large search for pulsars conducted in 1990 with the giant, 305-meter Arecibo radio telescope. Neutron stars are often observable as radio pulsars, because they reveal themselves as sources of highly periodic, pulse-like bursts of radio emission. They are extremely compact and dense leftovers from supernova explosions that mark the deaths of massive, normal stars.

The exquisite precision of millisecond pulsars offers a unique opportunity to search for planets and even large asteroids orbiting the pulsar. This "pulsar timing" approach is analogous to the well-known Doppler effect so successfully used by optical astronomers to identify planets around nearby stars. Essentially, the orbiting object induces a reflex motion in the pulsar, which perturbs the arrival times of its pulses. However, just like the Doppler method, which is sensitive only to stellar motions along the line of sight, pulsar timing can only detect pulse arrival-time variations caused by a pulsar wobble along that same line. The consequence of this limitation is that one can only measure a projection of the planetary motion onto the line of sight and cannot determine the true size of the orbit.

Soon after the discovery of the planets around PSR 1257+12, astronomers realized that the heavier two must interact gravitationally in a measurable way, because of a near 3:2 commensurability of their 66.5- and 98.2-day orbital periods. As the magnitude and the exact pattern of perturbations resulting from this near-resonance condition depend on a mutual orientation of planetary orbits and on planet masses, one can, in principle, extract this information from precise timing observations.
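The near 3:2 commensurability can be verified directly from the two orbital periods quoted above:

```python
# Check the near 3:2 resonance of the two heavier pulsar planets.
p_inner, p_outer = 66.5, 98.2   # orbital periods, in days

ratio = p_outer / p_inner
print(round(ratio, 3))          # ~1.477, within about 1.6% of exactly 3/2
```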

Wolszczan showed the feasibility of this approach in 1994 by demonstrating the presence of the predicted perturbation effect in the timing of the planet-hosting pulsar. In fact, it was the first observation of such an effect beyond the solar system, where resonances between planets and planetary satellites are commonly observed. In recent years, astronomers have also detected examples of gravitational interactions between giant planets around normal stars.

Konacki and Wolszczan applied the resonance-interaction technique to the microsecond-precision timing observations of PSR B1257+12 made between 1990 and 2003 with the giant Arecibo radio telescope. In a paper to appear in the Astrophysical Journal Letters, they demonstrate that the planetary perturbation signature detectable in the timing data is large enough to obtain surprisingly accurate estimates of the masses of the two planets orbiting the pulsar.

The measurements accomplished by Konacki and Wolszczan remove the possibility that the pulsar planets are much more massive, which would be the case if their orbits were oriented more "face-on" with respect to the sky. In fact, these results represent the first unambiguous identification of Earth-sized planets created from a protoplanetary disk beyond the solar system.

Wolszczan said, "This finding and the striking similarity of the appearance of the pulsar system to the inner solar system provide an important guideline for planning the future searches for Earth-like planets around nearby stars."

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Why Fearful Animals Flee—or Freeze

PASADENA, Calif. – In most old-fashioned black-and-white horror flicks, it always seems there's some hapless hero or heroine who gets caught up in a life-threatening situation. Instead of making the obvious choice--to run like hell--he or she freezes in place. That decision, alas, leads to the character's ultimate demise.

While their fate was determined by bad scriptwriting, scientists already know that in real life, environment and experience influence defensive behaviors. Less understood are the neural circuits that determine such decisions. Now, in an article in the May 1 issue of the Journal of Neuroscience, researchers at the California Institute of Technology report an experimental mouse model that allows them to map and manipulate the neural circuits involved in such innate behaviors as fear.

Raymond Mongeau, Gabriel A. Miller, Elizabeth Chiang, and David J. Anderson, in work performed at Caltech, manipulated either a flight or freeze reaction in mice through the use of an ultrasonic auditory stimulus, and further, were able to alter the mouse's behavior by making simple changes in the animal's environment. They also found that flight and freezing are negatively correlated, suggesting that a kind of competition exists between these alternative defensive motor responses. Finally, they have begun to map the potential circuitry in the brain that controls this competition.

"Fear and anxiety are important emotions, especially in this day and age," says Anderson, a Caltech professor of biology and an investigator with the Howard Hughes Medical Institute. "We know a lot about how the brain processes fear that is learned, but much less is known about innate or unlearned fear. Our results open the way to better understanding how the brain processes innately fearful stimuli, and how and where anxiety affects the brain to influence behavior."

Using the ultrasonic cue, the researchers were able to predict and manipulate the animal's reaction to a fearful situation. They found that mice exposed to the ultrasonic stimulus in their home cage (a familiar environment) predominantly displayed a flight response. Those placed in a new cage (an unfamiliar environment), or treated with foot shocks the previous day, primarily displayed freezing and less flight.

Anderson noted that in previous fear "conditioning" experiments, where mice learn to fear a neutral tone associated with a foot shock, the animals show only freezing behavior and never flight, even though in the wild, flight is a normal and important fear response to predators. This suggests that the ultrasonic stimulus used by Anderson and colleagues taps into brain circuits that mediate natural, or innate, fear responses that include flight as well as freezing.

What causes the shift from flight to freezing behavior? Probably high anxiety and stress, say the authors, caused by an unfamiliar environment or the foot shocks. The researchers suggest that freezing requires a higher threshold level of anticipatory fear (the heroine inside a dark, spooky house) before it can be elicited by the ultrasound.

Most brain researchers believe the brain uses a hierarchy of neural systems to determine which defensive behaviors, like flight or freezing, to deploy. These range from an evolutionarily older neural system that generates "quick and dirty" defensive strategies to more recently evolved systems that produce slower but more sophisticated reactions. These systems are known to interact, but the neural mechanisms that decide which response wins out are not understood.

One of the goals of their work was to map the brain regions that control the behaviors triggered by the fear stimulus, to observe whether any change in brain activity correlated with the different defensive behaviors. They achieved this, all the way down to the resolution of a single neuron, by mapping the expression pattern of the c-FOS gene, a so-called "immediate early gene" that is turned on when neurons are excited. The switching on of the c-FOS gene can therefore be used as an indication of neuronal activation.

A map of the c-FOS expression patterns during flight vs. freezing revealed that mice displaying freezing behavior had neural activity in different regions of the brain than those that fled. Some of these regions were previously known to inhibit each other, providing a possible explanation for the apparent competition between flight and freezing observed in the intact animal.

Anderson notes that more work needs to be done to pin down where and how anxiety modifies defensive behavior. "This system may also provide a useful model for understanding the neural substrates of human fear disorders, like panic and anxiety," says Anderson, "as well as provide a model for developing drugs to treat them."

Contact: Mark Wheeler (626) 395-8733 wheel@caltech.edu

Visit the Caltech Media Relations Website at http://pr.caltech.edu/media

###

Writer: 
MW

Caltech biology professor to direct research program on brain signaling

California Institute of Technology biologist Mary Kennedy has been named project director for a $4 million federal project grant to better understand how the brain processes signals. Progress could lead to new insights into how drugs can be better custom-designed to treat a host of neurodegenerative disorders, mental illnesses, and disabilities, including Alzheimer's disease, depression, and schizophrenia.

The funding will come from the National Institute of Neurological Disorders and Stroke, a component of the National Institutes of Health (NIH). According to Kennedy, who is the Allen and Lenabelle Davis Professor of Biology at Caltech, the five-year project is innovative because it will integrate advanced computational methods with experiments to better analyze and model calcium signaling in the brain. In addition to Kennedy's research group at Caltech, the program will involve research teams from the Salk Institute, Cold Spring Harbor Laboratory, and the University of North Carolina.

"Another aspect of this research that is quite new is the application of these kinds of methods at the molecular level," she says. "This is important because, for about 20 years or so, it wasn't really possible to be rigorously quantitative about the biochemical functions of synapses at the molecular level. This was because we didn't know all the molecules that were involved."

With new advances, especially the completion of the Human Genome Project, it is now time for a new phase in research on the molecular mechanisms of brain functions, according to Kennedy. In addition to basic improvements in knowledge of how brain signaling works, the research program could also lead indirectly to pharmaceutical advances.

"Neurological and mental diseases result, in part, from derangements in regulation of synaptic transmission," Kennedy says. "In a type of neuronal structure known as dendritic spines -- so named because they sort of look like spines -- calcium influx through a certain type of receptor is a principal regulator of synaptic strength, or plasticity. Thus, calcium can lead to increases or decreases, of varying durations, in synaptic strength."

The program includes four projects and a core that will provide new computer software. One project will use a computer program called MCell to develop and test models of calcium dynamics in spines. Another will rely on microscopy to study the organization of calcium sources and sinks in spines, as well as calcium distribution. A third, which will be centered in Kennedy's lab, will develop and test kinetic models of enzymes regulated by calcium; and a fourth will use advanced imaging techniques to measure calcium signals and their regulation in individual spines.
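The kind of kinetic modeling the third project describes can be illustrated with a minimal sketch: calcium entering a dendritic spine and being cleared by first-order extrusion, integrated with a simple Euler step. The influx pulse and rate constant below are hypothetical placeholders, not parameters from the program.

```python
# A minimal sketch of a kinetic model of spine calcium: dC/dt = influx(t) - k_clear * C,
# integrated with a forward-Euler step. All parameter values are hypothetical
# placeholders for illustration, not measured quantities.

def simulate_spine_calcium(influx, k_clear=50.0, dt=1e-4, steps=2000):
    """Integrate dC/dt = influx(t) - k_clear * C; return the final concentration."""
    c = 0.0
    for i in range(steps):
        t = i * dt
        c += (influx(t) - k_clear * c) * dt
    return c

# A brief influx pulse during the first 10 ms, then pure clearance:
# calcium rises during the pulse and decays back toward baseline afterward.
pulse = lambda t: 100.0 if t < 0.01 else 0.0
final = simulate_spine_calcium(pulse)
print(f"{final:.6f}")
```

Real models such as those built with MCell track individual molecules and spatial geometry; this lumped, single-compartment version only conveys the basic influx-versus-clearance logic.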

The program will be highly interdisciplinary, Kennedy says. Three physicists will be among the team members in her lab. Work at the other institutions, as well, will involve specialists from disciplines outside biology.

"Once we have a better quantitative understanding of signaling, it will be possible to ask much 'cleaner' questions about what kind of drugs will treat certain conditions, and under what circumstances."

###

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

New Insight Into How Flies Fly

PASADENA, Calif.--How does a fly fly, and why should we care? To the first question, says Michael Dickinson, a professor of bioengineering at the California Institute of Technology, the short answer is: differently from what we have long thought, and he and his colleagues used a dynamically scaled flapping robot (aka Robofly), a free-flight arena (aka Fly-O-Rama), and a 3D, infrared visual flight simulator (aka Fly-O-Vision) to prove it.

And we should care, says Dickinson, because the simple motion of a flying fly links a series of fundamental and complex processes within both the physical and biological sciences. Studying a fly may eventually lead to a model that will provide insight into the behavior and robustness of complex systems in general, and, for roboticists, may help them in the design of flying robots that mimic nature.

In a paper entitled "The Aerodynamics of Free Flight Maneuvers in Drosophila," Steven Fry of the University of Zurich, Rosalyn Sayaman, a Caltech research assistant, and Dickinson show how tiny insects use their wings to generate enough torque to overcome inertia, and not--as conventional wisdom has held--friction. The paper will appear in the April 18 issue of the journal Science.

Flies and other dipterans (members of the insect order that includes houseflies, hoverflies, and fruit flies) are capable of making rapid 90-degree turns, called saccades, at "extraordinary" speeds, says Dickinson--less than 50 thousandths of a second. That's faster, he says, "than a human eye can blink." To make the turn, a fly must generate enough torque, or twisting force, to offset two forces working against it: the inertia of its own body and the viscous friction of the air.

Until now, it's always been assumed that viscosity, a resistance to flow, is the enemy for small critters, while inertia is the bane of larger animals like birds. But the theory has never been tested.
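The inertia-versus-friction question can be made concrete with a back-of-the-envelope rigid-body estimate: given a body's moment of inertia and a turn deadline, how much constant torque does a 90-degree rotation from rest require? The moment of inertia below is a made-up order-of-magnitude stand-in, not a measured Drosophila value.

```python
import math

# Rigid-body estimate: starting from rest under constant torque tau,
# theta = 0.5 * (tau / I) * t**2, so tau = 2 * theta * I / t**2.
# The moment of inertia is a hypothetical stand-in, not a measured value.

def torque_for_turn(theta_rad, inertia, duration_s):
    """Constant torque needed to rotate theta_rad from rest in duration_s."""
    return 2.0 * theta_rad * inertia / duration_s**2

I_BODY = 5e-13      # kg*m^2, hypothetical body moment of inertia
T_SACCADE = 0.05    # s, the ~50-millisecond saccade cited above
tau = torque_for_turn(math.pi / 2, I_BODY, T_SACCADE)
print(f"{tau:.2e} N*m")
```

The inertia-dominated picture the researchers tested says torques of roughly this form, rather than torques balancing air friction, set the turn's time course.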

To study the aerodynamics of active flight maneuvers, the researchers employed infrared, three-dimensional, high-speed video (the Fly-O-Vision) to capture the fruit fly, Drosophila melanogaster, performing saccades in free flight. The animals were released in a large, enclosed arena (the Fly-O-Rama), and lured toward a vertical cylinder laced with a drop of vinegar. As the flies approach the cylinder, it looms within their field of view, triggering a rapid turn that helps the fly avoid a collision.

Many flies performed saccades within the intersecting fields of view of the three cameras, which allowed the researchers to film the turn, measure wing and body position throughout the maneuver, and calculate the velocity of each fly's path.

The improved resolution of the 3D video showed that, despite its small size and slow speed (relative to other animals), the fly performed a banked turn, similar to those observed in larger fly species: first accelerating, then slowing as it changed heading, then accelerating again at the end of the turn. This suggests that the timing and velocity of the small fly's turn are dominated by body inertia, not friction.

To see if the measured patterns of wing motion were sufficient to explain the saccades, the researchers played the sequences through a dynamically scaled robotic model (you guessed it, Robofly) to measure how the aerodynamic forces vary over time. They found that the torque time course they calculated from the fly's body morphology and body motion in the video matched "amazingly well," says Dickinson, with that derived from the wing motion of the robot. These results, he notes, further support the notion that even in small insects the torques created by the wings act primarily to overcome inertia, not friction.

Although these experiments were performed on tiny fruit flies, says Dickinson, the results apply to nearly all insects, because the importance of inertia relative to friction only increases with the size of the animal. The results also provide a basis for future research on the neural and mechanical basis of insect flight, and, for roboticists, may offer insights for the design of biomimetic flying devices. It may also yield a little respect for the common fly. As Rosalyn Sayaman puts it on her web page, "I now love flies. I used to just shoo and swat. Now, I can't even swat anymore."

Note to Editors: Video and still photos are available.

Contact: Mark Wheeler (626) 395-8733 wheel@caltech.edu

Visit the Caltech Media Relations Website at http://pr.caltech.edu/media

###

Writer: 
MW

Astronomers find new evidence about universe's heaviest phase of star formation

New distance measurements from faraway galaxies further strengthen the view that the strongest burst of star formation in the universe occurred about two billion years after the Big Bang.

Reporting in the April 17 issue of the journal Nature, California Institute of Technology astronomers Scott Chapman and Andrew Blain, along with their United Kingdom colleagues Ian Smail and Rob Ivison, provide the redshifts of 10 extremely distant galaxies, which strongly suggest that the most luminous galaxies ever detected were produced over a rather short period of time. Astronomers have long known that certain galaxies can be seen about a billion years after the Big Bang, but a relatively recent discovery of a type of extremely luminous galaxy -- one that is very faint in visible light, but much brighter at longer wavelengths -- is the key to the new results.

This type of galaxy was first found in 1997 using a new and much more sensitive camera for observing at submillimeter wavelengths (longer than the wavelengths of visible light, but somewhat shorter than radio waves). The camera was attached to the James Clerk Maxwell Telescope (JCMT), on Mauna Kea in Hawaii.

Submillimeter radiation is produced by warm galactic "dust" -- micron-sized solid particles similar to diesel soot that are interspersed between the stars in galaxies. Based on their unusual spectra, experts have thought it possible that these "submillimeter galaxies" could be found even closer in time to the Big Bang.

Because the JCMT cannot see details of the sky that are as fine as details seen by telescopes operating at visible and radio wavelengths, and because the submillimeter galaxies are very faint, researchers have had a hard time determining the precise locations of the submillimeter galaxies and measuring their distances. Without an accurate distance, it is difficult to tell how much energy such galaxies produce; and with no idea of how powerful they are, it is uncertain how important such galaxies are in the universe.

The new results combine the work of several instruments, including the Very Large Array in New Mexico (the world's most sensitive radio telescope), and one of the 10-meter telescopes at the W. M. Keck Observatory on Mauna Kea, which are the world's largest optical telescopes. These instruments first pinpointed the position of the submillimeter galaxies, and then measured their distances. Today's article in Nature reports the first 10 distances obtained.

The Keck telescope found the faint spectral signature of radiation that is emitted, at a single ultraviolet wavelength of 0.1215 micrometers (the Lyman-alpha line of hydrogen), by hydrogen gas excited either by a large number of hot, young stars or by the energy released as matter spirals into a black hole at the core of a galaxy. The radiation is detected at a longer, redder wavelength, having been Doppler shifted by the rapid expansion of the universe while the light traveled to Earth.
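This redshifting follows the standard relation lambda_observed = lambda_rest × (1 + z), where z is the redshift. The release does not quote the galaxies' redshifts, so the value in the sketch below is purely illustrative.

```python
# Cosmological redshift of the hydrogen Lyman-alpha line.
# lambda_obs = lambda_rest * (1 + z) is the standard relation;
# the redshift value used below is illustrative, not from the survey.

LYMAN_ALPHA_REST_UM = 0.1215  # rest wavelength, micrometers

def observed_wavelength(z, rest=LYMAN_ALPHA_REST_UM):
    """Wavelength at which a line emitted at `rest` is seen at redshift z."""
    return rest * (1.0 + z)

# At an illustrative redshift of z = 3, ultraviolet Lyman-alpha lands in
# visible light, where Keck's optical spectrographs can detect it.
print(round(observed_wavelength(3.0), 4))  # 0.486 micrometers
```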

All 10 of the submillimeter galaxies that were detected emitted the light that we see today when the universe was less than half its present age. The most distant produced its light only two billion years after the Big Bang (12 billion years ago). Thus, the submillimeter galaxies are now confirmed to be the most luminous type of galaxies in the universe, several hundred times more luminous than our Milky Way, and 10 trillion times more luminous than the sun.

It is likely that the formation of such extreme objects had to wait for galaxies of a certain size to grow out of an initially almost uniform universe and to become enriched with carbon, silicon, and oxygen from the first stars. The time when the submillimeter galaxies shone brightly can also provide information about how the sizes and makeup of galaxies developed at earlier times.

By detecting these galaxies, the Caltech astronomers have provided an accurate census of the most extreme galaxies in the universe at the peak of their activity and witnessed a period of star buildup more dramatic than anything yet seen in the Milky Way and nearby galaxies. Now that their distances are known accurately, other measurements can be made to investigate the details of their power source, and to find out what galaxies will result when their intense bursts of activity come to an end.

The James Clerk Maxwell Telescope is at http://www.jach.hawaii.edu/JACpublic/JCMT. The Very Large Array is at http://www.aoc.nrao.edu/vla/html. The Keck Observatory is at http://www.astro.caltech.edu/mirror/keck/index.html.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Discovery of giant planar Hall effect could herald a generation of "spintronics" devices

A basic discovery in magnetic semiconductors could result in a new generation of devices for sensors and memory applications -- and perhaps, ultimately, quantum computation -- physicists from the California Institute of Technology and the University of California at Santa Barbara have announced.

The new phenomenon, called the giant planar Hall effect, has to do with what happens when the spins of current-carrying electrons are manipulated. For several years scientists have been engaged in exploiting electron spin for the creation of a new generation of electronic devices --hence the term "spintronics" -- and the Caltech-UCSB breakthrough offers a new route to realizing such devices.

The term "spintronics" is used instead of "electronics" because the technology is based on a new paradigm, says Caltech physics professor Michael Roukes. Rather than merely using an electric current to make them work, spintronic devices will also rely on the magnetic orientation (or spin) of the electrons themselves. "In regular semiconductors, the spin freedom of the electrical current carriers does not play a role," says Roukes. "But in the magnetic semiconductors we've studied, the spin polarization -- that is, the magnetism -- of electrical current carriers is highly ordered. Consequently, it can act as an important factor in determining the current flow in the electrical devices."

In the naturally unpolarized state, there is no particular order between one electron's spin and its neighbor's. If the spins are aligned, the result can be a change in resistance to current flow.

Such changes in resistance have long been known for metals, but the current research is the first time that semiconductor material has been constructed in such a way that spin-charge interaction is manifested as a very dramatic change in resistivity. The Caltech-UCSB team managed to accomplish this by carefully preparing a ferromagnetic semiconductor material made of gallium manganese arsenide (GaMnAs). The widely used current technology, by contrast, employs sandwiched magnetic metal structures for magnetic storage.
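The angular signal behind a planar Hall measurement can be sketched with the textbook phenomenology for a ferromagnet with in-plane magnetization; the resistivity values below are hypothetical placeholders, not numbers from the GaMnAs work.

```python
import math

# Textbook planar Hall phenomenology: when the current and the in-plane
# magnetization make an angle phi, the transverse (Hall-like) resistivity is
#   rho_xy = (rho_parallel - rho_perpendicular) * sin(phi) * cos(phi).
# The resistivity values here are hypothetical, chosen only to show the shape.

def planar_hall_resistivity(phi_rad, rho_par=1.0, rho_perp=0.9):
    """Transverse resistivity for magnetization at angle phi to the current."""
    return (rho_par - rho_perp) * math.sin(phi_rad) * math.cos(phi_rad)

# The signal vanishes when the magnetization lies along or across the
# current and peaks at 45 degrees; switching between magnetization states
# is what a spintronic sensor or memory element would read out.
for deg in (0, 45, 90):
    print(deg, round(planar_hall_resistivity(math.radians(deg)), 4))
```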

"You have much more freedom with semiconductors than metals for two reasons," Roukes explains. "First, semiconductor material can be made compatible with the mainstream of semiconductor electronics; and second, there are certain phenomena in semiconductors that have no analogies in metals."

Practical applications of spintronics will likely include new paradigms in information storage, due to the superiority of such semiconductor materials to the currently available dynamic random access memory (or DRAM) chips. This is because the semiconductor spintronics would be "nonvolatile," meaning that once the spins were aligned, the system would be as robust as a metal bar that has been permanently magnetized.

The spintronics semiconductors could also conceivably be used in magnetic logic to replace transistors as switches in certain applications. In other words, spin alignment would be used as a logic gate for faster circuits with lower energy usage.

Finally, the technology could possibly be improved so that the quantum states of the spins themselves might be used for logic gates in future quantum computers. Several research teams have demonstrated quantum logic gates, but the setups are the size of an entire laboratory rather than chip-scale, and therefore still unsuitable for device integration. By contrast, a spintronics-based device might be constructed as a solid-state system that could be integrated into microchips.

A full description of the Caltech-UCSB team's work appeared in the March 14 issue of Physical Review Letters [Tang et al., Phys. Rev. Lett. 90, 107201 (2003)]. The article is available by subscription, but the main site can be accessed at http://prl.aps.org/. This discovery is also featured in the "News and Views" section of the forthcoming issue of Nature Materials.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT
