International Teams Set New Long-range Speed Record with Next-generation Internet Protocol

Scientists at the California Institute of Technology (Caltech) and the European Organization for Nuclear Research (CERN) have set a new Internet2 land speed record using the next-generation Internet protocol IPv6. The team sustained a single stream TCP rate of 983 megabits per second for more than one hour between the CERN facility in Geneva and Chicago, a distance of more than 7,000 kilometers. This is equivalent to transferring a full CD in 5.6 seconds.

The performance is remarkable because it overcomes two important challenges:

· IPv6 forwarding at gigabit-per-second speeds

· High-speed TCP performance across high-bandwidth, high-latency networks
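The second challenge arises because a single TCP stream must keep a full bandwidth-delay product of data in flight. As a rough illustration in Python, assuming a round-trip time of about 120 milliseconds for the Geneva-Chicago path (the RTT here is an assumption chosen for illustration, not a figure reported with the record):

```python
# Back-of-the-envelope: TCP window needed to sustain the record rate.
rtt_s = 0.120         # assumed round-trip time, Geneva-Chicago (illustrative)
rate_bps = 983e6      # sustained single-stream TCP rate, bits per second

bdp_bits = rate_bps * rtt_s        # bandwidth-delay product
bdp_mbytes = bdp_bits / 8 / 1e6    # the same quantity in megabytes

print(f"Required TCP window: {bdp_mbytes:.1f} MB")  # ~14.7 MB
# For comparison, a then-common 64 KB default window at this RTT would cap
# throughput near 64e3 * 8 / 0.120 bits/s, i.e., roughly 4 Mbit/s.
```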

This major step towards demonstrating how effectively IPv6 can be used should encourage scientists and engineers in many sectors of society to deploy the next-generation Internet protocol, the Caltech researchers say.

This latest record by Caltech and CERN is a further step in an ongoing research-and-development program to develop high-speed global networks as the foundation of next generation data-intensive grids. Caltech and CERN also hold the current Internet2 land speed record in the IPv4 class, where IPv4 is the traditional Internet protocol that carries 90 percent of the world's network traffic today. In collaboration with the Stanford Linear Accelerator Center (SLAC), Los Alamos National Laboratory, and the companies Cisco Systems, Level 3, and Intel, the team transferred one terabyte of data across 10,037 kilometers in less than one hour, from Sunnyvale, California, to Geneva, Switzerland. This corresponds to a sustained TCP rate of 2.38 gigabits per second for more than one hour.
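The IPv4 figures are straightforward to verify with a quick sanity check in Python:

```python
# Sanity check on the IPv4 record: one terabyte at 2.38 Gbit/s.
data_bits = 1e12 * 8     # one terabyte, expressed in bits
rate_bps = 2.38e9        # sustained TCP rate, bits per second

transfer_s = data_bits / rate_bps
print(f"Transfer time: {transfer_s / 60:.0f} minutes")  # ~56 minutes, under the hour
```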

Multi-gigabit-per-second IPv4 and IPv6 end-to-end network performance will lead to new research and business models. People will be able to form "virtual organizations" of planetary scale, sharing in a flexible way their collective computing and data resources. In particular, this is vital for projects on the frontiers of science and engineering, projects such as particle physics, astronomy, bioinformatics, global climate modeling, and seismology.

Harvey Newman, professor of physics at Caltech, said, "This is a major milestone towards our dynamic vision of globally distributed analysis in data-intensive, next-generation high-energy physics (HEP) experiments. Terabyte-scale data transfers on demand, by hundreds of small groups and thousands of scientists and students spread around the world, are a basic element of this vision; one that our recent records show is realistic. IPv6, with its increased address space and security features, is vital for the future of global networks, and especially for organizations such as ours, where scientists from all world regions are building computing clusters on an increasing scale, and where we use computers, including wireless laptops and mobile devices, in all aspects of our daily work.

"In the future, the use of IPv6 will allow us to avoid network address translations (NAT) that tend to impede the use of video-advanced technologies for real-time collaboration," Newman added. "These developments also will empower the broader research community to use peer-to-peer and other advanced grid architectures in support of their computationally intensive scientific goals."

Olivier Martin, head of external networking at CERN and manager of the DataTAG project said, "These new records clearly demonstrate the maturity of IPv6 protocols and the availability of suitable off-the-shelf commercial products. They also establish the feasibility of transferring very large amounts of data using a single TCP/IP stream rather than multiple streams as has been customarily done until now by most researchers as a quick fix to TCP/IP's congestion avoidance algorithms. I am optimistic that the various research groups working on this issue will now quickly release new TCP/IP stacks having much better resilience to packet losses on long-distance multi-gigabit-per-second paths, thus allowing similar or even better records to be established across shared Internet backbones."

The team used the optical networking capabilities of the LHCnet, DataTAG, and StarLight and gratefully acknowledges support from the DataTAG project sponsored by the European Commission (EU Grant IST-2001-32459), the DOE Office of Science, High Energy and Nuclear Physics Division (DOE Grants DE-FG03-92-ER40701 and DE-FC02-01ER25459), and the National Science Foundation (Grants ANI 9730202, ANI-0230967, and PHY-0122557).

About the California Institute of Technology (Caltech):

With an outstanding faculty, including four Nobel laureates, and such off-campus facilities as Palomar Observatory and the W. M. Keck Observatory, the California Institute of Technology is one of the world's major research centers. The Institute also conducts instruction in science and engineering for a student body of approximately 900 undergraduates and 1,000 graduate students who maintain a high level of scholarship and intellectual achievement. Caltech's 124-acre campus is situated in Pasadena, California, a city of 135,000 at the foot of the San Gabriel Mountains, approximately 30 miles inland from the Pacific Ocean and 10 miles northeast of the Los Angeles Civic Center. Caltech is an independent, privately supported university, and is not affiliated with either the University of California system or the California State Polytechnic universities. More information is available at http://www.caltech.edu.

About CERN:

CERN, the European Organization for Nuclear Research, has its headquarters in Geneva, Switzerland. At present, its member states are Austria, Belgium, Bulgaria, Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland, and the United Kingdom. Israel, Japan, the Russian Federation, the United States of America, Turkey, the European Commission, and UNESCO have observer status. For more information, see http://www.cern.ch.

About the European Union DataTAG project:

DataTAG is a project co-funded by the European Union, the U.S. Department of Energy, and the National Science Foundation. It is led by CERN together with four other partners. The project brings together the following leading European research agencies: Italy's Istituto Nazionale di Fisica Nucleare (INFN), France's Institut National de Recherche en Informatique et en Automatique (INRIA), the UK's Particle Physics and Astronomy Research Council (PPARC), and Holland's University of Amsterdam (UvA). The DataTAG project is very closely associated with the European Union DataGrid project, the largest grid project in Europe, also led by CERN. For more information, see http://www.datatag.org.

 

Writer: 
Robert Tindol

Hydrogen economy might impact Earth's stratosphere, study shows

According to conventional wisdom, hydrogen-fueled cars are environmentally friendly because they emit only water vapor -- a naturally abundant atmospheric gas. But leakage of the hydrogen gas that can fuel such cars could cause problems for the upper atmosphere, new research shows.

In an article appearing this week in the journal Science, researchers from the California Institute of Technology report that the leaked hydrogen gas that would inevitably result from a hydrogen economy, if it accumulates, could indirectly cause as much as a 10-percent decrease in atmospheric ozone. The researchers are physics research scientist Tracey Tromp, assistant professor of geochemistry John Eiler, planetary science professor Yuk Yung, planetary science research scientist Run-Lie Shia, and Jet Propulsion Laboratory scientist Mark Allen.

If hydrogen were to replace fossil fuel entirely, the researchers estimate that 60 to 120 trillion grams of hydrogen would be released each year into the atmosphere, assuming a 10-to-20-percent loss rate due to leakage. This is four to eight times as much hydrogen as is currently released into the atmosphere by human activity, and would result in a doubling or tripling of hydrogen inputs to the atmosphere from all sources, natural and human.
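The arithmetic behind these estimates is simple; here is a short Python sketch using the total fuel demand implied by the figures above (about 600 trillion grams of hydrogen per year, the value consistent with both endpoints):

```python
# Leakage scenarios implied by the study's figures (Tg = trillion grams).
implied_h2_demand_tg = 600.0   # annual hydrogen use if it fully replaced fossil fuel

for loss_rate in (0.10, 0.20):
    leaked_tg = implied_h2_demand_tg * loss_rate
    print(f"{loss_rate:.0%} leakage -> {leaked_tg:.0f} Tg/yr released")
# 10% -> 60 Tg/yr and 20% -> 120 Tg/yr, i.e., four to eight times the
# roughly 15 Tg/yr implied for current human releases by that comparison.
```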

Because molecular hydrogen freely moves up and mixes with stratospheric air, the result would be the creation of additional water at high altitudes and, consequently, an increased moistening of the stratosphere. This in turn would result in cooling of the lower stratosphere and disturbance of ozone chemistry, which depends on a chain of chemical reactions involving hydrochloric acid and chlorine nitrate on water ice.

The estimates of potential damage to stratospheric ozone levels are based on an atmospheric modeling program that tests the various scenarios that might result, depending on how much hydrogen ends up in the stratosphere from all sources, both natural and anthropogenic.

Ideally, a hydrogen fuel-cell vehicle has no environmental impact. Energy is produced by combining hydrogen with oxygen pulled from the atmosphere, and the tailpipe emission is water. The hydrogen fuel could come from a number of sources (Iceland recently started pulling it out of the ground). Nuclear power could be used to generate the electricity needed to split water; in principle, that electricity could also be derived from renewable sources such as solar or wind power.

By comparison, the internal combustion engine uses fossil fuels and produces many pollutants, including soot, noxious nitrogen and sulfur gases, and the "greenhouse gas" carbon dioxide. While a hydrogen fuel-cell economy would almost certainly improve urban air quality, it has the potential for unexpected consequences due to the inevitable leakage of hydrogen from cars, hydrogen production facilities, and the transportation of the fuel.

Uncertainty remains about the effects on the atmosphere because scientists still have a limited understanding of the hydrogen cycle. At present, it seems likely such emissions could accumulate in the air. Such a build-up would have several consequences, chief of which would be a moistening and cooling of the upper atmosphere and, indirectly, destruction of ozone.

In this respect, hydrogen would be similar to the chlorofluorocarbons (once the standard substance used for air conditioning and refrigeration), which were intended to be contained within their devices, but which in practice leaked into the atmosphere and attacked the stratospheric ozone layer.

The authors of the Science article say that the current situation is unique in that society has the opportunity to understand the potential environmental impact well ahead of the growth of a hydrogen economy. This contrasts with the cases of atmospheric carbon dioxide, methyl bromide, CFCs, and lead, all of which were released into the environment by humans long before their consequences were understood.

"We have an unprecedented opportunity this time to understand what we're getting into before we even switch to the new technology," says Tromp, the lead author. "It won't be like the case with the internal-combustion engine, when we started learning the effects of carbon dioxide decades later."

The question of whether or not hydrogen is bad for the environment hinges on whether the planet has the ability to consume excess anthropogenic hydrogen, explains Eiler. "This man-made hydrogen will either be absorbed in the soil -- a process that is still poorly understood but likely free of environmental consequences -- or react with other compounds in the atmosphere.

"The balance of these two processes will be key to the outcome," says Eiler. "If soils dominate, a hydrogen economy might have little effect on the environment. But if the atmosphere is the big player, the stratospheric cooling and destruction of ozone modeled in this Science paper are more likely to occur.

"Determining which of these two processes dominates should be a solvable problem," states Eiler, whose research group is currently exploring the natural budget of hydrogen using new isotopic techniques.

"Understanding the effects of hydrogen on the environment now should help direct the technologies that will be the basis of a hydrogen economy," Tromp adds. "If hydrogen emissions present an environmental hazard, then recognizing that hazard now can help guide investments in technologies to favor designs that minimize leakage.

"On the other hand, if hydrogen is shown to be environmentally friendly in every respect, then designers could pursue the most cost-effective technologies and potentially save billions in needless safeguards."

"Either way, it's good for society that we have an emission scenario at this stage," says Eiler. "In past cases -- with chlorofluorocarbons, nitrogen oxides, methane, methyl bromide, carbon dioxide, and carbon monoxide -- we always found out that there were problems long after they were in common use. But this time, we have a unique opportunity to study the anthropogenic implications of a new technology before it's even a problem."

If hydrogen indeed turns out to be bad for the ozone layer, should the transition to hydrogen-fueled cars be abandoned? Not necessarily, Tromp and Eiler claim.

"If it's the best way to provide a new energy source for our needs, then we can, and probably should, do it," Tromp says.

Eiler adds, "If we had had perfect foreknowledge of the effects of carbon dioxide a hundred years ago, would we have abandoned the internal combustion engine? Probably not. But we might have begun the process of controlling CO2 emissions earlier."

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Astronomers "weigh" pulsar's planets

For the first time, the planets orbiting a pulsar have been "weighed" by precisely measuring variations in the time it takes them to complete an orbit, according to a team of astronomers from the California Institute of Technology and Pennsylvania State University.

Reporting at the summer meeting of the American Astronomical Society, Caltech postdoctoral researcher Maciej Konacki and Penn State astronomy professor Alex Wolszczan announced today that the masses of two of the three known planets orbiting a rapidly spinning pulsar 1,500 light-years away in the constellation Virgo have been successfully measured. The planets are 4.3 and 3.0 times the mass of Earth, with an error of 5 percent.

The two measured planets are nearly in the same orbital plane. If the third planet is co-planar with the other two, it is about twice the mass of the moon. These results provide compelling evidence that the planets must have evolved from a disk of matter surrounding the pulsar, in a manner similar to that envisioned for planets around sun-like stars, the researchers say.

The three pulsar planets, with their orbits spaced in an almost exact proportion to the spacings between Mercury, Venus, and Earth, comprise a planetary system that is astonishingly similar in appearance to the inner solar system. They are clearly the precursors to any Earth-like planets that might be discovered around nearby sun-like stars by the future space interferometers such as the Space Interferometry Mission or the Terrestrial Planet Finder.

"Surprisingly, the planetary system around the pulsar 1257+12 resembles our own solar system more than any extrasolar planetary system discovered around a sun-like star," Konacki said. "This suggests that planet formation is more universal than anticipated."

The first planets orbiting a star other than the sun were discovered by Wolszczan and Dale Frail around an old, rapidly spinning neutron star, PSR B1257+12, during a large search for pulsars conducted in 1990 with the giant, 305-meter Arecibo radio telescope. Neutron stars are often observable as radio pulsars because they reveal themselves as sources of highly periodic, pulse-like bursts of radio emission. They are extremely compact and dense leftovers from supernova explosions that mark the deaths of massive, normal stars.

The exquisite precision of millisecond pulsars offers a unique opportunity to search for planets and even large asteroids orbiting the pulsar. This "pulsar timing" approach is analogous to the well-known Doppler method so successfully used by optical astronomers to identify planets around nearby stars. Essentially, the orbiting object induces a reflex motion in the pulsar, which perturbs the arrival times of the pulses. However, just like the Doppler method, which is sensitive only to stellar motions along the line of sight, pulsar timing can detect only those pulse arrival-time variations caused by a pulsar wobble along that same line. The consequence of this limitation is that one can measure only a projection of the planetary motion onto the line of sight and cannot determine the true size of the orbit.

Soon after the discovery of the planets around PSR 1257+12, astronomers realized that the heavier two must interact gravitationally in a measurable way, because of a near 3:2 commensurability of their 66.5- and 98.2-day orbital periods. Because the magnitude and the exact pattern of the perturbations resulting from this near-resonance condition depend on the mutual orientation of the planetary orbits and on the planet masses, one can, in principle, extract this information from precise timing observations.
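The near-commensurability is easy to verify from the published periods; a minimal check in Python:

```python
# Near 3:2 commensurability of the two heavier planets' orbital periods.
p_inner_days = 66.5
p_outer_days = 98.2

ratio = p_outer_days / p_inner_days
print(f"Period ratio: {ratio:.4f} (an exact 3:2 resonance would give 1.5000)")
# ~1.4767 -- close enough to 3:2 that the planets' mutual perturbations
# build up a slowly repeating, measurable signature in the pulse timing.
```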

Wolszczan showed the feasibility of this approach in 1994 by demonstrating the presence of the predicted perturbation effect in the timing of the planet-bearing pulsar. It was the first observation beyond the solar system of an effect that, within the solar system, is commonly seen in resonances between planets and planetary satellites. In recent years, astronomers have also detected examples of gravitational interactions between giant planets around normal stars.

Konacki and Wolszczan applied the resonance-interaction technique to the microsecond-precision timing observations of PSR B1257+12 made between 1990 and 2003 with the giant Arecibo radio telescope. In a paper to appear in the Astrophysical Journal Letters, they demonstrate that the planetary perturbation signature detectable in the timing data is large enough to obtain surprisingly accurate estimates of the masses of the two planets orbiting the pulsar.

The measurements accomplished by Konacki and Wolszczan remove the possibility that the pulsar planets are much more massive, which would be the case if their orbits were oriented more "face-on" with respect to the sky. In fact, these results represent the first unambiguous identification of Earth-sized planets created from a protoplanetary disk beyond the solar system.

Wolszczan said, "This finding and the striking similarity of the appearance of the pulsar system to the inner solar system provide an important guideline for planning the future searches for Earth-like planets around nearby stars."

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Why Fearful Animals Flee—or Freeze

PASADENA, Calif. – In most old-fashioned black-and-white horror flicks, it always seems there's some hapless hero or heroine who gets caught up in a life-threatening situation. Instead of making the obvious choice--to run like hell--the character freezes in place. That decision, alas, leads to his or her ultimate demise.

While their fate was determined by bad scriptwriting, scientists already know that in real life, environment and experience influence defensive behaviors. Less understood are the neural circuits that determine such decisions. Now, in an article in the May 1 issue of the Journal of Neuroscience, researchers at the California Institute of Technology report an experimental mouse model with which they can map and manipulate the neural circuits involved in such innate behaviors as fear.

Raymond Mongeau, Gabriel A. Miller, Elizabeth Chiang, and David J. Anderson, in work performed at Caltech, manipulated either a flight or freeze reaction in mice through the use of an ultrasonic auditory stimulus, and further, were able to alter the mouse's behavior by making simple changes in the animal's environment. They also found that flight and freezing are negatively correlated, suggesting that a kind of competition exists between these alternative defensive motor responses. Finally, they have begun to map the potential circuitry in the brain that controls this competition.

"Fear and anxiety are important emotions, especially in this day and age," says Anderson, a Caltech professor of biology and an investigator with the Howard Hughes Medical Institute. "We know a lot about how the brain processes fear that is learned, but much less is known about innate or unlearned fear. Our results open the way to better understanding how the brain processes innately fearful stimuli, and how and where anxiety affects the brain to influence behavior."

Using the ultrasonic cue, the researchers were able to predict and manipulate the animal's reaction to a fearful situation. They found that mice exposed to the ultrasonic stimulus in their home cage (a familiar environment) predominantly displayed a flight response. Those placed in a new cage (an unfamiliar environment), or treated with foot shocks the previous day, primarily displayed freezing and less flight.

Anderson noted that in previous fear "conditioning" experiments, where mice learn to fear a neutral tone associated with a footshock, the animals show only freezing behavior and never flight, even though in the wild, flight is a normal and important fear response to predators. This suggests that the ultrasonic stimulus used by Anderson and colleagues is tapping into brain circuits that mediate natural, or innate, fear responses that include flight as well as freezing.

What causes the shift from flight to freezing behavior? Probably high anxiety and stress, say the authors, caused by an unfamiliar environment or the foot shocks. The researchers suggest that freezing requires a higher threshold level of anticipatory fear (the heroine inside a dark, spooky house) before it can be elicited by the ultrasound.

Most brain researchers believe the brain uses a hierarchy of neural systems to determine which defensive behaviors, like flight or freezing, to use. These range from an evolutionarily older neural system that generates "quick and dirty" defensive strategies, to more recently evolved systems that produce slower but more sophisticated reactions. These systems are known to interact, but the neural mechanisms that decide which response wins out are not understood.

One of the goals of their work was to map the brain regions that control the behaviors triggered by the fear stimulus, to observe whether any change in brain activity correlated with the different defensive behaviors. They achieved this, all the way down to the resolution of a single neuron, by mapping the expression pattern of the c-FOS gene, a so-called "immediate early gene" that is turned on when neurons are excited. The switching on of the c-FOS gene can therefore be used as an indication of neuronal activation.

A map of the c-FOS expression patterns during flight vs. freezing revealed that mice displaying freezing behavior had neural activity in different regions of the brain than those that fled. Some of these regions were previously known to inhibit each other, providing a possible explanation for the apparent competition between flight and freezing observed in the intact animal.

Anderson notes that more work needs to be done to pin down where and how anxiety modifies defensive behavior. "This system may also provide a useful model for understanding the neural substrates of human fear disorders, like panic and anxiety," says Anderson, "as well as provide a model for developing drugs to treat them."

Contact: Mark Wheeler (626) 395-8733 wheel@caltech.edu

Visit the Caltech Media Relations Website at http://pr.caltech.edu/media

###

Writer: 
MW

Caltech biology professor to direct research program on brain signaling

California Institute of Technology biologist Mary Kennedy has been named project director for a $4 million federal project grant to better understand how the brain processes signals. Progress could lead to new insights into how drugs can be better custom-designed to treat a host of neurodegenerative disorders, mental illnesses, and disabilities, including Alzheimer's disease, depression, and schizophrenia.

The funding will come from the National Institute of Neurological Disorders and Stroke, a component of the National Institutes of Health (NIH). According to Kennedy, who is the Allen and Lenabelle Davis Professor of Biology at Caltech, the five-year project is innovative because it will integrate advanced computational methods with experiments to better analyze and model calcium signaling in the brain. In addition to Kennedy's research group at Caltech, the program will involve research teams from the Salk Institute, Cold Spring Harbor Laboratory, and the University of North Carolina.

"Another aspect of this research that is quite new is the application of these kinds of methods at the molecular level," she says. "This is important because, for about 20 years or so, it wasn't really possible to be rigorously quantitative about the biochemical functions of synapses at the molecular level. This was because we didn't know all the molecules that were involved."

With new advances, especially the completion of the Human Genome Project, it is now time for a new phase in research on the molecular mechanisms of brain functions, according to Kennedy. In addition to basic improvements in knowledge of how brain signaling works, the research program could also lead indirectly to pharmaceutical advances.

"Neurological and mental diseases result, in part, from derangements in regulation of synaptic transmission," Kennedy says. "In a type of neuronal structure known as dendritic spines -- so named because they sort of look like spines -- calcium influx through a certain type of receptor is a principal regulator of synaptic strength, or plasticity. Thus, calcium can lead to increases or decreases, of varying durations, in synaptic strength."

The program includes four projects and a core that will provide new computer software. One project will use a computer program called MCell to develop and test models of calcium dynamics in spines. Another will rely on microscopy to study the organization of calcium sources and sinks in spines, as well as calcium distribution. A third, which will be centered in Kennedy's lab, will develop and test kinetic models of enzymes regulated by calcium; and a fourth will use advanced imaging techniques to measure calcium signals and their regulation in individual spines.

The program will be highly interdisciplinary, Kennedy says. Three physicists will be among the team members in her lab. Work at the other institutions, as well, will involve specialists from disciplines outside biology.

"Once we have a better quantitative understanding of signaling, it will be possible to ask much 'cleaner' questions about what kind of drugs will treat certain conditions, and under what circumstances."

###

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

New Insight Into How Flies Fly

PASADENA, Calif. – How does a fly fly, and why should we care? To the first, says Michael Dickinson, a professor of bioengineering at the California Institute of Technology, the short answer is different from what we have thought, and he and his colleagues used a dynamically scaled flapping robot (aka Robofly), a free-flight arena (aka Fly-O-Rama), and a 3D, infrared visual flight simulator (Fly-O-Vision) to prove it.

And we should care, says Dickinson, because the simple motion of a flying fly links a series of fundamental and complex processes within both the physical and biological sciences. Studying a fly may eventually lead to a model that will provide insight into the behavior and robustness of complex systems in general, and, for roboticists, may help them in the design of flying robots that mimic nature.

In a paper entitled "The Aerodynamics of Free Flight Maneuvers in Drosophila," Steven Fry of the University of Zurich, Rosalyn Sayaman, a Caltech research assistant, and Dickinson show how tiny insects use their wings to generate enough torque to overcome inertia, and not--as conventional wisdom has held--friction. The paper will appear in the April 18 issue of the journal Science.

Flies and other dipterans (insects within the family that includes houseflies, hoverflies, and fruit flies) are capable of making rapid 90-degree turns, called saccades, at "extraordinary" speeds, says Dickinson, completing a turn in less than 50 thousandths of a second. That's faster, he says, "than a human eye can blink." To make the turn, a fly must generate enough torque, or twisting force, to offset two forces working against it--the inertia of its own body and the viscous friction of air.
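For a rough sense of scale, the turn described above implies a substantial angular acceleration. A minimal Python sketch, assuming (purely for illustration) that the fly accelerates uniformly from rest through the entire 90-degree turn:

```python
import math

# Angular acceleration implied by a 90-degree saccade completed in 50 ms,
# under the simplifying assumption of uniform acceleration from rest.
theta_rad = math.pi / 2   # 90-degree turn, in radians
t_s = 0.050               # turn duration (the upper bound quoted above)

alpha = 2 * theta_rad / t_s**2   # from theta = 0.5 * alpha * t^2
print(f"Mean angular acceleration: {alpha:.0f} rad/s^2")  # ~1257 rad/s^2
```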

Until now, it's always been assumed that viscosity, a resistance to flow, is the enemy for small critters, while inertia is the bane of larger animals like birds. But the theory has never been tested.

To study the aerodynamics of active flight maneuvers, the researchers employed infrared, three-dimensional, high-speed video (the Fly-O-Vision) to capture the fruit fly, Drosophila melanogaster, performing saccades in free flight. The animals were released in a large, enclosed arena (the Fly-O-Rama) and lured toward a vertical cylinder laced with a drop of vinegar. As a fly approached, the cylinder loomed within its field of view, triggering a rapid turn that helped the fly avoid a collision.

Many flies performed saccades within the intersecting fields of view of the three cameras, which allowed the researchers to film the turn, measure the wing and body position throughout the maneuver, and calculate the velocity of its path.

The improved resolution of the 3D video showed that, despite its small size and slow speed (relative to other animals), the fly performed a banked turn, similar to those observed in larger fly species, first accelerating, then slowing as it changed heading, then accelerating again at the end of the turn. This suggests that the timing and velocity of the small fly's turns are dominated by body inertia, not friction.

To see if the measured patterns of wing motion were sufficient to explain the saccades, the researchers played the sequences through a dynamically scaled robotic model (you guessed it, Robofly) to measure the aerodynamic forces as they varied over time. They found that the timing and torque they calculated from the fly's body morphology and body motion in the video matched "amazingly well," says Dickinson, with the calculations derived from the wing motion of the robot. These results, he notes, further support the notion that even in small insects the torques created by the wings act primarily to overcome inertia and not friction.

Although these experiments were performed on tiny fruit flies, says Dickinson, the results impact nearly all insects, because the importance of inertia over friction increases with the size of the animal. The results also provide a basis for future research on the neural and mechanical basis of insect flight, and, for roboticists, may offer insights for the design of biomimetic flying devices. It may also yield a little respect for the common fly. As Rosalyn Sayaman puts it on her web page, "I now love flies. I used to just shoo and swat. Now, I can't even swat anymore."

Note to Editors: Video and still photos are available.

Contact: Mark Wheeler (626) 395-8733 wheel@caltech.edu

Visit the Caltech Media Relations Website at http://pr.caltech.edu/media

###

Writer: 
MW

Astronomers find new evidence about universe's heaviest phase of star formation

New distance measurements from faraway galaxies further strengthen the view that the strongest burst of star formation in the universe occurred about two billion years after the Big Bang.

Reporting in the April 17 issue of the journal Nature, California Institute of Technology astronomers Scott Chapman and Andrew Blain, along with their United Kingdom colleagues Ian Smail and Rob Ivison, provide the redshifts of 10 extremely distant galaxies; the measurements strongly suggest that the most luminous galaxies ever detected were produced over a rather short period of time. Astronomers have long known that certain galaxies can be seen about a billion years after the Big Bang, but a relatively recent discovery of a type of extremely luminous galaxy -- one that is very faint in visible light, but much brighter at longer wavelengths -- is the key to the new results.

This type of galaxy was first found in 1997 using a new and much more sensitive camera for observing at submillimeter wavelengths (longer than the wavelengths of visible light, but somewhat shorter than radio waves). The camera was attached to the James Clerk Maxwell Telescope (JCMT), on Mauna Kea in Hawaii.

Submillimeter radiation is produced by warm galactic "dust" -- micron-sized solid particles similar to diesel soot that are interspersed between the stars in galaxies. Based on their unusual spectra, experts have thought it possible that these "submillimeter galaxies" could be found even closer in time to the Big Bang.

Because the JCMT cannot see details of the sky that are as fine as details seen by telescopes operating at visible and radio wavelengths, and because the submillimeter galaxies are very faint, researchers have had a hard time determining the precise locations of the submillimeter galaxies and measuring their distances. Without an accurate distance, it is difficult to tell how much energy such galaxies produce; and with no idea of how powerful they are, it is uncertain how important such galaxies are in the universe.

The new results combine the work of several instruments, including the Very Large Array in New Mexico (the world's most sensitive radio telescope), and one of the 10-meter telescopes at the W. M. Keck Observatory on Mauna Kea, which are the world's largest optical telescopes. These instruments first pinpointed the position of the submillimeter galaxies, and then measured their distances. Today's article in Nature reports the first 10 distances obtained.

The Keck telescope found the faint spectral signature of radiation that is emitted, at a single ultraviolet wavelength of 0.1215 micrometers, by hydrogen gas excited either by a large number of hot, young stars or by the energy released as matter spirals into a black hole at the core of a galaxy. The radiation is detected at a longer, redder wavelength, having been Doppler shifted by the rapid expansion of the universe while the light has been traveling to Earth.
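The shift follows the standard cosmological relation: observed wavelength equals rest wavelength times (1 + z), where z is the redshift. A short Python illustration with assumed redshifts (illustrative values, not the measured redshifts from the paper):

```python
# Redshifted hydrogen Lyman-alpha line: lambda_obs = lambda_rest * (1 + z).
lam_rest_um = 0.1215   # rest-frame ultraviolet wavelength, micrometers

for z in (1.0, 2.0, 3.0):   # assumed redshifts, for illustration only
    lam_obs_um = lam_rest_um * (1 + z)
    print(f"z = {z:.0f}: line observed at {lam_obs_um:.3f} micrometers")
# By z ~ 3 the line has moved from the far ultraviolet to ~0.486 micrometers,
# in the visible band where ground-based spectrographs can record it.
```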

All 10 of the submillimeter galaxies that were detected emitted the light that we see today when the universe was less than half its present age. The most distant produced its light only two billion years after the Big Bang (12 billion years ago). Thus, the submillimeter galaxies are now confirmed to be the most luminous type of galaxies in the universe, several hundred times more luminous than our Milky Way, and 10 trillion times more luminous than the sun.

It is likely that the formation of such extreme objects had to wait for galaxies of a certain size to grow from an initially almost uniform universe and to become enriched with carbon, silicon, and oxygen from the first stars. The time when the submillimeter galaxies shone brightly can also provide information about how the sizes and makeup of galaxies developed at earlier times.

By detecting these galaxies, the Caltech astronomers have provided an accurate census of the most extreme galaxies in the universe at the peak of their activity and witnessed the most dramatic period of star buildup yet seen in the Milky Way and nearby galaxies. Now that their distances are known accurately, other measurements can be made to investigate the details of their power source, and to find out what galaxies will result when their intense bursts of activity come to an end.

The James Clerk Maxwell Telescope is at http://www.jach.hawaii.edu/JACpublic/JCMT. The Very Large Array is at http://www.aoc.nrao.edu/vla/html. The Keck Observatory is at http://www.astro.caltech.edu/mirror/keck/index.html.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Discovery of giant planar Hall effect could herald a generation of "spintronics" devices

A basic discovery in magnetic semiconductors could result in a new generation of devices for sensors and memory applications -- and perhaps, ultimately, quantum computation -- physicists from the California Institute of Technology and the University of California at Santa Barbara have announced.

The new phenomenon, called the giant planar Hall effect, has to do with what happens when the spins of current-carrying electrons are manipulated. For several years scientists have been engaged in exploiting electron spin for the creation of a new generation of electronic devices --hence the term "spintronics" -- and the Caltech-UCSB breakthrough offers a new route to realizing such devices.

The term "spintronics" is used instead of "electronics" because the technology is based on a new paradigm, says Caltech physics professor Michael Roukes. Rather than merely using an electric current to make them work, spintronic devices will also rely on the magnetic orientation (or spin) of the electrons themselves. "In regular semiconductors, the spin freedom of the electrical current carriers does not play a role," says Roukes. "But in the magnetic semiconductors we've studied, the spin polarization -- that is, the magnetism -- of electrical current carriers is highly ordered. Consequently, it can act as an important factor in determining the current flow in the electrical devices."

In the naturally unpolarized state, there is no particular order between one electron's spin and its neighbor's. If the spins are aligned, the result can be a change in resistance to current flow.

Such changes in resistance have long been known for metals, but the current research is the first time that semiconductor material has been constructed in such a way that spin-charge interaction is manifested as a very dramatic change in resistivity. The Caltech-UCSB team managed to accomplish this by carefully preparing a ferromagnetic semiconductor material made of gallium manganese arsenide (GaMnAs). The widely used current technology, by contrast, employs sandwiched magnetic metal structures for magnetic storage.

"You have much more freedom with semiconductors than metals for two reasons," Roukes explains. "First, semiconductor material can be made compatible with the mainstream of semiconductor electronics; and second, there are certain phenomena in semiconductors that have no analogies in metals."

Practical applications of spintronics will likely include new paradigms in information storage, due to the superiority of such semiconductor materials to the currently available dynamic random access memory (or DRAM) chips. This is because the semiconductor spintronics would be "nonvolatile," meaning that once the spins were aligned, the system would be as robust as a metal bar that has been permanently magnetized.

The spintronics semiconductors could also conceivably be used in magnetic logic to replace transistors as switches in certain applications. In other words, spin alignment would be used as a logic gate for faster circuits with lower energy usage.

Finally, the technology could possibly be improved so that the quantum states of the spins themselves might be used for logic gates in future quantum computers. Several research teams have demonstrated quantum logic gates, but the setup is the size of an entire laboratory, rather than at chip scale, and therefore still unsuitable for device integration. By contrast, a spintronics-based device might be constructed as a solid-state system that could be integrated into microchips.

A full description of the Caltech-UCSB team's work appeared in the March 14 issue of Physical Review Letters [Tang et al, Vol 90, 107201 (2003)]. The article is available by subscription, but the main site can be accessed at http://prl.aps.org/. This discovery is also featured in the "News and Views" section of the forthcoming issue of Nature Materials.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Science begins for LIGO in quest to detect gravitational waves

Armed with one of the most advanced scientific instruments of all time, physicists are now watching the universe intently for the first evidence of gravitational waves. First predicted by Albert Einstein in 1916 as a consequence of the general theory of relativity, gravitational waves have never been detected directly.

In Einstein's theory, alterations in the shape of concentrations of mass (or energy) have the effect of warping space-time, thereby causing distortions that propagate through the universe at the speed of light. A new generation of detectors, led by the Laser Interferometer Gravitational-Wave Observatory (LIGO), is coming into operation and promises sensitivities that will be capable of detecting a variety of catastrophic events, such as the gravitational collapse of stars or the coalescence of compact binary systems.

The commissioning of LIGO and improvements in the sensitivity are coming very rapidly, as the final interferometer systems are implemented and the limiting noise sources are uncovered and mitigated. In fact, the commissioning has made such rapid progress that LIGO is already capable of performing some of the most sensitive searches ever undertaken for gravitational waves. A similar device in Hannover, Germany (a German–U.K. collaboration known as GEO) is also getting underway, and these instruments are being used together as the initial steps in building a worldwide network of gravitational-wave detectors.

The first data was taken during a 17-day data run in September 2002. That data has now been analyzed for the presence of gravitational waves, and results are being presented at the American Physical Society meeting in Philadelphia. No sources have yet been detected, but new limits on gravitational radiation from such sources as binary neutron star inspirals, selected pulsars in our galaxy, and background radiation from the early universe are reported.

Realistically, detections are not expected at the present sensitivities. A second data run is now underway with significantly better sensitivity, and further improvements are expected over the next couple of years.

As the initial LIGO interferometers start to put new limits on gravitational-wave signals, the LIGO Lab, the LIGO Scientific Collaboration, and international partners are proposing an advanced LIGO to improve the sensitivity by more than a factor of 10 beyond the goals of the present instrument. It is anticipated that this new instrument may see gravitational-wave sources as often as daily, with excellent signal strengths, allowing details of the waveforms to be read off and compared with theories of neutron stars, black holes, and other highly relativistic objects. The improvement in sensitivity will allow the one-year planned observation time of the initial LIGO to be equaled in a matter of hours. The National Science Foundation has supported LIGO, and a collaboration between Caltech and MIT was responsible for its construction. A scientific community of more than 400 scientists from around the world is now involved in research at LIGO.
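The "matter of hours" claim follows from a simple scaling argument, sketched here in Python under the usual assumption that sources are spread roughly uniformly through space:

```python
# A 10x gain in strain sensitivity lets the detector see sources 10x farther
# away; the volume surveyed, and hence the expected event rate, grows as the
# cube of that distance.
sensitivity_gain = 10.0
rate_gain = sensitivity_gain ** 3   # 1000x the event rate

hours_per_year = 365.25 * 24
equivalent_h = hours_per_year / rate_gain
print(f"One initial-LIGO year ~ {equivalent_h:.1f} advanced-LIGO hours")  # ~8.8
```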

Writer: 
RT

Caltech applied physicists invent waveguide to bypass diffraction limits for new optical devices

Four hundred years ago, a scientist could peer into one of the newfangled optical microscopes and see microorganisms, but nothing much smaller. Nowadays, a scientist can look in the latest generation of lens-based optical microscopes and also see, well, microorganisms, but nothing much smaller. The limiting factor has always been a fundamental property of the wave nature of light that fuzzes out images of objects much smaller than the wavelength of the light that illuminates those objects. This has hampered the ability to make and use optical devices smaller than the wavelength. But a new technological breakthrough at the California Institute of Technology could sidestep this longstanding barrier.

Caltech applied physicist Harry Atwater and his associates have announced their success in creating "the world's smallest waveguide, called a plasmon waveguide, for the transport of energy in nanoscale systems." In essence, they have created a sort of "light pipe" constructed of a chain-array of several dozen microscopic metal slivers that allows light to hop along the chain and circumvent the diffraction limit. With such technology, there is the clear possibility that optical components can be constructed for a huge number of technological applications in which the diffraction limit is troublesome.

"What this represents is a fundamentally new approach for optical devices in which diffraction is not a limit," says Atwater.

Because the era of nanoscale devices is rapidly approaching, Atwater says, the future bodes well for extremely tiny optical devices that, in theory, would be able to connect to molecules and someday even to individual atoms.

At present, the Atwater team's plasmon waveguide looks something like a standard glass microscope slide. Fabricated on the glass plate by means of electron beam lithography is a series of nanoparticles, each about 30 nanometers (30 billionths of a meter, in other words) in width, about 30 nanometers in height, and about 90 nanometers in length. These etched "rods" are arranged in a parallel series like railroad ties, with such a tiny space between them that light energy can move along with very little radiated loss.

Therefore, if light with a wavelength of 590 nanometers, for example, passes through the nanoparticles, the light is confined to the smaller dimensions of the nanoparticles themselves. The light energy then "hops" between the individual elements in a process known as dipole-dipole coupling, at a rate of propagation considerably slower than the speed of light in a vacuum.
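A quick comparison in Python shows how far below the conventional limit the device operates, taking half the wavelength as a rough stand-in for the diffraction limit of lens-based optics (an approximation for illustration, not a figure from the paper):

```python
# Conventional optics vs. the plasmon waveguide's confinement scale.
wavelength_nm = 590.0                      # free-space wavelength from the example
diffraction_limit_nm = wavelength_nm / 2   # rough lambda/2 confinement limit

particle_width_nm = 30.0                   # nanoparticle cross-section

factor = diffraction_limit_nm / particle_width_nm
print(f"Lens-based confinement: ~{diffraction_limit_nm:.0f} nm")
print(f"Waveguide element size: ~{particle_width_nm:.0f} nm ({factor:.0f}x smaller)")
```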

In addition to their functionality as miniature optical waveguides, these structures are also sensitive to the presence of biomolecules. Thus, a virus or even a single molecule of nerve gas could conceivably be detected with an optical device designed for biowarfare sensing. The potential applications include electronic devices that could detect single molecules of a pathogen, for example.

The ultrasmall waveguide could also be used to optically interconnect to electronic devices, because individual transistors on a microchip are already too small to be seen in a conventional optical microscope.

A description of the device will appear in the April 2003 issue of the journal Nature Materials. The other Caltech authors of the paper were Stefan A. Maier, a former graduate student and now postdoctoral researcher at Caltech, who was responsible for the working device, and Pieter G. Kik, also a postdoctoral researcher. Other authors were Sheffer Meltzer, Elad Harel, Bruce E. Koel, and Ari A.G. Requicha, all from the University of Southern California.

The nanoparticle structures were fabricated at the Jet Propulsion Laboratory's facility for electron beam lithography, with the help of JPL employees Richard Muller, Paul Maker, and Pierre Echternach.

The research was sponsored by the Air Force Office of Scientific Research and was also supported in part by grants from the National Science Foundation and Caltech's Center for Science and Engineering of Materials.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT
