Astronomers "weigh" pulsar's planets

For the first time, the planets orbiting a pulsar have been "weighed" by precisely measuring variations in the time it takes them to complete an orbit, according to a team of astronomers from the California Institute of Technology and Pennsylvania State University.

Reporting at the summer meeting of the American Astronomical Society, Caltech postdoctoral researcher Maciej Konacki and Penn State astronomy professor Alex Wolszczan announced today that the masses of two of the three known planets orbiting a rapidly spinning pulsar 1,500 light-years away in the constellation Virgo have been successfully measured. The planets are 4.3 and 3.0 times the mass of Earth, with an error of 5 percent.

The two measured planets are nearly in the same orbital plane. If the third planet is co-planar with the other two, it is about twice the mass of the moon. These results provide compelling evidence that the planets must have evolved from a disk of matter surrounding the pulsar, in a manner similar to that envisioned for planets around sun-like stars, the researchers say.

The three pulsar planets, with their orbits spaced in an almost exact proportion to the spacings between Mercury, Venus, and Earth, form a planetary system that is astonishingly similar in appearance to the inner solar system. They are clearly the precursors to any Earth-like planets that might be discovered around nearby sun-like stars by future space interferometers such as the Space Interferometry Mission or the Terrestrial Planet Finder.

"Surprisingly, the planetary system around the pulsar 1257+12 resembles our own solar system more than any extrasolar planetary system discovered around a sun-like star," Konacki said. "This suggests that planet formation is more universal than anticipated."

The first planets orbiting a star other than the sun were discovered by Wolszczan and Dale Frail around an old, rapidly spinning neutron star, PSR B1257+12, during a large search for pulsars conducted in 1990 with the giant, 305-meter Arecibo radio telescope. Neutron stars are often observable as radio pulsars because they reveal themselves as sources of highly periodic, pulse-like bursts of radio emission. They are extremely compact and dense leftovers from the supernova explosions that mark the deaths of massive, normal stars.

The exquisite precision of millisecond pulsars offers a unique opportunity to search for planets and even large asteroids orbiting a pulsar. This "pulsar timing" approach is analogous to the well-known Doppler technique so successfully used by optical astronomers to identify planets around nearby stars. Essentially, the orbiting object induces a reflex motion in the pulsar, which perturbs the arrival times of the pulses. However, just like the Doppler method, pulsar timing is sensitive only to motions along the line of sight: it can detect only those pulse arrival-time variations caused by the pulsar's wobble along that line. The consequence of this limitation is that one can measure only a projection of the planetary motion onto the line of sight and cannot determine the true size of the orbit.
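
As a rough illustration of the scale of this signal, here is a back-of-the-envelope sketch (not the team's analysis): the timing residual is essentially the light-travel time across the pulsar's small reflex orbit. The pulsar mass and orbital radius below are assumed round numbers for one of the PSR B1257+12 planets.

    # Rough size of the timing signal a planet imprints on a pulsar (circular, edge-on orbit).
    # Assumed illustrative values: a 1.4-solar-mass pulsar and a 4.3-Earth-mass planet
    # on a ~0.36 AU orbit (roughly the 66.5-day planet of PSR B1257+12).
    C = 3.0e8            # speed of light, m/s
    AU = 1.496e11        # astronomical unit, m
    M_SUN = 1.989e30     # kg
    M_EARTH = 5.972e24   # kg

    m_pulsar = 1.4 * M_SUN
    m_planet = 4.3 * M_EARTH
    a_planet = 0.36 * AU   # planet's orbital radius (assumed)

    # The pulsar's reflex orbit is smaller than the planet's by the mass ratio, and the
    # timing residual is the light-travel time across that reflex orbit.
    a_pulsar = a_planet * m_planet / (m_pulsar + m_planet)
    print(f"timing residual amplitude ~ {a_pulsar / C * 1e3:.1f} ms")   # roughly a millisecond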

Soon after the discovery of the planets around PSR B1257+12, astronomers realized that the heavier two must interact gravitationally in a measurable way, because of a near 3:2 commensurability of their 66.5- and 98.2-day orbital periods. Because the magnitude and the exact pattern of the perturbations resulting from this near-resonance condition depend on the mutual orientation of the planetary orbits and on the planet masses, one can, in principle, extract this information from precise timing observations.
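
A quick check of the near-commensurability, using only the periods quoted above:

    # Near 3:2 resonance: three inner-planet orbits take nearly as long as two outer-planet orbits.
    P_inner, P_outer = 66.5, 98.2            # orbital periods in days (from the article)
    print(P_outer / P_inner)                  # ~1.48, close to 3/2 = 1.5
    print(3 * P_inner, 2 * P_outer)           # ~199.5 days vs ~196.4 days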

Wolszczan showed the feasibility of this approach in 1994 by demonstrating the presence of the predicted perturbation effect in the timing of the planet-bearing pulsar. In fact, it was the first observation of such an effect beyond the solar system, in which resonances between planets and between planetary satellites are commonly observed. In recent years, astronomers have also detected examples of gravitational interactions between giant planets around normal stars.

Konacki and Wolszczan applied the resonance-interaction technique to the microsecond-precision timing observations of PSR B1257+12 made between 1990 and 2003 with the giant Arecibo radio telescope. In a paper to appear in the Astrophysical Journal Letters, they demonstrate that the planetary perturbation signature detectable in the timing data is large enough to obtain surprisingly accurate estimates of the masses of the two planets orbiting the pulsar.

The measurements accomplished by Konacki and Wolszczan remove the possibility that the pulsar planets are much more massive, which would be the case if their orbits were oriented more "face-on" with respect to the sky. In fact, these results represent the first unambiguous identification of Earth-sized planets created from a protoplanetary disk beyond the solar system.

Wolszczan said, "This finding and the striking similarity of the appearance of the pulsar system to the inner solar system provide an important guideline for planning the future searches for Earth-like planets around nearby stars."

Contact: Robert Tindol (626) 395-3631


Caltech Faculty Member Named Scientist of the Year

PASADENA, Calif. — The California Science Center has announced the joint selection of Andrew Lange and Saul Perlmutter as 2003 California Scientist of the Year.

Lange is Marvin L. Goldberger Professor of Physics at the California Institute of Technology in Pasadena, and Perlmutter is a senior scientist and group leader at the Lawrence Berkeley National Laboratory in Berkeley. Using two very different techniques, Lange and Perlmutter have experimentally confirmed a remarkable theory of how the universe expanded and evolved after the "big bang."

Lange and Perlmutter will be recognized during the annual presentation of the California Scientist of the Year and the Amgen Award for Science Teaching Excellence, a special event to honor excellence in scientific achievement and education, on May 8 at the California Science Center in Exposition Park, Los Angeles.

Lange is the 14th Caltech faculty member to be named Scientist of the Year.

The California Science Center established the California Scientist of the Year Award in recognition of the prominent role California plays in the areas of scientific and technological development. A blue-ribbon panel selects a nominee whose work is current and advances the boundaries of any field of science. Of those selected for California Scientist of the Year honors, 11 later became Nobel laureates. The panel concluded that Lange and Perlmutter's discoveries complement each other so well in revealing the nature of the universe that both scientists should be recognized this year.

According to the most widely held theory of cosmic evolution, the universe went through an inflationary phase during which its size rapidly increased and the universe's geometrical structure took on a very specific form: parallel lines never meet, and the angles inside an astronomically sized triangle add up to 180 degrees. Scientists refer to this particular form of geometry as being mathematically "flat." According to the general theory of relativity, a mathematically flat universe places constraints on the amount of mass and energy in the universe. Unfortunately, astronomers could not account for the requisite mass and energy. Therefore, either the standard cosmological (or "big bang") theory was incorrect and the universe's geometrical structure was not that of Euclid, or the astronomers were missing something important.
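
In the standard notation of cosmology (a textbook relation, not taken from the award citation), "flat" means the total density of mass and energy equals the critical density set by the expansion rate H:

    \Omega \equiv \frac{\rho}{\rho_{\mathrm{crit}}} = 1,
    \qquad
    \rho_{\mathrm{crit}} = \frac{3H^{2}}{8\pi G},

so a measured expansion rate fixes how much mass and energy a flat universe must contain.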

Lange studies fluctuations in the cosmic microwave background (CMB) radiation, a relic of the primeval "fireball" that filled the early universe. These signals, which are visible today at microwave frequencies, provide a clear "snapshot" of the embryonic universe at an epoch long before the first stars or galaxies had formed. In general, this radiation reaches the earth uniformly from all directions in the sky. However, at the level of 0.003 percent there is an intricate pattern of fluctuations in the CMB. Using novel detectors developed at the Jet Propulsion Laboratory and flown on a balloon-borne telescope high above Antarctica, Lange's group was able to make the first resolved images of these very faint patterns. The images demonstrate that the radiation fluctuates on an angular scale of one degree, which is exactly what scientists expected from a mathematically flat universe.

Since the 1930s, scientists have known that galaxies are moving away from one another, and there has been a concerted effort to study the rate of this expansion. Prior to Perlmutter's efforts, almost all astronomers expected that the expansion of the universe was slowing, due to the gravitational attraction of galaxies and other matter. However, Perlmutter's group found that the universe is actually expanding at an accelerating rate, as if a "negative pressure" were pushing everything apart. This negative pressure may be what scientists call the cosmological constant, first hypothesized by Albert Einstein in an attempt to prescribe a stable universe but later rejected by him. Perlmutter's estimates of the cosmological constant's magnitude are consistent with Lange's observations of a flat universe.

Lange's work demonstrates that the universe is mathematically flat, and that the standard cosmological theory is correct, while Perlmutter's work indicates that the source of astronomical energy giving rise to a flat universe comes from a type of negative gravitational pressure or dark energy permeating the universe. The nature of this dark energy remains a mystery.

# # #

MEDIA CONTACT: Jill Perry, Media Relations Director (626) 395-3226 jperry@caltech.edu

Visit the Caltech media relations web site: http://pr.caltech.edu/media


Caltech astrophysicist Shrinivas Kulkarni elected to National Academy of Sciences

Shrinivas Kulkarni, who is the MacArthur Professor of Astronomy and Planetary Science at the California Institute of Technology, has been elected to the National Academy of Sciences.

Kulkarni is a leading authority on exotic astrophysical phenomena such as gamma-ray bursts, brown dwarfs, and millisecond pulsars, and has been associated with many of the major advances in understanding the universe that have been made over the last decade.

In 1982, along with Don Backer of UC Berkeley, Kulkarni discovered the first millisecond pulsar. These pulsars have turned out to be very precise natural clocks with many applications. In 1995, Kulkarni led a group that discovered the first "brown dwarf." Hypothesized since the sixties, a brown dwarf is a "failed star," with a mass too low to shine brightly like our own sun but too high for it to be classified as a planet. Brown dwarfs are now considered to be quite abundant. In 1997, he and his colleagues demonstrated that gamma-ray bursts were extragalactic in origin, and Kulkarni has led many investigations since then that have further uncovered the nature of the phenomenon.

Kulkarni has been a prime mover in the quest to improve the resolution of optical instruments with a technique known as "interferometry," which exploits the wave nature of light in such a way that light from two or more mirrors can be combined for a superior image. Working in collaboration with Jet Propulsion Laboratory engineers, his research team used the testbed interferometer at Caltech's Palomar Observatory in 2000 to obtain the most precise distance to date for a Cepheid variable, a type of regularly pulsating star that has long been a standard of reference in the "cosmic yardstick" used to gauge astronomical distances.

Kulkarni is heavily involved in the Keck Interferometer and is the interdisciplinary scientist for NASA's ambitious Space Interferometry Mission (SIM), which is expected to be launched in 2009. With SIM, astronomers hope to measure and catalog planets around nearby stars.

A Pasadena resident, Kulkarni earned his master's degree in 1978 from the Indian Institute of Technology and his doctorate from UC Berkeley in 1983. He came to Caltech in 1985 as a research fellow, and received a faculty appointment in 1987. He is also a former Presidential Young Investigator and Sloan Research Fellow, and winner of the Waterman Prize.

Kulkarni joins 71 other prominent scientists this year as new members, bringing the total active membership to 1,922. Caltech currently has 67 other faculty members and three trustees who are members of the academy.

Contact: Robert Tindol (626) 395-3631

 


Astronomers find new evidence about universe's heaviest phase of star formation

New distance measurements from faraway galaxies further strengthen the view that the strongest burst of star formation in the universe occurred about two billion years after the Big Bang.

Reporting in the April 17 issue of the journal Nature, California Institute of Technology astronomers Scott Chapman and Andrew Blain, along with their United Kingdom colleagues Ian Smail and Rob Ivison, provide the redshifts of 10 extremely distant galaxies which strongly suggest that the most luminous galaxies ever detected were produced over a rather short period of time. Astronomers have long known that certain galaxies can be seen about a billion years after the Big Bang, but a relatively recent discovery of a type of extremely luminous galaxy -- one that is very faint in visible light, but much brighter at longer wavelengths -- is the key to the new results.

This type of galaxy was first found in 1997 using a new and much more sensitive camera for observing at submillimeter wavelengths (longer than the wavelengths of visible light, but somewhat shorter than radio waves). The camera was attached to the James Clerk Maxwell Telescope (JCMT), on Mauna Kea in Hawaii.

Submillimeter radiation is produced by warm galactic "dust" -- micron-sized solid particles similar to diesel soot that are interspersed between the stars in galaxies. Based on their unusual spectra, experts have thought it possible that these "submillimeter galaxies" could be found even closer in time to the Big Bang.

Because the JCMT cannot see details of the sky that are as fine as details seen by telescopes operating at visible and radio wavelengths, and because the submillimeter galaxies are very faint, researchers have had a hard time determining the precise locations of the submillimeter galaxies and measuring their distances. Without an accurate distance, it is difficult to tell how much energy such galaxies produce; and with no idea of how powerful they are, it is uncertain how important such galaxies are in the universe.

The new results combine the work of several instruments, including the Very Large Array in New Mexico (the world's most sensitive radio telescope), and one of the 10-meter telescopes at the W. M. Keck Observatory on Mauna Kea, which are the world's largest optical telescopes. These instruments first pinpointed the position of the submillimeter galaxies, and then measured their distances. Today's article in Nature reports the first 10 distances obtained.

The Keck telescope found the faint spectral signature of radiation that is emitted, at a single ultraviolet wavelength of 0.1215 micrometers, by hydrogen gas excited by either a large number of hot, young stars or by the energy released as matter spirals into a black hole at the core of a galaxy. The radiation is detected at a longer, redder wavelength, having been Doppler shifted by the rapid expansion of the universe while the light has been traveling to Earth.
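
As a simple illustration of how the shifted hydrogen line is read (the redshift value below is hypothetical, chosen only to show the arithmetic):

    # Cosmological redshift stretches an emitted wavelength by a factor of (1 + z).
    LYMAN_ALPHA = 0.1215        # micrometers, emitted by hydrogen (from the article)
    z = 2.5                     # hypothetical redshift, for illustration only
    observed = LYMAN_ALPHA * (1 + z)
    print(f"observed wavelength ~ {observed:.3f} micrometers")   # ~0.425, i.e. visible blue light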

All 10 of the submillimeter galaxies that were detected emitted the light that we see today when the universe was less than half its present age. The most distant produced its light only two billion years after the Big Bang (12 billion years ago). Thus, the submillimeter galaxies are now confirmed to be the most luminous type of galaxies in the universe, several hundred times more luminous than our Milky Way, and 10 trillion times more luminous than the sun.

It is likely that the formation of such extreme objects had to wait for galaxies to grow to a sufficient size from an initially almost uniform universe, and to become enriched with carbon, silicon, and oxygen from the first stars. The time when the submillimeter galaxies shone brightly can also provide information about how the sizes and makeup of galaxies developed at earlier times.

By detecting these galaxies, the Caltech astronomers have provided an accurate census of the most extreme galaxies in the universe at the peak of their activity and witnessed the most dramatic period of star buildup yet seen in the Milky Way and nearby galaxies. Now that their distances are known accurately, other measurements can be made to investigate the details of their power source, and to find out what galaxies will result when their intense bursts of activity come to an end.

The James Clerk Maxwell Telescope is at http://www.jach.hawaii.edu/JACpublic/JCMT. The Very Large Array is at http://www.aoc.nrao.edu/vla/html. The Keck Observatory is at http://www.astro.caltech.edu/mirror/keck/index.html.

Contact: Robert Tindol (626) 395-3631


Discovery of giant planar Hall effect could herald a generation of "spintronics" devices

A basic discovery in magnetic semiconductors could result in a new generation of devices for sensors and memory applications -- and perhaps, ultimately, quantum computation -- physicists from the California Institute of Technology and the University of California at Santa Barbara have announced.

The new phenomenon, called the giant planar Hall effect, has to do with what happens when the spins of current-carrying electrons are manipulated. For several years scientists have been engaged in exploiting electron spin for the creation of a new generation of electronic devices --hence the term "spintronics" -- and the Caltech-UCSB breakthrough offers a new route to realizing such devices.

The term "spintronics" is used instead of "electronics" because the technology is based on a new paradigm, says Caltech physics professor Michael Roukes. Rather than merely using an electric current to make them work, spintronic devices will also rely on the magnetic orientation (or spin) of the electrons themselves. "In regular semiconductors, the spin freedom of the electrical current carriers does not play a role," says Roukes. "But in the magnetic semiconductors we've studied, the spin polarization -- that is, the magnetism -- of electrical current carriers is highly ordered. Consequently, it can act as an important factor in determining the current flow in the electrical devices."

In the naturally unpolarized state, there is no particular order between one electron's spin and its neighbor's. If the spins are aligned, the result can be a change in resistance to current flow.

Such changes in resistance have long been known in metals, but the current research is the first time that a semiconductor material has been constructed in such a way that the spin-charge interaction is manifested as a very dramatic change in resistivity. The Caltech-UCSB team accomplished this by carefully preparing a ferromagnetic semiconductor material made of gallium manganese arsenide (GaMnAs). The widely used current technology for magnetic storage, by contrast, employs sandwiched magnetic metal structures.

"You have much more freedom with semiconductors than metals for two reasons," Roukes explains. "First, semiconductor material can be made compatible with the mainstream of semiconductor electronics; and second, there are certain phenomena in semiconductors that have no analogies in metals."

Practical applications of spintronics will likely include new paradigms in information storage, due to the superiority of such semiconductor materials to the currently available dynamic random access memory (or DRAM) chips. This is because the semiconductor spintronics would be "nonvolatile," meaning that once the spins were aligned, the system would be as robust as a metal bar that has been permanently magnetized.

The spintronics semiconductors could also conceivably be used in magnetic logic to replace transistors as switches in certain applications. In other words, spin alignment would be used as a logic gate for faster circuits with lower energy usage.

Finally, the technology could possibly be improved so that the quantum states of the spins themselves might be used for logic gates in future quantum computers. Several research teams have demonstrated quantum logic gates, but the setups are the size of an entire laboratory rather than chip scale, and are therefore still unsuitable for device integration. By contrast, a spintronics-based device might be constructed as a solid-state system that could be integrated into microchips.

A full description of the Caltech-UCSB team's work appeared in the March 14 issue of Physical Review Letters [Tang et al, Vol 90, 107201 (2003)]. The article is available by subscription, but the main site can be accessed at http://prl.aps.org/. This discovery is also featured in the "News and Views" section of the forthcoming issue of Nature Materials.

Contact: Robert Tindol (626) 395-3631


Six Caltech Professors Awarded Sloan Research Fellowships

PASADENA, Calif.— Six Caltech professors recently received Alfred P. Sloan Research Fellowships for 2003.

The Caltech recipients in the field of chemistry are Paul David Asimow, assistant professor of geology and geochemistry, and Linda C. Hsieh-Wilson, Jonas C. Peters, and Brian M. Stoltz, all assistant professors of chemistry. In mathematics, a Sloan Fellowship was awarded to Danny Calegari, associate professor of mathematics, and in neuroscience, to Athanassios G. Siapas, assistant professor of computation and neural systems.

Each Sloan Fellow receives a grant of $40,000 for a two-year period. The grants of unrestricted funds are awarded to young researchers in the fields of physics, chemistry, computer science, mathematics, neuroscience, computational and evolutionary molecular biology, and economics. The grants are given to pursue diverse fields of inquiry and research, and to allow young scientists the freedom to establish their own independent research projects at a pivotal stage in their careers. The Sloan Fellows are selected on the basis of "their exceptional promise to contribute to the advancement of knowledge."

From over 500 nominees, a total of 117 young scientists and economists from 50 different colleges and universities in the United States and Canada, including Caltech's six, were selected to receive a Sloan Research Fellowship.

Twenty-eight former Sloan Fellows have received Nobel prizes.

"It is a terrific honor to receive this award and to be a part of such a tremendous tradition of excellence within the Sloan Foundation," said Stoltz. Asimow commented that he will use his Sloan Fellowship to "support further investigation into the presence of trace concentrations of water in the deep earth... I'm pleased because funds that are unattached to any particular grant are enormously useful for seeding new and high-risk projects that are not quite ready to turn into proposals." On his research, Peters said, "The Sloan award will provide invaluable seed money for work we've initiated in the past few months regarding nitrogen reduction using molecular iron systems."

The Alfred P. Sloan Research Fellowship program was established in 1955 by Alfred P. Sloan, Jr., who was the chief executive officer of General Motors for 23 years. Its objective is to encourage research by young scholars at a time in their careers when other support may be difficult to obtain. It is the oldest program of the Alfred P. Sloan Foundation and one of the oldest fellowship programs in the country.

Contact: Deborah Williams-Hedges (626) 395-3227 debwms@caltech.edu

Visit the Caltech Media Relations Web site at: http://pr.caltech.edu/media

###


Quick action by astronomers worldwide leads to new insights on mysterious gamma-ray bursts

Scientists "arriving quickly on the scene" of an October 4 gamma-ray burst have announced that their rapid accumulation of data has provided new insights about this exotic astrophysical phenomenon. The researchers have seen, for the first time, ongoing energizing of the burst afterglow for more than half an hour after the initial explosion.

The findings support the "collapsar" model, in which the core of a star 15 times more massive than the sun collapses into a black hole. The black hole's spin, or magnetic fields, may be acting like a slingshot, flinging material into the surrounding debris.

The prompt observation—and by far the most detailed to date—was made possible by several ground- and space-based observatories operating in tandem. The blast was initially detected by NASA's High-Energy Transient Explorer (HETE) satellite, and follow-up observations were quickly undertaken using ground-based robotic telescopes and fast-thinking researchers around the globe. The results are reported in the March 20 issue of the journal Nature.

"If a gamma-ray burst is the birth cry of a black hole, then the HETE satellite has just allowed us into the delivery room," said Derek Fox, a postdoctoral researcher at the California Institute of Technology and lead author of the Nature paper. Fox discovered the afterglow, or glowing embers of the burst, using the Oschin 48-inch telescope located at Caltech's Palomar Observatory.

Gamma-ray bursts shine hundreds of times brighter than a supernova, or as bright as a million trillion suns. The mysterious bursts are common, yet random and fleeting. The gamma-ray portion of a burst typically lasts from a few milliseconds to a couple of minutes. An afterglow, caused by shock waves from the explosion sweeping up matter and ramming it into the region around the burst, can linger for much longer, releasing energy in X rays, visible light, and radio waves. It is from the studies of such afterglows that astronomers can hope to learn more about the origins and nature of these extreme cosmic explosions.

This gamma-ray burst, called GRB021004, appeared on October 4, 2002, at 8:06 a.m. EDT. Seconds after HETE detected the burst, an e-mail providing accurate coordinates was sent to observatories around the world, including Caltech's Palomar Observatory. Fox pinpointed the afterglow shortly afterward from images captured by the Oschin Telescope within minutes of the burst, and notified the astronomical community through a rapid e-mail system operated by NASA for the follow-up studies of gamma-ray bursts. Then the race was on, as scientists in California, across the Pacific, Australia, Asia, and Europe employed more than 50 telescopes to zoom in on the afterglow before the approaching sunrise.

At about the same time, the afterglow was detected by the Automated Response Telescope (ART) in Japan, a 20-centimeter instrument located in Wako, a Tokyo suburb, and operated by the Japanese research institute RIKEN. The ART started observing the region a mere 193 seconds after the burst, but it took a few days for these essential observations to be properly analyzed and distributed to the astronomical community.

Analysis of these rapid observations produced a surprise: fluctuations in brightness, which scientists interpreted as evidence for a continued injection of energy into the afterglow well after the burst occurred. According to Shri Kulkarni, who is the MacArthur Professor of Astronomy and Planetary Science at Caltech, the newly observed energizing of the burst afterglow indicates that the power must have been provided by whatever object produced the gamma-ray burst itself.

"This ongoing energy shows that the explosion is not a simple, one-time event, but that the central source lives for a longer time," said Kulkarni, a co-author of the Nature paper. "This is bringing us closer to a full understanding of these remarkable cosmic flashes."

Added Fox, "In the past we used to be impressed by the energy release in gamma-rays alone. These explosions appear to be more energetic than meets the eye."

Later radio observations undertaken at the Very Large Array in New Mexico and other radio telescopes, including Caltech's Owens Valley Radio Observatory and the IRAM millimeter telescope in France, lend further support to the idea that the explosions continued increasing in energy. "Whatever monster created this burst just refused to die quietly," said D. A. Frail, co-author and a staff astronomer at the Very Large Array.

Fox and his colleagues relied on data from the RIKEN telescope, in Japan, and from the Palomar Oschin Telescope and its Near Earth Asteroid Tracking (NEAT) camera, an instrument that has been roboticized and is currently managed by a team of astronomers at JPL led by Steven Pravdo. The collaboration of the Caltech astronomers and the NEAT team has proven extremely fruitful for the global astronomical community, helping to identify fully 25 percent of the afterglows discovered worldwide since Fox retrofitted the telescope software for this new task in the autumn of 2001.

HETE is the first satellite to provide and distribute accurate burst locations within seconds. The principal investigator for the HETE satellite is George Ricker of the Massachusetts Institute of Technology. HETE was built as a "mission of opportunity" under the NASA Explorer Program, a collaboration among U.S. universities, Los Alamos National Laboratory, and scientists and organizations in Brazil, France, India, Italy, and Japan.

###

Contact: Robert Tindol (626) 395-3631


Caltech computer scientists develop FAST protocol to speed up Internet

Caltech computer scientists have developed a new data transfer protocol for the Internet fast enough to download a full-length DVD movie in less than five seconds.

The protocol is called FAST, standing for Fast Active queue management Scalable Transmission Control Protocol (TCP). The researchers have achieved a speed of 8,609 megabits per second (Mbps) by using 10 simultaneous flows of data over routed paths, the largest aggregate throughput ever accomplished in such a configuration. More importantly, the FAST protocol sustained this speed using standard packet size, stably over an extended period on shared networks in the presence of background traffic, making it adaptable for deployment on the world's high-speed production networks.

The experiment was performed last November during the Supercomputing Conference in Baltimore, by a team from Caltech and the Stanford Linear Accelerator Center (SLAC), working in partnership with the European Organization for Nuclear Research (CERN), and the organizations DataTAG, StarLight, TeraGrid, Cisco, and Level(3).

The FAST protocol was developed in Caltech's Networking Lab, led by Steven Low, associate professor of computer science and electrical engineering. It is based on theoretical work done in collaboration with John Doyle, a professor of control and dynamical systems, electrical engineering, and bioengineering at Caltech, and Fernando Paganini, associate professor of electrical engineering at UCLA. It builds on work from a growing community of theoreticians interested in building a theoretical foundation of the Internet, an effort in which Caltech has been playing a leading role.

Harvey Newman, a professor of physics at Caltech, said the FAST protocol "represents a milestone for science, for grid systems, and for the Internet."

"Rapid and reliable data transport, at speeds of one to 10 Gbps and 100 Gbps in the future, is a key enabler of the global collaborations in physics and other fields," Newman said. "The ability to extract, transport, analyze and share many Terabyte-scale data collections is at the heart of the process of search and discovery for new scientific knowledge. The FAST results show that the high degree of transparency and performance of networks, assumed implicitly by Grid systems, can be achieved in practice. In a broader context, the fact that 10 Gbps wavelengths can be used efficiently to transport data at maximum speed end to end will transform the future concepts of the Internet."

Les Cottrell of SLAC added that progress in speeding up data transfers over long distances is critical to progress in various scientific endeavors. "These include sciences such as high-energy physics and nuclear physics, astronomy, global weather predictions, biology, seismology, and fusion; and industries such as aerospace, medicine, and media distribution.

"Today, these activities often are forced to share their data using literally truck or plane loads of data," Cottrell said. "Utilizing the network can dramatically reduce the delays and automate today's labor intensive procedures."

The ability to demonstrate efficient, high-performance throughput using commercial off-the-shelf hardware and applications, using standard Internet packet sizes supported throughout today's networks, and requiring modifications to the ubiquitous TCP protocol only at the data sender, is an important achievement.

With Internet speeds doubling roughly annually, the performance demonstrated by this collaboration can be expected to become commonly available within the next few years, so the demonstration is important for setting expectations, for planning, and for indicating how to utilize such speeds.

The testbed used in the Caltech/SLAC experiment was the culmination of a multi-year effort, led by Caltech physicist Harvey Newman's group on behalf of the international high energy and nuclear physics (HENP) community, together with CERN, SLAC, Caltech Center for Advanced Computing Research (CACR), and other organizations. It illustrates the difficulty, ingenuity and importance of organizing and implementing leading edge global experiments. HENP is one of the principal drivers and co-developers of global research networks. One unique aspect of the HENP testbed is the close coupling between R&D and production, where the protocols and methods implemented in each R&D cycle are targeted, after a relatively short time delay, for widespread deployment across production networks to meet the demanding needs of data intensive science.

The congestion control algorithm of the current Internet was designed in 1988 when the Internet could barely carry a single uncompressed voice call. The problem today is that this algorithm cannot scale to anticipated future needs, when the networks will be compelled to carry millions of uncompressed voice calls on a single path or support major science experiments that require the on-demand rapid transport of gigabyte to terabyte data sets drawn from multi-petabyte data stores. This protocol problem has prompted several interim remedies, such as using nonstandard packet sizes or aggressive algorithms that can monopolize network resources to the detriment of other users. Despite years of effort, these measures have proved to be ineffective or difficult to deploy.

They are, however, critical steps in our evolution toward ultrascale networks. Sustaining high performance on a global network is extremely challenging and requires concerted advances in both hardware and protocols. Experiments that achieve high throughput either in isolated environments or using interim remedies that bypass protocol instability, idealized or fragile as they may be, push the state of the art in hardware and demonstrate its performance limit. Development of robust and practical protocols will then allow us to make effective use of the most advanced hardware to achieve ideal performance in realistic environments.

The FAST team addresses the protocol issues head-on to develop a variant of TCP that can scale to a multi-gigabit-per-second regime in practical network conditions. The integrated approach that combines theory, implementation, and experiment is what makes their research unique and fundamental progress possible.

Using the standard packet size supported throughout today's networks, the current TCP typically achieves an average throughput of 266 Mbps, averaged over an hour, with a single TCP/IP flow between Sunnyvale, near SLAC, and CERN in Geneva, over a distance of 10,037 kilometers. This represents an efficiency of just 27 percent. FAST TCP sustained an average throughput of 925 Mbps and an efficiency of 95 percent, a 3.5-fold improvement, under the same experimental conditions. With 10 concurrent TCP/IP flows, FAST achieved an unprecedented speed of 8,609 Mbps, at 88 percent efficiency, which is 153,000 times that of today's modems and close to 6,000 times that of the common standard for ADSL (Asymmetric Digital Subscriber Line) connections.
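
The efficiency figures quoted above are simply achieved throughput divided by the capacity of the path; the sketch below assumes a roughly 1 Gbps bottleneck for the single-flow runs, a capacity inferred from the quoted percentages rather than stated in the release.

    # Throughput efficiency = achieved rate / available path capacity.
    capacity_mbps = 1000.0       # assumed ~1 Gbps bottleneck for the single-flow tests
    standard_tcp = 266.0         # Mbps, averaged over an hour (from the article)
    fast_tcp = 925.0             # Mbps (from the article)
    print(f"standard TCP efficiency ~ {standard_tcp / capacity_mbps:.0%}")   # ~27%
    print(f"FAST TCP efficiency     ~ {fast_tcp / capacity_mbps:.0%}")       # ~93%, near the quoted 95%
    print(f"improvement             ~ {fast_tcp / standard_tcp:.1f}x")       # ~3.5x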

The 10-flow experiment sets another first in addition to the highest aggregate speed over routed paths. It is the combination of high capacity and large distance that causes performance problems. Different TCP algorithms can be compared using the product of achieved throughput and the distance of transfer, measured in bit-meter-per-second, or bmps. The world record for the current TCP is 10 peta (1 followed by 16 zeros) bmps, using a nonstandard packet size. The Caltech/SLAC experiment transferred 21 terabytes over six hours between Baltimore and Sunnyvale using standard packet size, achieving 34 peta bmps. Moreover, data was transferred over shared research networks in the presence of background traffic, suggesting that FAST can be backward compatible with the current protocol. The FAST team has started to work with various groups around the world to explore testing and deploying FAST TCP in communities that need multi-Gbps networking urgently.
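
The bandwidth-distance figure can be checked from the numbers in the release; the Baltimore-Sunnyvale distance used below is an assumed great-circle estimate, so the result lands at the same order of magnitude as the quoted 34 peta bmps rather than reproducing it exactly.

    # Bandwidth-distance product: average throughput times transfer distance.
    data_bits = 21e12 * 8          # 21 terabytes (from the article), in bits
    duration_s = 6 * 3600          # six hours (from the article)
    distance_m = 4.0e6             # ~4,000 km Baltimore-Sunnyvale, assumed estimate
    throughput_bps = data_bits / duration_s
    print(f"average throughput ~ {throughput_bps / 1e9:.1f} Gbps")            # ~7.8 Gbps
    print(f"bandwidth-distance ~ {throughput_bps * distance_m:.1e} bmps")     # ~3e16, i.e. tens of peta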

The demonstrations used a 10 Gbps link donated by Level(3) between StarLight (Chicago) and Sunnyvale, as well as the DataTAG 2.5 Gbps link between StarLight and CERN, the Abilene backbone of Internet2, and the TeraGrid facility. The network routers and switches at StarLight and CERN were used together with a GSR 12406 router loaned by Cisco at Sunnyvale, additional Cisco modules loaned at StarLight, and sets of dual Pentium 4 servers each with dual Gigabit Ethernet connections at StarLight, Sunnyvale, CERN, and the SC2002 show floor provided by Caltech, SLAC, and CERN. The project is funded by the National Science Foundation, the Department of Energy, the European Commission, and the Caltech Lee Center for Advanced Networking.

One of the drivers of these developments has been the HENP community, whose explorations at the high-energy frontier are breaking new ground in our understanding of the fundamental interactions, structures, and symmetries that govern the nature of matter and space-time in our universe. The largest HENP projects each encompass 2,000 physicists from 150 universities and laboratories in more than 30 countries.

Rapid and reliable data transport, at speeds of 1 to 10 Gbps and 100 Gbps in the future, is a key enabler of the global collaborations in physics and other fields. The ability to analyze and share many terabyte-scale data collections, accessed and transported in minutes, on the fly, rather than over hours or days as is the current practice, is at the heart of the process of search and discovery for new scientific knowledge. Caltech's FAST protocol shows that the high degree of transparency and performance of networks, assumed implicitly by Grid systems, can be achieved in practice.

This will drive scientific discovery and utilize the world's growing bandwidth capacity much more efficiently than has been possible until now.


Nanodevice breaks 1-GHz barrier

Nanoscientists have achieved a milestone in their burgeoning field by creating a device that vibrates a billion times per second, or at one gigahertz (1 GHz). The accomplishment further increases the likelihood that tiny mechanical devices working at the quantum level can someday supplement electronic devices for new products.

Reporting in the January 30 issue of the journal Nature, California Institute of Technology professor of physics, applied physics, and bioengineering Michael Roukes and his colleagues from Caltech and Case Western Reserve University demonstrate that the tiny mechanism operates at microwave frequencies. The device is a prototype and not yet developed to the point that it is ready to be integrated into a commercial application; nevertheless, it demonstrates the progress being made in the quest to turn nanotechnology into a reality—that is, to make useful devices whose dimensions are less than a millionth of a meter.

This latest effort in the field of NEMS, which is an acronym for "nanoelectromechanical systems," is part of a larger, emerging effort to produce mechanical devices for sensitive force detection and high-frequency signal processing. According to Roukes, the technology could also have implications for new and improved biological imaging and, ultimately, for observing individual molecules through an improved approach to magnetic resonance spectroscopy, as well as for a new form of mass spectrometry that may permit single molecules to be "fingerprinted" by their mass.

"When we think of microelectronics today, we think about moving charges around on chips," says Roukes. "We can do this at high rates of speed, but in this electronic age our mind-set has been somewhat tyrannized in that we typically think of electronic devices as involving only the movement of charge.

"But since 1992, we've been trying to push mechanical devices to ever-smaller dimensions, because as you make things smaller, there's less inertia in getting them to move. So the time scales for inducing mechanical response go way down."

Though a good home computer these days can have a speed of one gigahertz or more, the quest to construct a mechanical device that can operate at such speeds has required multiple breakthroughs in manufacturing technology. In the case of the Roukes group's new demonstration, the use of silicon carbide epilayers to control layer thickness to atomic dimensions and a balanced high-frequency technique for sensing motion that effectively transfers signals to macroscale circuitry have been crucial to success. Both advances were pioneered in the Roukes lab.

Grown on silicon wafers, the films used in the work are prepared in such a way that the end-products are two nearly-identical beams 1.1 microns long, 120 nanometers wide and 75 nanometers thick. When driven by a microwave-frequency electric current while exposed to a strong magnetic field, the beams mechanically vibrate at slightly more than one gigahertz.
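
A standard textbook estimate for the fundamental flexural mode of a doubly clamped beam shows why these dimensions land near one gigahertz. This is only a sketch: the silicon carbide material properties below are assumed bulk values, and the actual device frequency also depends on film stress and clamping details.

    import math

    # Fundamental flexural mode of a doubly clamped beam: f ~ 1.03 * (t / L**2) * sqrt(E / rho)
    E = 430e9        # Young's modulus of silicon carbide, Pa (assumed bulk value)
    rho = 3200.0     # density of silicon carbide, kg/m^3 (assumed bulk value)
    t = 75e-9        # beam thickness, m (from the article)
    L = 1.1e-6       # beam length, m (from the article)

    f = 1.03 * (t / L**2) * math.sqrt(E / rho)
    print(f"estimated resonance ~ {f / 1e9:.2f} GHz")   # of order 1 GHz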

Future work will include improving the nanodevices to better link their mechanical function to real-world applications, Roukes says. The issue of communicating information, or measurements, from the nanoworld to the everyday world we live in is by no means a trivial matter. As devices become smaller, it becomes increasingly difficult to recognize the very small displacements that occur at much shorter time-scales.

Progress with nanoelectromechanical systems working at microwave frequencies offers the potential for improving magnetic resonance imaging to the extent that individual macromolecules could be imaged. This would be especially important in furthering the understanding of the relationship between, for example, the structure and function of proteins. The devices could also be used in a novel form of mass spectrometry, for sensing individual biomolecules in fluids, and perhaps for realizing solid-state manifestations of the quantum bit that could be exploited for future devices such as quantum computers.

The coauthors of the paper are Xue-Ming (Henry) Huang, a graduate student in physics at Caltech; and Chris Zorman and Mehran Mehregany, both engineering professors at Case Western Reserve University.

Contact: Robert Tindol (626) 395-3631


Earthbound experiment confirms theory accounting for sun's scarcity of neutrinos

PASADENA, Calif.-- In the subatomic particle family, the neutrino is a bit like a wayward red-haired stepson. Neutrinos were long ago detected--and even longer ago predicted to exist--but everything physicists know about nuclear processes says there should be a certain number of neutrinos streaming from the sun, yet there are nowhere near enough.

This week, an international team has revealed that the sun's lack of neutrinos is a real phenomenon, probably explainable by conventional theories of quantum mechanics, and not merely an observational quirk or something unknown about the sun's interior. The team, which includes experimental particle physicist Robert McKeown of the California Institute of Technology, bases its observations on experiments involving nuclear power plants in Japan.

The project is referred to as KamLAND because the neutrino detector is located at the Kamioka mine in Japan. Properly shielded from radiation from background and cosmic sources, the detector is optimized for measuring the neutrinos from all 17 nuclear power plants in the country.

Neutrinos are produced in the nuclear fusion process, when two protons fuse together to form deuterium, a positron (in other words, the positively charged antimatter equivalent of an electron), and a neutrino. The deuterium nucleus hangs nearby, while the positron eventually annihilates with an electron. The neutrino, being very unlikely to interact with matter, streams away into space.

Therefore, physicists would normally expect neutrinos to flow from the sun in much the same way that photons flow from a light bulb. In the case of the light bulb, the photons (or bundles of light energy) are thrown out radially and evenly, as if the surface of a surrounding sphere were being illuminated. And because the surface area of a sphere increases by the square of the distance, an observer standing 20 feet away sees only one-fourth the photons of an observer standing at 10 feet.

Thus, observers on Earth expect to see a given number of neutrinos coming from the sun--assuming they know how many nuclear reactions are going on in the sun--just as they expect to know the luminosity of a light bulb at a given distance if they know the bulb's wattage. But such has not been the case. Carefully constructed experiments for detecting the elusive neutrinos have shown that there are far fewer neutrinos than there should be.

A theoretical explanation for this neutrino deficit is that the neutrino "flavor" oscillates between the detectable "electron" neutrino type, and the much heavier "muon" neutrino and maybe even the "tau" neutrino, neither of which can be detected. Utilizing quantum mechanics, physicists estimate that the number of detectable electron neutrinos is constantly changing in a steady rhythm from 100 percent down to a small percentage and back again.

Therefore, the theory says that the reason we see only about half as many neutrinos from the sun as we should be seeing is because, outside the sun, about half the electron neutrinos are at that moment one of the undetectable flavors.

The triumph of the KamLAND experiment is that physicists for the first time can observe neutrino oscillations without making assumptions about the properties of the source of neutrinos. Because the nuclear power plants have a very precisely known amount of material generating the particles, it is much easier to determine with certainty whether the oscillations are real or not.

Actually, the fission process of the nuclear plants is different from the process in the sun in that the nuclear material breaks apart to form two smaller atoms, plus an electron and an antineutrino (the antimatter equivalent of a neutrino). But matter and antimatter are thought to be mirror-images of each other, so the study of antineutrinos from the beta-decays of the nuclear power plants should be exactly the same as a study of neutrinos.

"This is really a clear demonstration of neutrino disappearance," says McKeown. "Granted, the laboratory is pretty big-it's Japan-but at least the experiment doesn't require the observer to puzzle over the composition of astrophysical sources.

"Willy Fowler [the late Nobel Prize-winning Caltech physicist] always said it's better to know the physics to explain the astrophysics, rather than vice versa," McKeown says. "This experiment allows us to study the neutrino in a controlled experiment."

The results announced this week are taken from 145 days of data. The researchers detected 54 events during that time (an event being a collision of an antineutrino with a proton to form a neutron and a positron, ultimately resulting in a flash of light that can be measured with photon detectors). Theory predicted that about 87 antineutrinos would have been seen during that time if no oscillations occurred, but only about 54, given the reactors' average distance of 175 kilometers from the detector, if the oscillation is a real phenomenon.
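
For readers who want a feel for the size of the effect, the standard two-flavor survival probability reproduces a deficit of roughly this magnitude. The mixing parameters and antineutrino energy below are illustrative values in the range favored by solar-neutrino experiments, not KamLAND's published fit, and the real analysis averages over the reactor spectrum and many baselines.

    import math

    # Two-flavor survival probability: P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    # with dm2 in eV^2, L in km, and E in GeV.
    dm2 = 7e-5            # eV^2, illustrative mass-squared difference
    sin2_2theta = 0.8     # illustrative mixing
    L = 175.0             # km, average reactor distance (from the article)
    E = 0.004             # GeV (~4 MeV, a typical reactor antineutrino energy; assumed)

    P_survive = 1 - sin2_2theta * math.sin(1.27 * dm2 * L / E) ** 2
    print(f"survival probability ~ {P_survive:.2f}")        # ~0.6
    print(f"observed / expected  ~ {54 / 87:.2f}")           # 54 events seen vs ~87 expected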

According to McKeown, the experiment will run about three to five years, with experimentalists ultimately collecting data for several hundred events. The additional information should provide very accurate measurements of the energy spectrum predicted by theory when the neutrinos oscillate.

The experiment may also catch neutrinos if any supernovae occur in our galaxy, as well as neutrinos from natural events in Earth's interior.

In addition to McKeown's team at Caltech's Kellogg Radiation Lab, other partners in the study include the Research Center for Neutrino Science at Tohoku University in Japan, the University of Alabama, the University of California at Berkeley and the Lawrence Berkeley National Laboratory, Drexel University, the University of Hawaii, the University of New Mexico, Louisiana State University, Stanford University, the University of Tennessee, Triangle Universities Nuclear Laboratory, and the Institute of High Energy Physics in Beijing.

The project is supported in part by the U.S. Department of Energy.

 

 
