Caltech computer scientists develop FAST protocol to speed up Internet

Caltech computer scientists have developed a new data transfer protocol for the Internet fast enough to download a full-length DVD movie in less than five seconds.

The protocol is called FAST, standing for Fast Active queue management Scalable Transmission Control Protocol (TCP). The researchers have achieved a speed of 8,609 megabits per second (Mbps) by using 10 simultaneous flows of data over routed paths, the largest aggregate throughput ever accomplished in such a configuration. More importantly, the FAST protocol sustained this speed using standard packet size, stably over an extended period on shared networks in the presence of background traffic, making it adaptable for deployment on the world's high-speed production networks.

The experiment was performed last November during the Supercomputing Conference in Baltimore, by a team from Caltech and the Stanford Linear Accelerator Center (SLAC), working in partnership with the European Organization for Nuclear Research (CERN), and the organizations DataTAG, StarLight, TeraGrid, Cisco, and Level(3).

The FAST protocol was developed in Caltech's Networking Lab, led by Steven Low, associate professor of computer science and electrical engineering. It is based on theoretical work done in collaboration with John Doyle, a professor of control and dynamical systems, electrical engineering, and bioengineering at Caltech, and Fernando Paganini, associate professor of electrical engineering at UCLA. It builds on the work of a growing community of theoreticians seeking to give the Internet a theoretical foundation, an effort in which Caltech has played a leading role.

Harvey Newman, a professor of physics at Caltech, said the FAST protocol "represents a milestone for science, for grid systems, and for the Internet."

"Rapid and reliable data transport, at speeds of one to 10 Gbps and 100 Gbps in the future, is a key enabler of the global collaborations in physics and other fields," Newman said. "The ability to extract, transport, analyze and share many Terabyte-scale data collections is at the heart of the process of search and discovery for new scientific knowledge. The FAST results show that the high degree of transparency and performance of networks, assumed implicitly by Grid systems, can be achieved in practice. In a broader context, the fact that 10 Gbps wavelengths can be used efficiently to transport data at maximum speed end to end will transform the future concepts of the Internet."

Les Cottrell of SLAC added that progress in speeding up data transfers over long distances is critical to progress in various scientific endeavors. "These include sciences such as high-energy physics and nuclear physics, astronomy, global weather predictions, biology, seismology, and fusion; and industries such as aerospace, medicine, and media distribution.

"Today, these activities often are forced to share their data using literally truck or plane loads of data," Cottrell said. "Utilizing the network can dramatically reduce the delays and automate today's labor intensive procedures."

Demonstrating efficient, high-performance throughput using commercial off-the-shelf hardware and applications, with standard Internet packet sizes supported throughout today's networks, and with modifications to the ubiquitous TCP protocol required only at the data sender, is an important achievement.

With Internet speeds doubling roughly annually, we can expect the performance demonstrated by this collaboration to become commonly available in the next few years, so the demonstration is important for setting expectations, for planning, and for indicating how to make use of such speeds.

The testbed used in the Caltech/SLAC experiment was the culmination of a multi-year effort, led by Caltech physicist Harvey Newman's group on behalf of the international high energy and nuclear physics (HENP) community, together with CERN, SLAC, Caltech Center for Advanced Computing Research (CACR), and other organizations. It illustrates the difficulty, ingenuity, and importance of organizing and implementing leading-edge global experiments. HENP is one of the principal drivers and co-developers of global research networks. One unique aspect of the HENP testbed is the close coupling between R&D and production, where the protocols and methods implemented in each R&D cycle are targeted, after a relatively short time delay, for widespread deployment across production networks to meet the demanding needs of data-intensive science.

The congestion control algorithm of the current Internet was designed in 1988 when the Internet could barely carry a single uncompressed voice call. The problem today is that this algorithm cannot scale to anticipated future needs, when the networks will be compelled to carry millions of uncompressed voice calls on a single path or support major science experiments that require the on-demand rapid transport of gigabyte to terabyte data sets drawn from multi-petabyte data stores. This protocol problem has prompted several interim remedies, such as using nonstandard packet sizes or aggressive algorithms that can monopolize network resources to the detriment of other users. Despite years of effort, these measures have proved to be ineffective or difficult to deploy.

They are, however, critical steps in our evolution toward ultrascale networks. Sustaining high performance on a global network is extremely challenging and requires concerted advances in both hardware and protocols. Experiments that achieve high throughput either in isolated environments or using interim remedies that bypass protocol instability, idealized or fragile as they may be, push the state of the art in hardware and demonstrate its performance limits. Development of robust and practical protocols will then allow us to make effective use of the most advanced hardware to achieve ideal performance in realistic environments.

The FAST team addresses the protocol issues head-on to develop a variant of TCP that can scale to a multi-gigabit-per-second regime in practical network conditions. The integrated approach that combines theory, implementation, and experiment is what makes their research unique and fundamental progress possible.
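
The release does not spell out FAST's control law. In the papers the Caltech group later published, the congestion window is driven by measured queueing delay rather than by packet loss; the sketch below is a minimal, illustrative rendering of that idea, with parameter values chosen for the example rather than taken from the experiment.

```python
# A minimal sketch of a delay-based window update in the spirit of FAST
# TCP (illustrative only; alpha and gamma are example values, not the
# settings used in the SC2002 experiment).

def fast_window_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
    """One periodic update of the congestion window w (in packets).

    base_rtt -- smallest round-trip time seen (propagation-delay proxy)
    rtt      -- current measured round-trip time
    alpha    -- target number of packets kept queued in the network
    gamma    -- smoothing factor in (0, 1]
    """
    target = (base_rtt / rtt) * w + alpha       # equilibrium-seeking term
    w_new = (1.0 - gamma) * w + gamma * target  # smooth toward the target
    return min(2.0 * w, w_new)                  # at most double per update

# When queues are empty (rtt == base_rtt) the window grows briskly; as
# queueing delay builds, growth slows and the window settles where about
# alpha packets are buffered. Loss-based TCP, by contrast, must overflow
# buffers and halve its window to sense congestion, which is what breaks
# down at gigabit speeds over transcontinental distances.
w = 100.0
for _ in range(50):
    w = fast_window_update(w, base_rtt=0.100, rtt=0.120)
print(f"window after 50 updates: {w:.0f} packets")
```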

Using a standard packet size supported throughout today's networks, the current TCP typically achieves an average throughput of 266 Mbps, averaged over an hour, with a single TCP/IP flow between Sunnyvale near SLAC and CERN in Geneva, over a distance of 10,037 kilometers. This represents an efficiency of just 27 percent. FAST TCP sustained an average throughput of 925 Mbps and an efficiency of 95 percent, a 3.5-times improvement, under the same experimental conditions. With 10 concurrent TCP/IP flows, FAST achieved an unprecedented speed of 8,609 Mbps, at 88 percent efficiency; that is 153,000 times the speed of today's modem and close to 6,000 times that of the common standard for ADSL (Asymmetric Digital Subscriber Line) connections.
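
The quoted efficiencies can be checked with simple arithmetic. The release does not state the usable bottleneck capacity; the percentages are consistent with roughly 975 Mbps per flow, which is assumed in this sketch:

```python
# Back-of-the-envelope check of the quoted efficiency figures. The
# per-flow bottleneck capacity is not given in the release; ~975 Mbps
# is assumed here because it reproduces the quoted percentages.

runs = [
    ("current TCP, 1 flow", 266, 1),
    ("FAST TCP, 1 flow",    925, 1),
    ("FAST TCP, 10 flows", 8609, 10),
]

capacity_per_flow = 975  # Mbps (assumption)

for label, mbps, flows in runs:
    efficiency = mbps / (flows * capacity_per_flow)
    print(f"{label}: {mbps} Mbps -> {efficiency:.0%}")
# -> roughly 27%, 95%, and 88%, matching the article's figures
```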

The 10-flow experiment sets another first in addition to the highest aggregate speed over routed paths. It is the combination of high capacity and large distance that causes performance problems. Different TCP algorithms can be compared using the product of achieved throughput and the distance of transfer, measured in bit-meter-per-second, or bmps. The world record for the current TCP is 10 peta (1 followed by 16 zeros) bmps, using a nonstandard packet size. The Caltech/SLAC experiment transferred 21 terabytes over six hours between Baltimore and Sunnyvale using standard packet size, achieving 34 peta bmps. Moreover, data was transferred over shared research networks in the presence of background traffic, suggesting that FAST can be backward compatible with the current protocol. The FAST team has started to work with various groups around the world to explore testing and deploying FAST TCP in communities that need multi-Gbps networking urgently.
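
These figures follow directly from multiplying throughput by distance. The Sunnyvale-Geneva path length is given above as 10,037 kilometers; the Baltimore-Sunnyvale routed distance is not stated in the release, so the roughly 4,000 kilometers that reproduces the quoted 34 peta bmps is assumed below:

```python
# Throughput x distance, the metric used above to compare TCP variants.
# The Baltimore-Sunnyvale routed distance is an assumption (~4,000 km).

single_flow = 925e6 * 10_037e3    # bit/s * m  (Sunnyvale-Geneva path)
ten_flows   = 8_609e6 * 4_000e3   # bit/s * m  (assumed routed distance)
print(f"1 flow  : {single_flow:.1e} bmps")  # ~9.3e15, about 9 peta bmps
print(f"10 flows: {ten_flows:.1e} bmps")    # ~3.4e16, about 34 peta bmps

# Consistency check on the six-hour transfer: 21 terabytes in 6 hours
avg_rate = 21e12 * 8 / (6 * 3600)                # bits per second
print(f"average rate: {avg_rate/1e9:.1f} Gbps")  # ~7.8 Gbps sustained
```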

The demonstrations used a 10 Gbps link donated by Level(3) between StarLight (Chicago) and Sunnyvale, as well as the DataTAG 2.5 Gbps link between StarLight and CERN, the Abilene backbone of Internet2, and the TeraGrid facility. The network routers and switches at StarLight and CERN were used together with a GSR 12406 router loaned by Cisco at Sunnyvale, additional Cisco modules loaned at StarLight, and sets of dual Pentium 4 servers each with dual Gigabit Ethernet connections at StarLight, Sunnyvale, CERN, and the SC2002 show floor provided by Caltech, SLAC, and CERN. The project is funded by the National Science Foundation, the Department of Energy, the European Commission, and the Caltech Lee Center for Advanced Networking.

One of the drivers of these developments has been the HENP community, whose explorations at the high-energy frontier are breaking new ground in our understanding of the fundamental interactions, structures, and symmetries that govern the nature of matter and space-time in our universe. The largest HENP projects each encompass 2,000 physicists from 150 universities and laboratories in more than 30 countries.

Rapid and reliable data transport, at speeds of 1 to 10 Gbps and 100 Gbps in the future, is a key enabler of the global collaborations in physics and other fields. The ability to analyze and share many terabyte-scale data collections, accessed and transported in minutes, on the fly, rather than over hours or days as is the current practice, is at the heart of the process of search and discovery for new scientific knowledge. Caltech's FAST protocol shows that the high degree of transparency and performance of networks, assumed implicitly by Grid systems, can be achieved in practice.

This will drive scientific discovery and utilize the world's growing bandwidth capacity much more efficiently than has been possible until now.

Writer: 
RT

Nanodevice breaks 1-GHz barrier

Nanoscientists have achieved a milestone in their burgeoning field by creating a device that vibrates a billion times per second, or at one gigahertz (1 GHz). The accomplishment further increases the likelihood that tiny mechanical devices working at the quantum level can someday supplement electronic devices for new products.

Reporting in the January 30 issue of the journal Nature, California Institute of Technology professor of physics, applied physics, and bioengineering Michael Roukes and his colleagues from Caltech and Case Western Reserve University demonstrate that the tiny mechanism operates at microwave frequencies. The device is a prototype and not yet developed to the point that it is ready to be integrated into a commercial application; nevertheless, it demonstrates the progress being made in the quest to turn nanotechnology into a reality—that is, to make useful devices whose dimensions are less than a millionth of a meter.

This latest effort in the field of NEMS, which is an acronym for "nanoelectromechanical systems," is part of a larger, emerging effort to produce mechanical devices for sensitive force detection and high-frequency signal processing. According to Roukes, the technology could also have implications for new and improved biological imaging and, ultimately, for observing individual molecules through an improved approach to magnetic resonance spectroscopy, as well as for a new form of mass spectrometry that may permit single molecules to be "fingerprinted" by their mass.

"When we think of microelectronics today, we think about moving charges around on chips," says Roukes. "We can do this at high rates of speed, but in this electronic age our mind-set has been somewhat tyrannized in that we typically think of electronic devices as involving only the movement of charge.

"But since 1992, we've been trying to push mechanical devices to ever-smaller dimensions, because as you make things smaller, there's less inertia in getting them to move. So the time scales for inducing mechanical response go way down."

Though a good home computer these days can have a speed of one gigahertz or more, the quest to construct a mechanical device that can operate at such speeds has required multiple breakthroughs in manufacturing technology. In the case of the Roukes group's new demonstration, the use of silicon carbide epilayers to control layer thickness to atomic dimensions and a balanced high-frequency technique for sensing motion that effectively transfers signals to macroscale circuitry have been crucial to success. Both advances were pioneered in the Roukes lab.

Grown on silicon wafers, the films used in the work are prepared in such a way that the end-products are two nearly-identical beams 1.1 microns long, 120 nanometers wide and 75 nanometers thick. When driven by a microwave-frequency electric current while exposed to a strong magnetic field, the beams mechanically vibrate at slightly more than one gigahertz.
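
The gigahertz figure is roughly what elementary beam mechanics predicts at these dimensions. As a plausibility check (not a calculation from the paper), the fundamental flexural frequency of a doubly clamped beam is about 1.03 (t/L^2) sqrt(E/rho), and assumed textbook values for silicon carbide give:

```python
import math

# Plausibility estimate for a doubly clamped beam's fundamental flexural
# frequency: f ~ 1.03 * (t / L**2) * sqrt(E / rho), with t the thickness
# in the direction of vibration. Not from the paper; the material
# constants for silicon carbide are assumed textbook values.

E   = 450e9     # Young's modulus of SiC, Pa (assumption)
rho = 3200.0    # density of SiC, kg/m^3 (assumption)
L   = 1.1e-6    # beam length, m (from the article)
t   = 75e-9     # beam thickness, m (from the article)

f = 1.03 * (t / L**2) * math.sqrt(E / rho)
print(f"estimated fundamental frequency: {f/1e9:.2f} GHz")  # ~0.8 GHz
```

The one-line model lands within about 25 percent of the measured value, which is as close as such an estimate can promise.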

Future work will include improving the nanodevices to better link their mechanical function to real-world applications, Roukes says. The issue of communicating information, or measurements, from the nanoworld to the everyday world we live in is by no means a trivial matter. As devices become smaller, it becomes increasingly difficult to detect the very small displacements that occur on ever-shorter time scales.

Progress with nanoelectromechanical systems working at microwave frequencies offers the potential for improving magnetic resonance imaging to the extent that individual macromolecules could be imaged. This would be especially important in furthering the understanding of the relationship between, for example, the structure and function of proteins. The devices could also be used in a novel form of mass spectrometry, for sensing individual biomolecules in fluids, and perhaps for realizing solid-state manifestations of the quantum bit that could be exploited for future devices such as quantum computers.

The coauthors of the paper are Xue-Ming (Henry) Huang, a graduate student in physics at Caltech; and Chris Zorman and Mehran Mehregany, both engineering professors at Case Western Reserve University.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Earthbound experiment confirms theory accounting for sun's scarcity of neutrinos

PASADENA, Calif. — In the subatomic particle family, the neutrino is a bit like a wayward red-haired stepson. Neutrinos were detected long ago, and predicted to exist even longer ago, but everything physicists know about nuclear processes says there should be a certain number of neutrinos streaming from the sun, yet there are nowhere near enough.

This week, an international team has revealed that the sun's lack of neutrinos is a real phenomenon, probably explainable by conventional theories of quantum mechanics, and not merely an observational quirk or something unknown about the sun's interior. The team, which includes experimental particle physicist Robert McKeown of the California Institute of Technology, bases its observations on experiments involving nuclear power plants in Japan.

The project is referred to as KamLAND because the neutrino detector is located at the Kamioka mine in Japan. Properly shielded from background and cosmic radiation, the detector is optimized for measuring the neutrinos from all 17 nuclear power plants in the country.

Neutrinos are produced in the nuclear fusion process, when two protons fuse together to form deuterium, a positron (in other words, the positively charged antimatter equivalent of an electron), and a neutrino. The deuterium nucleus hangs nearby, while the positron eventually annihilates with an electron, destroying both. The neutrino, being very unlikely to interact with matter, streams away into space.

Therefore, physicists would normally expect neutrinos to flow from the sun in much the same way that photons flow from a light bulb. In the case of the light bulb, the photons (or bundles of light energy) are thrown out radially and evenly, as if the surface of a surrounding sphere were being illuminated. And because the surface area of a sphere increases with the square of the distance, an observer standing 20 feet away sees only one-fourth the photons of an observer standing at 10 feet.
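
In symbols, this is the inverse-square law implicit in the light-bulb analogy (not spelled out in the article): a source of total power P illuminates a sphere of area 4*pi*r^2, so the flux seen at distance r is

```latex
F(r) = \frac{P}{4\pi r^{2}},
\qquad
\frac{F(20\,\mathrm{ft})}{F(10\,\mathrm{ft})}
  = \left(\frac{10}{20}\right)^{2}
  = \frac{1}{4}.
```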

Thus, observers on Earth expect to see a given number of neutrinos coming from the sun (assuming they know how many nuclear reactions are going on in the sun), just as they expect to know the luminosity of a light bulb at a given distance if they know the bulb's wattage. But such has not been the case. Carefully constructed experiments for detecting the elusive neutrinos have shown that there are far fewer neutrinos than there should be.

A theoretical explanation for this neutrino deficit is that the neutrino "flavor" oscillates between the detectable "electron" neutrino type, and the much heavier "muon" neutrino and maybe even the "tau" neutrino, neither of which can be detected. Utilizing quantum mechanics, physicists estimate that the number of detectable electron neutrinos is constantly changing in a steady rhythm from 100 percent down to a small percentage and back again.
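
In the standard two-flavor description, a textbook formula not quoted in the article, the probability that an electron neutrino of energy E remains detectable as an electron neutrino after traveling a distance L is

```latex
% Two-flavor survival probability (textbook form, not from the article):
P(\nu_e \to \nu_e)
  = 1 - \sin^{2}(2\theta)\,
        \sin^{2}\!\left(
          \frac{1.27\,\Delta m^{2}\,[\mathrm{eV}^{2}]\;L\,[\mathrm{km}]}
               {E\,[\mathrm{GeV}]}
        \right),
```

where theta is the mixing angle and Delta m^2 is the difference of the squared neutrino masses. The oscillating second term is the "steady rhythm" described above.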

Therefore, the theory says that the reason we see only about half as many neutrinos from the sun as we should is that, outside the sun, about half the electron neutrinos are at that moment one of the undetectable flavors.

The triumph of the KamLAND experiment is that physicists for the first time can observe neutrino oscillations without making assumptions about the properties of the source of neutrinos. Because the nuclear power plants have a very precisely known amount of material generating the particles, it is much easier to determine with certainty whether the oscillations are real or not.

Actually, the fission process of the nuclear plants is different from the process in the sun in that the nuclear material breaks apart to form two smaller atoms, plus an electron and an antineutrino (the antimatter equivalent of a neutrino). But matter and antimatter are thought to be mirror-images of each other, so the study of antineutrinos from the beta-decays of the nuclear power plants should be exactly the same as a study of neutrinos.

"This is really a clear demonstration of neutrino disappearance," says McKeown. "Granted, the laboratory is pretty big-it's Japan-but at least the experiment doesn't require the observer to puzzle over the composition of astrophysical sources.

"Willy Fowler [the late Nobel Prize-winning Caltech physicist] always said it's better to know the physics to explain the astrophysics, rather than vice versa," McKeown says. "This experiment allows us to study the neutrino in a controlled experiment."

The results announced this week are taken from 145 days of data. The researchers detected 54 events during that time (an event being a collision of an antineutrino with a proton to form a neutron and a positron, ultimately resulting in a flash of light that could be measured with photon detectors). Theory predicted that about 87 antineutrinos would have been seen during that time if no oscillations occurred, but 54 events at an average distance of 175 kilometers if the oscillation is a real phenomenon.
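
The size of the deficit can be read off in one line: the ratio of observed to expected events is the average survival probability of the antineutrinos over their journey.

```python
# Observed-to-expected ratio = average survival probability of the
# antineutrinos (event counts from the article).
observed, expected = 54, 87
print(f"survival fraction: {observed / expected:.2f}")  # ~0.62
# A ratio well below 1, from a source whose output is precisely known,
# is the "clear demonstration of neutrino disappearance" McKeown cites.
```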

According to McKeown, the experiment will run about three to five years, with experimentalists ultimately collecting data for several hundred events. The additional information should provide very accurate measurements of the energy spectrum predicted by theory when the neutrinos oscillate.

The experiment may also catch neutrinos if any supernovae occur in our galaxy, as well as neutrinos from natural events in Earth's interior.

In addition to McKeown's team at Caltech's Kellogg Radiation Lab, other partners in the study include the Research Center for Neutrino Science at Tohoku University in Japan, the University of Alabama, the University of California at Berkeley and the Lawrence Berkeley National Laboratory, Drexel University, the University of Hawaii, the University of New Mexico, Louisiana State University, Stanford University, the University of Tennessee, Triangle Universities Nuclear Laboratory, and the Institute of High Energy Physics in Beijing.

The project is supported in part by the U.S. Department of Energy.


Writer: 
Jill Perry

Caltech astronomer Jesse Greenstein dies; was early investigator of quasars, white dwarfs

Jesse L. Greenstein, an astrophysicist whose many accomplishments included seminal work on the nature of quasars, died Monday, October 21, 2002, three days after falling and breaking his hip. He was 93.

A native of New York City, Greenstein grew up in a family that actively encouraged his scientific interests. At the age of eight he received a brass telescope from his grandfather—not an unusual gift for an American child, but Greenstein soon was also experimenting in earnest with his own prism spectroscope, an arc, a rotary spark, a rectifier, and a radio transmitter. With the spectroscope he began his lifelong interest in identifying the composition of materials, a passion that would lead to his becoming a worldwide authority on the evolution and composition of stars.

Greenstein entered the Horace Mann School for Boys at the age of 11, and by 16 was a student at Harvard University. After earning his bachelor's degree in 1929 and his master's in 1930, he decided that it would be prudent, in the depths of the Great Depression, to join the family's real estate and finance business in New York. But by 1934 he was back at Harvard, earning his doctorate in 1937.

Greenstein won a National Research Council Fellowship in 1937, which allowed a certain amount of latitude in his place of employment. With the stipend, he chose to join the University of Chicago's Yerkes Observatory at Williams Bay, Wisconsin, remaining there for the duration of the two-year fellowship. In 1939 he joined the University of Chicago astrophysics faculty, and during the war years did military research in optical design at Yerkes. He also spent time at McDonald Observatory, then jointly operated by the University of Chicago and the University of Texas, before accepting an offer from the California Institute of Technology to organize a new graduate program in optical astronomy in conjunction with the new 200-inch Hale Telescope at Palomar Observatory.

The Caltech astronomy program quickly became the premier academic program of its kind in the world, with Greenstein serving as department head from 1948 to 1972. During the 24-year period, he spent more than 1,000 observing nights at Palomar and other major observatories, and also took up radio astronomy in 1955. He was a staff member at Mount Wilson and Palomar Observatories until 1979, when he retired from the Caltech faculty, and remained active in research for many years afterward. He stopped observing in 1983, but continued research on white dwarfs, M dwarfs, and the molecular composition of stars. Despite many chances to become an administrator, he remained a researcher for his entire life.

Greenstein's research interests largely centered on the physics of astronomical objects. In addition to stellar composition, he also worked on the synthesis of chemical elements in stellar interiors, studied the physical processes of radio-emitting sources, worked with Caltech colleague Maarten Schmidt on the high redshift of quasars in 1963, demonstrated that quasars are quite compact objects, and discovered and studied more than 500 white dwarfs. In later years, he studied the magnetic fields of white dwarfs, established their luminosities, and worked on ultraviolet spectroscopy with data obtained from the IUE satellite.

A common thread of his research endeavors, Greenstein wrote, "was that they were pioneering thrusts, attempts to provide first tests of a variety of physical laws under extreme conditions in the inaccessible but convenient experimental laboratories of the stars."

Greenstein was active in the establishment of the National Radio Astronomy Observatory, served as chair of the board of the Association of Universities for Research in Astronomy, and was a former member of the Harvard Board of Overseers. He also played a pivotal role in organizing various national astronomical facilities, serving as chair of the 1970 decadal review of astronomy for the National Research Council (for which the Greenstein Report was issued), and served on the National Academy of Sciences' committee on science, engineering, and public policy.

He was elected to the National Academy of Sciences in 1957.

During his 72-year career in astrophysics, Greenstein was named California Scientist of the Year in 1964, was awarded the NASA Distinguished Public Service Medal in 1974, and received the Gold Medal of the Royal Astronomical Society in 1975. He was presented the Centennial Medal by Harvard, and was named to the American Academy of Achievement in 1982.

He is survived by two sons, Peter Greenstein of Oakland, California, and George Greenstein of Amherst, Massachusetts. Naomi Kitay Greenstein, his wife of 68 years, whom he met as a 16-year-old Harvard undergraduate, died earlier this year. The Greensteins were often commended for the warmth and hospitality they extended to astronomers throughout the world. Naomi Greenstein also played a role in building the spirit of the astronomy group at Caltech.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

President Bush Nominates Caltech Physicist To National Science Board

Barry Barish, an experimental high-energy physicist at the California Institute of Technology, has been nominated to the National Science Board by President George W. Bush. The White House made the announcement Thursday, October 17.

Barish is the Linde Professor of Physics at Caltech, and since 1997 has been director of the Laser Interferometer Gravitational-Wave Observatory (LIGO) project, a National Science Foundation–funded collaboration between Caltech and MIT for detecting gravitational waves from exotic sources such as colliding black holes. He is a member of the National Academy of Sciences.

The eight new appointees must be approved by the U.S. Senate. If confirmed, Barish will help oversee the National Science Foundation and advise the president and Congress on a broad range of policy issues related to science, engineering, and education. The 24-member board initiates and conducts studies, presents the results and board recommendations in reports and policy statements to the president and Congress, and makes these documents available to the research and educational communities and the general public.

The board meets in Washington, D.C., at least five times a year, with individual members also serving on committees. The board also publishes the biennial Science and Engineering Indicators.

As a high-energy physicist, Barish has been involved through the years with some of the highest-profile projects in the United States and abroad. A graduate of the University of California at Berkeley, Barish has been at Caltech since 1963. He was leader of one of the large detectors for the Superconducting Supercollider before the project was cancelled, searched for magnetic monopoles in the underground experiment below the Gran Sasso Mountain in Italy, performed several experiments at the Stanford Linear Accelerator Center, and is presently involved in the neutrino experiment inside the Soudan Underground Mine in Minnesota.

He was also responsible for the experiment at Fermilab that provided definitive evidence of the weak neutral current, the linchpin of the electroweak theory for which Sheldon Glashow, Abdus Salam, and Steven Weinberg won the Nobel Prize.

The project he currently leads, the Laser Interferometer Gravitational-Wave Observatory, recently began collecting data in the quest to study gravitational waves, which were predicted long ago by Einstein but thus far have been detected only indirectly. The LIGO project aims not only to demonstrate the existence of gravitational waves within the next few years, but also to pioneer a new type of astrophysical observation by studying exotic objects such as colliding black holes, supernovae, and neutron-star and black-hole interactions.

The National Science Board was created by an act of Congress in 1950. Its official mission is to "promote the progress of science; advance the national health, prosperity, and welfare; and secure the national defense."

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Caltech researchers devise new microdevice for fluid analysis

Researchers at the California Institute of Technology announced today a new paradigm for large-scale integration of microfluidic devices. Using new techniques, they built chips with as many as 6,000 microvalves and up to 1,000 tiny individual chambers.

The technology is being commercialized by Fluidigm in San Francisco, which is using multi-layer soft lithography (MSL) techniques to create microfluidic chips to run the smallest-volume polymerase chain reactions documented—20,000 parallel reactions at volumes of 100 picoliters.
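
A little arithmetic puts the miniaturization claim in perspective: all 20,000 reactions together occupy only about two microliters of fluid.

```python
# Total fluid volume of the reported PCR run (figures from the article).
reactions = 20_000
volume_pl = 100                            # picoliters per reaction
total_ul = reactions * volume_pl / 1e6     # 1e6 picoliters per microliter
print(f"total reaction volume: {total_ul:.0f} microliters")  # -> 2
```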

In a paper to appear in the journal Science, Caltech associate professor of applied physics and physics Stephen Quake and his colleagues describe the research on picoliter-scale chambers. Quake's team describes the 1,000 individually addressable chambers and also demonstrates, on a separate device with more than 2,000 microvalves, that two different reagents can be loaded separately to perform distinct assays in two subnanoliter chambers and that the contents of a single chamber can then be recovered.

According to Quake, who cofounded Fluidigm, the devices should have many new scientific, commercial, and biomedical applications. "We now have the tools in hand to design complex microfluidic systems and, through switchable isolation, recover contents from a single chamber for further investigation."

"Together, these advancements speak to the power of MSL technology to achieve large-scale integration and the ability to make a commercial impact in microfluidics," said Gajus Worthington, President and CEO of Fluidigm. "PCR is the cornerstone of genomics applications. Fluidigm's microprocessor, coupled with the ability to recover results from the chip, offers the greatest level of miniaturization and integration of any platform," added Worthington.

Fluidigm hopes to leverage these advancements as it pursues genomics and proteomics applications. Fluidigm has already shipped a prototype product for protein crystallization that transforms decades-old methodologies to a chip-based format, vastly reducing sample input requirements and improving cost and labor by orders of magnitude.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

MacArthur Foundation certifies two Caltech professors as geniuses

Two members of the California Institute of Technology faculty have been named MacArthur Fellows, a prestigious honor bestowed each year on innovators in a variety of fields and commonly known as the "genius grants."

Charles Steidel, an astronomer, and Paul Wennberg, an atmospheric scientist, are two of the 24 MacArthur Fellows announced today by the John D. and Catherine T. MacArthur Foundation of Chicago. Each of the 24 recipients will receive a $500,000 "no strings attached" grant over the next five years.

Steidel's expertise is cosmology, a field to which he has made numerous contributions in the ongoing attempt to understand the formation and evolution of galaxies and the development of large-scale structure in the universe. In particular, Steidel is known for the development of a technique that effectively locates early galaxies at prescribed cosmic epochs, allowing for the study of large samples of galaxies in the early universe.

Access to these large samples, which are observed primarily using the Keck telescopes on Mauna Kea on the Big Island of Hawaii, allows for the mapping of the distribution of the galaxies in space and for detailed observations of many individual galaxies. These observations are providing insights into the process of galaxy formation when the universe was only 10 to 20 percent of its current age.

Steidel says he hasn't yet decided what to do with the grant money. "I'm giving it some thought, but I'm still in the disbelief phase—it took me completely by surprise!" he said.

"The unique nature of the fellowship makes me feel like I should put a great deal of thought into coming up with a creative use for the money. It does feel a bit odd to be recognized for work that is by its nature collaborative and dependent on the hard work of many people, but at the same time I am very excited by the possibilities!"

A graduate of Princeton University and the California Institute of Technology, Steidel was a faculty member at MIT before returning to Caltech, where he is now a professor of astronomy. He is also a past recipient of fellowships from the Sloan and Packard foundations, and received a Young Investigator Award from the National Science Foundation in 1994. In 1997 he was presented the Helen B. Warner Prize by the American Astronomical Society for his significant early-career contributions to astronomy.

Wennberg holds joint appointments as a professor of atmospheric chemistry and a professor of environmental science and engineering. A specialist in how both natural and human processes affect the atmosphere, Wennberg is particularly interested in measuring a class of substances known as radicals and how they enter into atmospheric chemical reactions. These radicals are implicated in processes that govern the health of the ozone layer as well as the presence of greenhouse gases.

Wennberg has earned recognition in the field for developing airborne sensors to study radicals and their chemistry. One of the early scientific results from these measurements demonstrated that conventional thinking was incorrect about how ozone is destroyed in the lower stratosphere, affecting assessments of the environmental impacts of chlorofluorocarbons and stratospheric aircraft.

Wennberg said he was "blown over by the award" when he received notification. "It is a wonderful recognition of the work that I have done in association with the atmospheric scientists working on NASA's U-2 aircraft chemistry program."

"I have been pondering how I might use the funds, but have no concrete plans at the moment. It will certainly enable me to do things I wouldn't have thought possible—perhaps even take up the bassoon again! "

A graduate of Oberlin College and Harvard University, Wennberg was a research associate at Harvard before joining the Caltech faculty. In 1999 he was named recipient of a Presidential Early Career Award in Science and Engineering.

Writer: 
RT

Five Caltech Faculty Members Elected to Membership in the American Academy of Arts and Sciences

PASADENA, Calif. — The American Academy of Arts and Sciences has announced that five members of the Caltech faculty have been elected to membership in the academy for contributions to their respective scientific fields.

The Caltech faculty members who have been elected are Richard Andersen, Boswell Professor of Neuroscience; David Anderson, professor of biology and investigator with the Howard Hughes Medical Institute (HHMI); Ronald Drever, professor of physics, emeritus; Mary Kennedy, Davis Professor of Biology; and Mark Wise, McCone Professor of High Energy Physics.

Richard Andersen is receiving recognition for his work in the fields of neuroscience, cognitive science, and behavioral biology. With the assistance of his postdoctoral and graduate students, he has examined the functions of the brain in relation to seeing, hearing, orientation, balance, and movement planning.

The author of more than 130 scholarly articles on the functions of the brain, Andersen has been honored with the Spencer Award from Columbia University, the McKnight Foundation Scholars Award, a Sloan Foundation Fellowship, a Regent's Fellowship, and an Abraham Rosenberg Fellowship, and is a Fellow of the American Association for the Advancement of Science.

David Anderson is being honored for his work in the fields of neurobiology, developmental biology, and genetics, where he has been able to make advances in stem-cell research that he hopes will eventually help fight brain diseases and spinal-cord injuries. Anderson has also made important discoveries in the field of angiogenesis, the study of blood vessel formation.

Anderson, who has authored more than 140 scholarly publications in the field of genetics and neuroscience, has also been honored with the Searle Scholars Award, the Charles Judson Herrick Award in Comparative Neurology, and the W. Alden Spencer Award in Developmental Neurobiology from Columbia University. His current affiliations include the American Association for the Advancement of Science, the Society for Neuroscience, and the Neuron editorial board.

Ronald Drever is being recognized for his work relating to gravitational physics and for his pioneering research on gravitational radiation detection. His group carried out early searches for gravitational waves, and he was cofounder of the Laser Interferometer Gravitational-Wave Observatory, a project shared by Caltech and MIT. Drever invented many of the techniques in gravitational-wave detection, including a high-precision method for controlling laser frequency now widely used in many science and technology applications.

Drever is a Fellow of the American Physical Society and is a former vice president of the Royal Astronomical Society.

Mary Kennedy is being honored for her contributions to the field of brain biochemistry and the mechanisms of learning and memory. Her research group is studying the effects of proteins in the brain and their relation to how memories are stored.

Kennedy holds numerous memberships, has received several grants, and has published a number of scientific works. Her honors include a McKnight Neuroscience Development Award, and she is an elected councilor of the Society for Neuroscience. Kennedy has also received a Faculty Award for Women Scientists and Engineers, and she is a member of the scientific advisory boards of the Hereditary Disease Foundation and the French Foundation for Alzheimer Research.

Mark Wise is receiving membership for his work in the field of high-energy physics, where he has advanced understanding of the essential characteristics of particles and how they interact with each other to create the physical world.

Wise has been the recipient of a Sloan Foundation research grant and the Sakurai Prize, which reflects the admiration of his peers for his work and accomplishments in his field. Wise is also a member of the American Physical Society.

Founded in 1780 in Cambridge, Massachusetts, the American Academy of Arts and Sciences serves as a hub for complex study and discussion of multidisciplinary problems. This year, the academy elected 177 fellows and 30 foreign honorary members.

CONTACT: Ken Watson, Media Relations (626) 395-3227
Visit the Caltech Media Relations Web site: http://pr.caltech.edu/media

Writer: 
KW

Researchers make progress in understanding the basics of high-temperature superconductivity

High-temperature superconductors have long been the darlings of materials science because they can transfer electrical current with no resistance or heat loss. Already demonstrated in technologies such as magnetic sensors, magnetic resonance imaging (MRI), and microwave filters in cellular-phone base stations, superconductors are potentially one of the greatest technological triumphs of the modern world if they could just be made to operate more reliably at higher temperatures. But getting there will probably require a much better understanding of the basic principles of superconductivity at the microscopic level.

Now, physicists at the California Institute of Technology have made progress in understanding at a microscopic level how and why high-temperature superconductivity can occur. In a new study appearing in the June 3 issue of Physical Review Letters, Caltech physics professor Nai-Chang Yeh and her colleagues report on the results of an atomic-scale microprobe revealing that the only common features among many families of high-temperature superconductors are paired electrons moving in tandem in a background of alternately aligned quantum magnets. The paper eliminates many other possibilities that have been suggested for explaining the phenomenon.

Yeh and her collaborators from Caltech, the Jet Propulsion Laboratory, and Pohang University of Science and Technology in Korea report on their findings on "strongly correlated s-wave superconductivity" in the simplest form of ceramic superconductors, which are based on copper oxides, or cuprates. The paper differentiates the behavior of the two basic types of high-temperature superconductors that have been studied since the mid-1980s—the "electron doped" type that contains added electrons in its lattice-work, and the "hole-doped" type that has open slots for electrons.

The cuprate materials were discovered to be superconductors in the 1980s, thereby instantaneously raising the temperature at which superconductivity could be demonstrated in the lab. This allowed researchers to produce devices that could be cooled to superconductivity with commonly available liquid nitrogen, which is used in a huge variety of industrial processes throughout the world. Before the high-temperature superconducting materials were discovered, experts could achieve superconductivity only by cooling the materials with liquid helium, which is much more expensive and difficult to make.

The arrival of high-temperature superconductivity heralded speculation on novel applications and machines, including virtually frictionless high-speed magnetically levitated trains, as well as power transmission at a fraction of the current cost. Indeed the progress of the 1980s led to demonstrations of technologies such as magnetic sensors, microwave filters, and small-scale electronic circuits that could potentially increase the speed of computers by many thousands of times.

A certain amount of progress has been made since the high-temperature superconductors were discovered, and researchers remain optimistic that even the current generation may be adequate for such futuristic devices as extremely high-speed computers, provided that other technological hurdles can be overcome. But a primary roadblock to rapid progress has been and continues to be a limited understanding of precisely how high-temperature superconductivity works at the microscopic level.

A better fundamental understanding would allow researchers to better determine which materials to use in applications, which manufacturing procedures to employ, and possibly how to design new cuprates with higher superconducting transition temperatures. This is important because researchers would have a better idea of the molecular architecture most essential to the desired properties.

In this sense, Yeh and her colleagues' new paper is a step toward a more fundamental understanding of the phenomenon. "The bottom line is that we can eliminate a lot of things people thought were essential for high-temperature superconductivity," she says. "I feel that we have narrowed down the possibilities for the mechanism."

More specifically, the type of cuprate investigated by the Caltech team has the simplest form among all cuprate superconductors, with a structure consisting of periodic stacks of one copper oxide layer followed by one layer of metal atoms. This structure differs from that of all other cuprates in that the multiple layers of complex components between consecutive copper oxide layers are absent; the copper oxide layers themselves are known to be the building blocks of high-temperature superconductivity.

This unique structure appears to have a profound effect on the superconducting properties of the cuprate, resulting in a more three-dimensional "s-wave pairing symmetry" for the tandem motion of electrons in the simplest cuprate, in contrast to the more two-dimensional "d-wave pairing symmetry" in most other cuprates. This finding eliminates the commonly accepted notion that d-wave pairing may be essential to the occurrence of high-temperature superconductivity.

Another new finding is the absence of the "pseudogap phenomenon," the existence of which would imply that electrons or holes could begin to form pairs at relatively high temperatures, although these pairs could not move in tandem until the temperature fell below the superconducting transition temperature. The pseudogap phenomenon is quite common in many cuprates, and physicists have long speculated that its existence may be of fundamental importance. The absence of a pseudogap, as found in the simplest form of cuprates, can now effectively rule out theories for high-temperature superconductivity based on the pseudogap phenomenon.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Researchers make plasma jets in the lab that closely resemble astrophysical jets

Astrophysical jets are one of the truly exotic sights in the universe. They are usually associated with accretion disks, which are disks of matter spiraling into a central massive object such as a star or a black hole. The jets are very narrow and shoot out along the disk axis for huge distances at incredibly high speeds.

Jets and accretion disks have been observed to accompany widely varying types of astrophysical objects, ranging from proto-star systems to binary stars to galactic nuclei. While the mechanism for jet formation is the subject of much debate, many of the proposed theoretical models predict that jets form as the result of magnetic forces.

Now, a team of applied physicists at the California Institute of Technology has brought this seemingly remote phenomenon into the lab. By using technology originally developed for creating a magnetic fusion configuration called a spheromak, they have produced plasmas that incorporate the essential physics of astrophysical jets. (Plasmas are ionized gases and are excellent electrical conductors; everyday examples of plasmas are lightning, the northern lights, and the glowing gas in neon signs.)

Reporting in an upcoming issue of the Monthly Notices of the Royal Astronomical Society, Caltech professor of applied physics Paul Bellan and postdoctoral scholar Scott Hsu describe how their work helps explain the magnetic dynamics of these jets. By placing two concentric copper electrodes and a coaxial coil in a large vacuum vessel and driving huge electric currents through hydrogen plasma, these scientists have succeeded in producing jet-like structures that not only resemble those in astronomical images, but also develop remarkable helical instabilities that could help explain the "wiggled" structure observed in some astrophysical jets.

"Photographs clearly show that the jet-like structures in the experiment form spontaneously," says Bellan, who studies laboratory plasma physics but chanced upon the astrophysical application when he was looking at how plasmas with large internal currents can self-organize. "We originally built this experiment to study spheromak formation, but it also dawned on us that the combination of electrode structure, applied magnetic field, and applied voltage is similar to theoretical descriptions of accretion disks, and so might produce jet-like plasmas."

The theory Bellan refers to states that jets can be formed when magnetic fields are twisted up by the rotation of accretion disks. Magnetic field lines in plasma are like elastic bands frozen into jello. The electric currents flowing in the plasma (jello) can change the shape of the magnetic field lines (elastic bands) and thus change the shape of the plasma as well. Magnetic forces associated with these currents squeeze both the plasma and its embedded magnetic field into a narrow jet that shoots out along the axis of the disk.

By applying a voltage differential across the gap between the two concentric electrodes, Bellan and Hsu effectively simulate an accretion disk spinning in the presence of a magnetic field. The coil produces magnetic field lines linking the two concentric electrodes in a manner similar to the magnetic field linking the central object and the accretion disk.

In the experiment an electric current of about 100 kiloamperes is driven through the tenuous plasma, resulting in two-foot-long jet-like structures traveling at approximately 90 thousand miles per hour. More intense currents cause a jet to become unstable so that it deforms into a theoretically predicted helical shape known as a kink. Even greater currents cause the kinked jets to break off and form a spheromak. The jets last about 5 to 10 millionths of a second, and are photographed with a special high-speed camera.

"These things are very scalable, which is why we're arguing that the work applies to astrophysics," Bellan explains. "If you made the experiment the size of Pasadena, for example, the jets might last one second; or if it were the size of the earth, they would last about 10 minutes. But obviously, that's impractical."

The importance of the study, Bellan and Hsu say, is that it provides compelling evidence in support of the idea that astrophysical jets are formed by magnetic forces associated with rotating accretion disks, and it also provides quantitative information on the stability properties of these jets.

The work was supported by a grant from the U.S. Department of Energy.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT
