Astronomers find new evidence about the universe's heaviest phase of star formation

New distance measurements from faraway galaxies further strengthen the view that the strongest burst of star formation in the universe occurred about two billion years after the Big Bang.

Reporting in the April 17 issue of the journal Nature, California Institute of Technology astronomers Scott Chapman and Andrew Blain, along with their United Kingdom colleagues Ian Smail and Rob Ivison, provide the redshifts of 10 extremely distant galaxies which strongly suggest that the most luminous galaxies ever detected were produced over a rather short period of time. Astronomers have long known that certain galaxies can be seen about a billion years after the Big Bang, but a relatively recent discovery of a type of extremely luminous galaxy -- one that is very faint in visible light, but much brighter at longer wavelengths -- is the key to the new results.

This type of galaxy was first found in 1997 using a new and much more sensitive camera for observing at submillimeter wavelengths (longer than the wavelengths of visible light, but somewhat shorter than radio waves). The camera was attached to the James Clerk Maxwell Telescope (JCMT), on Mauna Kea in Hawaii.

Submillimeter radiation is produced by warm galactic "dust" -- micron-sized solid particles similar to diesel soot that are interspersed between the stars in galaxies. Based on their unusual spectra, experts have thought it possible that these "submillimeter galaxies" could be found even closer in time to the Big Bang.

Because the JCMT cannot see details of the sky that are as fine as details seen by telescopes operating at visible and radio wavelengths, and because the submillimeter galaxies are very faint, researchers have had a hard time determining the precise locations of the submillimeter galaxies and measuring their distances. Without an accurate distance, it is difficult to tell how much energy such galaxies produce; and with no idea of how powerful they are, it is uncertain how important such galaxies are in the universe.

The new results combine the work of several instruments, including the Very Large Array in New Mexico (the world's most sensitive radio telescope), and one of the 10-meter telescopes at the W. M. Keck Observatory on Mauna Kea, which are the world's largest optical telescopes. These instruments first pinpointed the position of the submillimeter galaxies, and then measured their distances. Today's article in Nature reports the first 10 distances obtained.

The Keck telescope found the faint spectral signature of radiation that is emitted, at a single ultraviolet wavelength of 0.1215 micrometers, by hydrogen gas excited by either a large number of hot, young stars or by the energy released as matter spirals into a black hole at the core of a galaxy. The radiation is detected at a longer, redder wavelength, having been Doppler shifted by the rapid expansion of the universe while the light has been traveling to Earth.
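For readers who want the arithmetic behind that shift, the short sketch below applies the standard relation between rest-frame and observed wavelength, lambda_observed = lambda_rest x (1 + z). The redshift values used are illustrative examples, not the values reported in the paper.

```python
# Illustrative sketch (not from the paper): how the Lyman-alpha line at
# 0.1215 micrometers shifts to longer wavelengths with redshift z,
# using the standard relation lambda_observed = lambda_rest * (1 + z).

LYMAN_ALPHA_REST_UM = 0.1215  # rest-frame ultraviolet wavelength, micrometers

def observed_wavelength(z: float) -> float:
    """Observed wavelength (micrometers) of Lyman-alpha at redshift z."""
    return LYMAN_ALPHA_REST_UM * (1.0 + z)

# For a hypothetical galaxy at z = 2.5, the line lands in the visible band
# where an optical spectrograph such as Keck's can record it.
for z in (1.0, 2.5, 4.0):
    print(f"z = {z}: Lyman-alpha observed at {observed_wavelength(z):.3f} micrometers")
```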

All 10 of the submillimeter galaxies that were detected emitted the light that we see today when the universe was less than half its present age. The most distant produced its light only two billion years after the Big Bang (12 billion years ago). Thus, the submillimeter galaxies are now confirmed to be the most luminous type of galaxies in the universe, several hundred times more luminous than our Milky Way, and 10 trillion times more luminous than the sun.

It is likely that the formation of such extreme objects had to wait for galaxies of a certain size to grow from an initially almost uniform universe and to become enriched with carbon, silicon, and oxygen from the first stars. The time when the submillimeter galaxies shone brightly can also provide information about how the sizes and makeup of galaxies developed at earlier times.

By detecting these galaxies, the Caltech astronomers have provided an accurate census of the most extreme galaxies in the universe at the peak of their activity and witnessed the most dramatic period of star buildup yet seen in the Milky Way and nearby galaxies. Now that their distances are known accurately, other measurements can be made to investigate the details of their power source, and to find out what galaxies will result when their intense bursts of activity come to an end.

The James Clerk Maxwell Telescope is at http://www.jach.hawaii.edu/JACpublic/JCMT. The Very Large Array is at http://www.aoc.nrao.edu/vla/html. The Keck Observatory is at http://www.astro.caltech.edu/mirror/keck/index.html.

Contact: Robert Tindol (626) 395-3631

Writer: RT

Discovery of giant planar Hall effect could herald a new generation of "spintronics" devices

A basic discovery in magnetic semiconductors could result in a new generation of devices for sensors and memory applications -- and perhaps, ultimately, quantum computation -- physicists from the California Institute of Technology and the University of California at Santa Barbara have announced.

The new phenomenon, called the giant planar Hall effect, has to do with what happens when the spins of current-carrying electrons are manipulated. For several years scientists have been engaged in exploiting electron spin for the creation of a new generation of electronic devices -- hence the term "spintronics" -- and the Caltech-UCSB breakthrough offers a new route to realizing such devices.

The term "spintronics" is used instead of "electronics" because the technology is based on a new paradigm, says Caltech physics professor Michael Roukes. Rather than merely using an electric current to make them work, spintronic devices will also rely on the magnetic orientation (or spin) of the electrons themselves. "In regular semiconductors, the spin freedom of the electrical current carriers does not play a role," says Roukes. "But in the magnetic semiconductors we've studied, the spin polarization -- that is, the magnetism -- of electrical current carriers is highly ordered. Consequently, it can act as an important factor in determining the current flow in the electrical devices."

In the naturally unpolarized state, there is no particular order between one electron's spin and its neighbor's. If the spins are aligned, the result can be a change in resistance to current flow.

Such changes in resistance have long been known for metals, but the current research is the first time that a semiconductor material has been constructed in such a way that the spin-charge interaction is manifested as a very dramatic change in resistivity. The Caltech-UCSB team accomplished this by carefully preparing a ferromagnetic semiconductor material made of gallium manganese arsenide (GaMnAs). The widely used current technology, by contrast, employs sandwiched magnetic metal structures for magnetic storage.

"You have much more freedom with semiconductors than metals for two reasons," Roukes explains. "First, semiconductor material can be made compatible with the mainstream of semiconductor electronics; and second, there are certain phenomena in semiconductors that have no analogies in metals."

Practical applications of spintronics will likely include new paradigms in information storage, due to the superiority of such semiconductor materials to the currently available dynamic random access memory (or DRAM) chips. This is because the semiconductor spintronics would be "nonvolatile," meaning that once the spins were aligned, the system would be as robust as a metal bar that has been permanently magnetized.

The spintronics semiconductors could also conceivably be used in magnetic logic to replace transistors as switches in certain applications. In other words, spin alignment would be used as a logic gate for faster circuits with lower energy usage.

Finally, the technology could possibly be improved so that the quantum states of the spins themselves might be used for logic gates in future quantum computers. Several research teams have demonstrated quantum logic gates, but the setups are the size of an entire laboratory rather than chip-scale, and are therefore still unsuitable for device integration. By contrast, a spintronics-based device might be constructed as a solid-state system that could be integrated into microchips.

A full description of the Caltech-UCSB team's work appeared in the March 14 issue of Physical Review Letters [Tang et al., Vol. 90, 107201 (2003)]. The article is available by subscription, but the main site can be accessed at http://prl.aps.org/. This discovery is also featured in the "News and Views" section of the forthcoming issue of Nature Materials.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Six Caltech Professors Awarded Sloan Research Fellowships

PASADENA, Calif.— Six Caltech professors recently received Alfred P. Sloan Research Fellowships for 2003.

The Caltech recipients in the field of chemistry are Paul David Asimow, assistant professor of geology and geochemistry; and Linda C. Hsieh-Wilson, Jonas C. Peters, and Brian M. Stoltz, assistant professors of chemistry. In mathematics, a Sloan Fellowship was awarded to Danny Calegari, associate professor of mathematics, and in neuroscience, to Athanassios G. Siapas, assistant professor of computation and neural systems.

Each Sloan Fellow receives a grant of $40,000 for a two-year period. The grants of unrestricted funds are awarded to young researchers in the fields of physics, chemistry, computer science, mathematics, neuroscience, computational and evolutionary molecular biology, and economics. The grants are given to pursue diverse fields of inquiry and research, and to allow young scientists the freedom to establish their own independent research projects at a pivotal stage in their careers. The Sloan Fellows are selected on the basis of "their exceptional promise to contribute to the advancement of knowledge."

From over 500 nominees, a total of 117 young scientists and economists from 50 different colleges and universities in the United States and Canada, including Caltech's six, were selected to receive a Sloan Research Fellowship.

Twenty-eight former Sloan Fellows have received Nobel prizes.

"It is a terrific honor to receive this award and to be a part of such a tremendous tradition of excellence within the Sloan Foundation," said Stoltz. Asimow commented that he will use his Sloan Fellowship to "support further investigation into the presence of trace concentrations of water in the deep earth... I'm pleased because funds that are unattached to any particular grant are enormously useful for seeding new and high-risk projects that are not quite ready to turn into proposals." On his research, Peters said, "The Sloan award will provide invaluable seed money for work we've initiated in the past few months regarding nitrogen reduction using molecular iron systems."

The Alfred P. Sloan Research Fellowship program was established in 1955 by Alfred P. Sloan, Jr., who was the chief executive officer of General Motors for 23 years. Its objective is to encourage research by young scholars at a time in their careers when other support may be difficult to obtain. It is the oldest program of the Alfred P. Sloan Foundation and one of the oldest fellowship programs in the country.

Contact: Deborah Williams-Hedges (626) 395-3227 debwms@caltech.edu

Visit the Caltech Media Relations Web site at: http://pr.caltech.edu/media

###

Writer: DWH

Quick action by astronomers worldwide leads to new insights on mysterious gamma-ray bursts

Scientists "arriving quickly on the scene" of an October 4 gamma-ray burst have announced that their rapid accumulation of data has provided new insights about this exotic astrophysical phenomenon. The researchers have seen, for the first time, ongoing energizing of the burst afterglow for more than half an hour after the initial explosion.

The findings support the "collapsar" model, in which the core of a star 15 times more massive than the sun collapses into a black hole. The black hole's spin, or magnetic fields, may be acting like a slingshot, flinging material into the surrounding debris.

The prompt observation—and by far the most detailed to date—was made possible by several ground- and space-based observatories operating in tandem. The blast was initially detected by NASA's High-Energy Transient Explorer (HETE) satellite, and follow-up observations were quickly undertaken by fast-thinking researchers around the globe using ground-based robotic telescopes. The results are reported in the March 20 issue of the journal Nature.

"If a gamma-ray burst is the birth cry of a black hole, then the HETE satellite has just allowed us into the delivery room," said Derek Fox, a postdoctoral researcher at the California Institute of Technology and lead author of the Nature paper. Fox discovered the afterglow, or glowing embers of the burst, using the Oschin 48-inch telescope located at Caltech's Palomar Observatory.

Gamma-ray bursts shine hundreds of times brighter than a supernova, or as bright as a million trillion suns. The mysterious bursts are common, yet random and fleeting. The gamma-ray portion of a burst typically lasts from a few milliseconds to a couple of minutes. An afterglow, caused by shock waves from the explosion sweeping up matter and ramming it into the region around the burst, can linger for much longer, releasing energy in X rays, visible light, and radio waves. It is from the studies of such afterglows that astronomers can hope to learn more about the origins and nature of these extreme cosmic explosions.

This gamma-ray burst, called GRB021004, appeared on October 4, 2002, at 8:06 a.m. EDT. Seconds after HETE detected the burst, an e-mail providing accurate coordinates was sent to observatories around the world, including Caltech's Palomar Observatory. Fox pinpointed the afterglow shortly afterward from images captured by the Oschin Telescope within minutes of the burst, and notified the astronomical community through a rapid e-mail system operated by NASA for the follow-up studies of gamma-ray bursts. Then the race was on, as scientists in California, across the Pacific, Australia, Asia, and Europe employed more than 50 telescopes to zoom in on the afterglow before the approaching sunrise.

At about the same time, the afterglow was detected by the Automated Response Telescope (ART) in Japan, a 20-centimeter instrument located in Wako, a Tokyo suburb, and operated by the Japanese research institute RIKEN. The ART started observing the region a mere 193 seconds after the burst, but it took a few days for these essential observations to be properly analyzed and distributed to the astronomical community.

Analysis of these rapid observations produced a surprise: fluctuations in brightness, which scientists interpreted as evidence for a continued injection of energy into the afterglow, well after the burst occurred. According to Shri Kulkarni, who is the MacArthur Professor of Astronomy and Planetary Science at Caltech, the newly observed energizing of the burst afterglow indicates that the power must have been provided by whatever object produced the gamma-ray burst itself.

"This ongoing energy shows that the explosion is not a simple, one-time event, but that the central source lives for a longer time," said Kulkarni, a co-author of the Nature paper. "This is bringing us closer to a full understanding of these remarkable cosmic flashes."

Added Fox, "In the past we used to be impressed by the energy release in gamma-rays alone. These explosions appear to be more energetic than meets the eye."

Later radio observations undertaken at the Very Large Array in New Mexico and other radio telescopes, including Caltech's Owens Valley Radio Observatory and the IRAM millimeter telescope in France, lend further support to the idea that the explosions continued increasing in energy. "Whatever monster created this burst just refused to die quietly," said D. A. Frail, co-author and a staff astronomer at the Very Large Array.

Fox and his colleagues relied on data from the RIKEN telescope, in Japan, and from the Palomar Oschin Telescope and its Near Earth Asteroid Tracking (NEAT) camera, an instrument that has been roboticized and is currently managed by a team of astronomers at JPL led by Steven Pravdo. The collaboration of the Caltech astronomers and the NEAT team has proven extremely fruitful for the global astronomical community, helping to identify fully 25 percent of the afterglows discovered worldwide since Fox retrofitted the telescope software for this new task in the autumn of 2001.

HETE is the first satellite to provide and distribute accurate burst locations within seconds. The principal investigator for the HETE satellite is George Ricker of the Massachusetts Institute of Technology. HETE was built as a "mission of opportunity" under the NASA Explorer Program, a collaboration among U.S. universities, Los Alamos National Laboratory, and scientists and organizations in Brazil, France, India, Italy, and Japan.

###

Contact: Robert Tindol (626) 395-3631

Writer: RT

Caltech computer scientists develop FAST protocol to speed up Internet

Caltech computer scientists have developed a new data transfer protocol for the Internet fast enough to download a full-length DVD movie in less than five seconds.

The protocol is called FAST, standing for Fast Active queue management Scalable Transmission Control Protocol (TCP). The researchers have achieved a speed of 8,609 megabits per second (Mbps) by using 10 simultaneous flows of data over routed paths, the largest aggregate throughput ever accomplished in such a configuration. More importantly, the FAST protocol sustained this speed using standard packet size, stably over an extended period on shared networks in the presence of background traffic, making it adaptable for deployment on the world's high-speed production networks.
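As a rough check of the "DVD in less than five seconds" claim, the back-of-the-envelope calculation below uses the reported 8,609 Mbps aggregate throughput; the disc capacity of 4.7 gigabytes for a single-layer DVD is an assumption, since the article does not state it.

```python
# Back-of-the-envelope check of the "DVD in less than five seconds" claim.
# The DVD capacity is assumed (4.7 GB, single-layer); the throughput is the
# aggregate figure reported for 10 simultaneous flows.

dvd_bytes = 4.7e9                       # assumed DVD capacity, bytes
throughput_mbps = 8609                  # reported aggregate throughput
throughput_bits_per_s = throughput_mbps * 1e6

download_seconds = dvd_bytes * 8 / throughput_bits_per_s
print(f"Download time at {throughput_mbps} Mbps: {download_seconds:.1f} s")
# Roughly 4.4 seconds, consistent with "less than five seconds."
```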

The experiment was performed last November during the Supercomputing Conference in Baltimore, by a team from Caltech and the Stanford Linear Accelerator Center (SLAC), working in partnership with the European Organization for Nuclear Research (CERN), and the organizations DataTAG, StarLight, TeraGrid, Cisco, and Level(3).

The FAST protocol was developed in Caltech's Networking Lab, led by Steven Low, associate professor of computer science and electrical engineering. It is based on theoretical work done in collaboration with John Doyle, a professor of control and dynamical systems, electrical engineering, and bioengineering at Caltech, and Fernando Paganini, associate professor of electrical engineering at UCLA. It builds on work from a growing community of theoreticians interested in building a theoretical foundation of the Internet, an effort in which Caltech has been playing a leading role.

Harvey Newman, a professor of physics at Caltech, said the FAST protocol "represents a milestone for science, for grid systems, and for the Internet."

"Rapid and reliable data transport, at speeds of one to 10 Gbps and 100 Gbps in the future, is a key enabler of the global collaborations in physics and other fields," Newman said. "The ability to extract, transport, analyze and share many Terabyte-scale data collections is at the heart of the process of search and discovery for new scientific knowledge. The FAST results show that the high degree of transparency and performance of networks, assumed implicitly by Grid systems, can be achieved in practice. In a broader context, the fact that 10 Gbps wavelengths can be used efficiently to transport data at maximum speed end to end will transform the future concepts of the Internet."

Les Cottrell of SLAC added that progress in speeding up data transfers over long distances is critical to progress in various scientific endeavors. "These include sciences such as high-energy physics and nuclear physics, astronomy, global weather predictions, biology, seismology, and fusion; and industries such as aerospace, medicine, and media distribution.

"Today, these activities often are forced to share their data using literally truck or plane loads of data," Cottrell said. "Utilizing the network can dramatically reduce the delays and automate today's labor intensive procedures."

The ability to demonstrate efficient, high-performance throughput using commercial off-the-shelf hardware and applications, standard Internet packet sizes supported throughout today's networks, and modifications to the ubiquitous TCP protocol only at the data sender is an important achievement.

With Internet speeds doubling roughly annually, the performance demonstrated by this collaboration can be expected to become commonly available in the next few years, so the demonstration is important for setting expectations, for planning, and for indicating how to make use of such speeds.

The testbed used in the Caltech/SLAC experiment was the culmination of a multi-year effort, led by Caltech physicist Harvey Newman's group on behalf of the international high energy and nuclear physics (HENP) community, together with CERN, SLAC, Caltech Center for Advanced Computing Research (CACR), and other organizations. It illustrates the difficulty, ingenuity and importance of organizing and implementing leading edge global experiments. HENP is one of the principal drivers and co-developers of global research networks. One unique aspect of the HENP testbed is the close coupling between R&D and production, where the protocols and methods implemented in each R&D cycle are targeted, after a relatively short time delay, for widespread deployment across production networks to meet the demanding needs of data intensive science.

The congestion control algorithm of the current Internet was designed in 1988 when the Internet could barely carry a single uncompressed voice call. The problem today is that this algorithm cannot scale to anticipated future needs, when the networks will be compelled to carry millions of uncompressed voice calls on a single path or support major science experiments that require the on-demand rapid transport of gigabyte to terabyte data sets drawn from multi-petabyte data stores. This protocol problem has prompted several interim remedies, such as using nonstandard packet sizes or aggressive algorithms that can monopolize network resources to the detriment of other users. Despite years of effort, these measures have proved to be ineffective or difficult to deploy.

They are, however, critical steps in the evolution toward ultrascale networks. Sustaining high performance on a global network is extremely challenging and requires concerted advances in both hardware and protocols. Experiments that achieve high throughput either in isolated environments or using interim remedies that bypass protocol instability, idealized or fragile as they may be, push the state of the art in hardware and demonstrate its performance limits. Development of robust and practical protocols will then allow effective use of the most advanced hardware to achieve ideal performance in realistic environments.

The FAST team addresses the protocol issues head-on to develop a variant of TCP that can scale to a multi-gigabit-per-second regime in practical network conditions. The integrated approach that combines theory, implementation, and experiment is what makes their research unique and fundamental progress possible.

Using a standard packet size that is supported throughout today's networks, the current TCP typically achieves an average throughput of 266 Mbps, averaged over an hour, with a single TCP/IP flow between Sunnyvale, near SLAC, and CERN in Geneva, over a distance of 10,037 kilometers. This represents an efficiency of just 27 percent. Under the same experimental conditions, FAST TCP sustained an average throughput of 925 Mbps and an efficiency of 95 percent, a 3.5-fold improvement. With 10 concurrent TCP/IP flows, FAST achieved an unprecedented speed of 8,609 Mbps at 88 percent efficiency; that is 153,000 times the speed of today's modems and close to 6,000 times that of the common standard for ADSL (Asymmetric Digital Subscriber Line) connections.
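The quoted efficiency figures can be roughly reproduced if "efficiency" is read as measured throughput divided by the usable payload capacity of each Gigabit Ethernet path, about 970 Mbps per flow after packet overhead. That reading, and the 970 Mbps figure, are assumptions made for this sketch, not values stated in the article.

```python
# Rough consistency check of the quoted efficiency figures, assuming
# (an assumption, not stated in the article) that "efficiency" means
# measured throughput divided by the usable payload capacity of each
# Gigabit Ethernet path, taken here as roughly 970 Mbps per flow.

USABLE_CAPACITY_MBPS = 970  # assumed per-flow payload capacity after overhead

def efficiency(throughput_mbps: float, flows: int) -> float:
    """Throughput as a fraction of the assumed usable capacity."""
    return throughput_mbps / (flows * USABLE_CAPACITY_MBPS)

print(f"Standard TCP, 1 flow:  {efficiency(266, 1):.0%}")    # ~27%
print(f"FAST TCP,     1 flow:  {efficiency(925, 1):.0%}")    # ~95%
print(f"FAST TCP,    10 flows: {efficiency(8609, 10):.0%}")  # ~89%, close to the quoted 88 percent
```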

The 10-flow experiment sets another first in addition to the highest aggregate speed over routed paths. It is the combination of high capacity and large distance that causes performance problems. Different TCP algorithms can be compared using the product of achieved throughput and the distance of transfer, measured in bit-meter-per-second, or bmps. The world record for the current TCP is 10 peta (1 followed by 16 zeros) bmps, using a nonstandard packet size. The Caltech/SLAC experiment transferred 21 terabytes over six hours between Baltimore and Sunnyvale using standard packet size, achieving 34 peta bmps. Moreover, data was transferred over shared research networks in the presence of background traffic, suggesting that FAST can be backward compatible with the current protocol. The FAST team has started to work with various groups around the world to explore testing and deploying FAST TCP in communities that need multi-Gbps networking urgently.
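The bit-meter-per-second figure can be sanity-checked from the quoted transfer of 21 terabytes over six hours. The network path length in the sketch below is inferred from the numbers, not taken from the article.

```python
# Sketch of the throughput-times-distance metric (bit-meter-per-second)
# using the figures quoted in the article; the path length is inferred.

bytes_transferred = 21e12        # 21 terabytes
duration_s = 6 * 3600            # six hours
throughput_bps = bytes_transferred * 8 / duration_s
print(f"Average throughput: {throughput_bps / 1e9:.1f} Gbps")

target_bmps = 34e15              # 34 peta bit-meter-per-second
implied_path_m = target_bmps / throughput_bps
print(f"Implied network path length: {implied_path_m / 1e3:.0f} km")
# About 7.8 Gbps sustained over a roughly 4,400 km path yields ~34 peta bmps.
```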

The demonstrations used a 10 Gbps link donated by Level(3) between StarLight (Chicago) and Sunnyvale, as well as the DataTAG 2.5 Gbps link between StarLight and CERN, the Abilene backbone of Internet2, and the TeraGrid facility. The network routers and switches at StarLight and CERN were used together with a GSR 12406 router loaned by Cisco at Sunnyvale, additional Cisco modules loaned at StarLight, and sets of dual Pentium 4 servers each with dual Gigabit Ethernet connections at StarLight, Sunnyvale, CERN, and the SC2002 show floor provided by Caltech, SLAC, and CERN. The project is funded by the National Science Foundation, the Department of Energy, the European Commission, and the Caltech Lee Center for Advanced Networking.

One of the drivers of these developments has been the HENP community, whose explorations at the high-energy frontier are breaking new ground in our understanding of the fundamental interactions, structures, and symmetries that govern the nature of matter and space-time in our universe. The largest HENP projects each encompass 2,000 physicists from 150 universities and laboratories in more than 30 countries.

Rapid and reliable data transport, at speeds of 1 to 10 Gbps and 100 Gbps in the future, is a key enabler of the global collaborations in physics and other fields. The ability to analyze and share many terabyte-scale data collections, accessed and transported in minutes, on the fly, rather than over hours or days as is the current practice, is at the heart of the process of search and discovery for new scientific knowledge. Caltech's FAST protocol shows that the high degree of transparency and performance of networks, assumed implicitly by Grid systems, can be achieved in practice.

This will drive scientific discovery and utilize the world's growing bandwidth capacity much more efficiently than has been possible until now.

Writer: RT

Nanodevice breaks 1-GHz barrier

Nanoscientists have achieved a milestone in their burgeoning field by creating a device that vibrates a billion times per second, or at one gigahertz (1 GHz). The accomplishment further increases the likelihood that tiny mechanical devices working at the quantum level can someday supplement electronic devices for new products.

Reporting in the January 30 issue of the journal Nature, California Institute of Technology professor of physics, applied physics, and bioengineering Michael Roukes and his colleagues from Caltech and Case Western Reserve University demonstrate that the tiny mechanism operates at microwave frequencies. The device is a prototype and not yet developed to the point that it is ready to be integrated into a commercial application; nevertheless, it demonstrates the progress being made in the quest to turn nanotechnology into a reality—that is, to make useful devices whose dimensions are less than a millionth of a meter.

This latest effort in the field of NEMS, which is an acronym for "nanoelectromechanical systems," is part of a larger, emerging effort to produce mechanical devices for sensitive force detection and high-frequency signal processing. According to Roukes, the technology could also have implications for new and improved biological imaging and, ultimately, for observing individual molecules through an improved approach to magnetic resonance spectroscopy, as well as for a new form of mass spectrometry that may permit single molecules to be "fingerprinted" by their mass.

"When we think of microelectronics today, we think about moving charges around on chips," says Roukes. "We can do this at high rates of speed, but in this electronic age our mind-set has been somewhat tyrannized in that we typically think of electronic devices as involving only the movement of charge.

"But since 1992, we've been trying to push mechanical devices to ever-smaller dimensions, because as you make things smaller, there's less inertia in getting them to move. So the time scales for inducing mechanical response go way down."

Though a good home computer these days can have a speed of one gigahertz or more, the quest to construct a mechanical device that can operate at such speeds has required multiple breakthroughs in manufacturing technology. In the case of the Roukes group's new demonstration, the use of silicon carbide epilayers to control layer thickness to atomic dimensions and a balanced high-frequency technique for sensing motion that effectively transfers signals to macroscale circuitry have been crucial to success. Both advances were pioneered in the Roukes lab.

Grown on silicon wafers, the films used in the work are prepared in such a way that the end-products are two nearly-identical beams 1.1 microns long, 120 nanometers wide and 75 nanometers thick. When driven by a microwave-frequency electric current while exposed to a strong magnetic field, the beams mechanically vibrate at slightly more than one gigahertz.
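As an order-of-magnitude check, the textbook estimate for the fundamental flexural frequency of a doubly clamped beam can be applied to the quoted dimensions. The silicon carbide material constants and the choice of vibration direction below are assumptions typical of the literature, not values from the paper.

```python
# Order-of-magnitude sketch (not from the paper): textbook estimate of the
# fundamental flexural frequency of a doubly clamped beam,
#     f ~ 1.03 * (d / L**2) * sqrt(E / rho),
# where d is the beam dimension along the vibration direction, L its length,
# E the Young's modulus, and rho the density. The SiC values below are
# assumed, typical literature numbers.

from math import sqrt

E_SIC = 430e9      # assumed Young's modulus of SiC, Pa
RHO_SIC = 3200.0   # assumed density of SiC, kg/m^3
LENGTH = 1.1e-6    # beam length from the article, m

def flexural_frequency(d: float) -> float:
    """Estimated fundamental frequency (Hz) for vibration along dimension d (m)."""
    return 1.03 * (d / LENGTH**2) * sqrt(E_SIC / RHO_SIC)

for label, d in (("out-of-plane (75 nm thick)", 75e-9),
                 ("in-plane (120 nm wide)", 120e-9)):
    print(f"{label}: ~{flexural_frequency(d) / 1e9:.2f} GHz")
# Either direction gives an estimate of order one gigahertz,
# consistent with the reported result.
```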

Future work will include improving the nanodevices to better link their mechanical function to real-world applications, Roukes says. The issue of communicating information, or measurements, from the nanoworld to the everyday world we live in is by no means a trivial matter. As devices become smaller, it becomes increasingly difficult to recognize the very small displacements that occur at much shorter time-scales.

Progress with nanoelectromechanical systems working at microwave frequencies offers the potential for improving magnetic resonance imaging to the extent that individual macromolecules could be imaged. This would be especially important in furthering the understanding of the relationship between, for example, the structure and function of proteins. The devices could also be used in a novel form of mass spectrometry, for sensing individual biomolecules in fluids, and perhaps for realizing solid-state manifestations of the quantum bit that could be exploited for future devices such as quantum computers.

The coauthors of the paper are Xue-Ming (Henry) Huang, a graduate student in physics at Caltech; and Chris Zorman and Mehran Mehregany, both engineering professors at Case Western Reserve University.

Contact: Robert Tindol (626) 395-3631

Writer: RT

Earthbound experiment confirms theory accounting for sun's scarcity of neutrinos

PASADENA, Calif.- In the subatomic particle family, the neutrino is a bit like a wayward red-haired stepson. Neutrinos were detected long ago -- and predicted to exist even longer ago -- but everything physicists know about nuclear processes says a certain number of neutrinos should be streaming from the sun, and nowhere near that many are observed.

This week, an international team has revealed that the sun's lack of neutrinos is a real phenomenon, probably explainable by conventional theories of quantum mechanics, and not merely an observational quirk or something unknown about the sun's interior. The team, which includes experimental particle physicist Robert McKeown of the California Institute of Technology, bases its observations on experiments involving nuclear power plants in Japan.

The project is referred to as KamLAND because the neutrino detector is located at the Kamioka mine in Japan. Properly shielded from radiation from background and cosmic sources, the detector is optimized for measuring the neutrinos from all 17 nuclear power plants in the country.

Neutrinos are produced in the nuclear fusion process, when two protons fuse together to form deuterium, a positron (in other words, the positively charged antimatter equivalent of an electron), and a neutrino. The deuterium nucleus hangs nearby, while the positron eventually annihilates with an electron. The neutrino, being very unlikely to interact with matter, streams away into space.

Therefore, physicists would normally expect neutrinos to flow from the sun in much the same way that photons flow from a light bulb. In the case of the light bulb, the photons (or bundles of light energy) are thrown out radially and evenly, as if the surface of a surrounding sphere were being illuminated. And because the surface area of a sphere increases as the square of the distance, an observer standing 20 feet away sees only one-fourth the photons of an observer standing at 10 feet.
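In code, that inverse-square relationship looks like the trivial illustration below; it is a generic sketch, not part of the study.

```python
# Simple illustration of the inverse-square falloff described above:
# the photon (or neutrino) flux through a sphere of radius r scales as 1/r^2.

def relative_flux(distance: float, reference_distance: float) -> float:
    """Flux at `distance` relative to the flux at `reference_distance`."""
    return (reference_distance / distance) ** 2

print(relative_flux(20.0, 10.0))  # 0.25: twice as far away, one-fourth the photons
```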

Thus, observers on Earth expect to see a given number of neutrinos coming from the sun-assuming they know how many nuclear reactions are going on in the sun-just as they expect to know the luminosity of a light bulb at a given distance if they know the bulb's wattage. But such has not been the case. Carefully constructed experiments for detecting the elusive neutrinos have shown that there are far fewer neutrinos than there should be.

A theoretical explanation for this neutrino deficit is that the neutrino "flavor" oscillates between the detectable "electron" neutrino type, and the much heavier "muon" neutrino and maybe even the "tau" neutrino, neither of which can be detected. Utilizing quantum mechanics, physicists estimate that the number of detectable electron neutrinos is constantly changing in a steady rhythm from 100 percent down to a small percentage and back again.

Therefore, the theory says that the reason we see only about half as many neutrinos from the sun as we should be seeing is because, outside the sun, about half the electron neutrinos are at that moment one of the undetectable flavors.
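The standard two-flavor oscillation formula that underlies this picture is sketched below. The mass splitting, mixing parameter, and sample neutrino energy are illustrative textbook numbers, not results from the KamLAND analysis; only the 175-kilometer average distance comes from the article.

```python
# Minimal sketch of the standard two-flavor survival probability
# (textbook formula; the parameter values below are illustrative):
#     P(survive) = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with dm2 in eV^2, L in meters, and E in MeV.

from math import sin

def survival_probability(L_m: float, E_MeV: float,
                         dm2_eV2: float = 7e-5,       # illustrative mass splitting
                         sin2_2theta: float = 0.8) -> float:  # illustrative mixing
    """Probability that an electron-type (anti)neutrino is still detectable after L meters."""
    return 1.0 - sin2_2theta * sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

# A reactor antineutrino of a few MeV seen at the article's 175-km average distance:
print(f"{survival_probability(175e3, 4.0):.2f}")
# With these illustrative numbers, roughly a third of the electron-type
# antineutrinos would have changed flavor by the time they reach the detector.
```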

The triumph of the KamLAND experiment is that physicists for the first time can observe neutrino oscillations without making assumptions about the properties of the source of neutrinos. Because the nuclear power plants have a very precisely known amount of material generating the particles, it is much easier to determine with certainty whether the oscillations are real or not.

Actually, the fission process of the nuclear plants is different from the process in the sun in that the nuclear material breaks apart to form two smaller atoms, plus an electron and an antineutrino (the antimatter equivalent of a neutrino). But matter and antimatter are thought to be mirror-images of each other, so the study of antineutrinos from the beta-decays of the nuclear power plants should be exactly the same as a study of neutrinos.

"This is really a clear demonstration of neutrino disappearance," says McKeown. "Granted, the laboratory is pretty big-it's Japan-but at least the experiment doesn't require the observer to puzzle over the composition of astrophysical sources.

"Willy Fowler [the late Nobel Prize-winning Caltech physicist] always said it's better to know the physics to explain the astrophysics, rather than vice versa," McKeown says. "This experiment allows us to study the neutrino in a controlled experiment."

The results announced this week are taken from 145 days of data. The researchers detected 54 events during that time (an event being a collision of an antineutrino with a proton to form a neutron and a positron, ultimately resulting in a flash of light that could be measured with photon detectors). Theory predicted that about 87 antineutrino events would have been seen during that time if no oscillations occurred, but only about 54 events, given the reactors' average distance of 175 kilometers, if the oscillation is a real phenomenon.

According to McKeown, the experiment will run about three to five years, with experimentalists ultimately collecting data for several hundred events. The additional information should provide very accurate measurements of the energy spectrum predicted by theory when the neutrinos oscillate.

The experiment may also catch neutrinos if any supernovae occur in our galaxy, as well as neutrinos from natural events in Earth's interior.

In addition to McKeown's team at Caltech's Kellogg Radiation Lab, other partners in the study include the Research Center for Neutrino Science at Tohoku University in Japan, the University of Alabama, the University of California at Berkeley and the Lawrence Berkeley National Laboratory, Drexel University, the University of Hawaii, the University of New Mexico, Louisiana State University, Stanford University, the University of Tennessee, Triangle Universities Nuclear Laboratory, and the Institute of High Energy Physics in Beijing.

The project is supported in part by the U.S. Department of Energy.

 

 

Writer: Jill Perry

Caltech astronomer Jesse Greenstein dies; was early investigator of quasars, white dwarfs

Jesse L. Greenstein, an astrophysicist whose many accomplishments included seminal work on the nature of quasars, died Monday, October 21, 2002, three days after falling and breaking his hip. He was 93.

A native of New York City, Greenstein grew up in a family that actively encouraged his scientific interests. At the age of eight he received a brass telescope from his grandfather—not an unusual gift for an American child, but Greenstein soon was also experimenting in earnest with his own prism spectroscope, an arc, a rotary spark, a rectifier, and a radio transmitter. With the spectroscope he began his lifelong interest in identifying the composition of materials, a passion that would lead to his becoming a worldwide authority on the evolution and composition of stars.

Greenstein entered the Horace Mann School for Boys at the age of 11, and by 16 was a student at Harvard University. After earning his bachelor's degree in 1929 and his master's in 1930, he decided that it would be prudent, in the depths of the Great Depression, to join the family's real estate and finance business in New York. But by 1934 he was back at Harvard, earning his doctorate in 1937.

Greenstein won a National Research Council Fellowship in 1937, which allowed a certain amount of latitude in his place of employment. With the stipend, he chose to join the University of Chicago's Yerkes Observatory at Williams Bay, Wisconsin, remaining there for the duration of the two-year fellowship. In 1939 he joined the University of Chicago astrophysics faculty, and during the war years did military research in optical design at Yerkes. He also spent time at McDonald Observatory, then jointly operated by the University of Chicago and the University of Texas, before accepting an offer from the California Institute of Technology to organize a new graduate program in optical astronomy in conjunction with the new 200-inch Hale Telescope at Palomar Observatory.

The Caltech astronomy program quickly became the premier academic program of its kind in the world, with Greenstein serving as department head from 1948 to 1972. During the 24-year period, he spent more than 1,000 observing nights at Palomar and other major observatories, and also took up radio astronomy in 1955. He was a staff member at Mount Wilson and Palomar Observatories until 1979, when he retired from the Caltech faculty, and remained active in research for many years afterward. He stopped observing in 1983, but continued research on white dwarfs, M dwarfs, and the molecular composition of stars. Despite many chances to become an administrator, he remained a researcher for his entire life.

Greenstein's research interests largely centered on the physics of astronomical objects. In addition to stellar composition, he also worked on the synthesis of chemical elements in stellar interiors, studied the physical processes of radio-emitting sources, worked with Caltech colleague Maarten Schmidt on the high redshift of quasars in 1963, demonstrated that quasars are quite compact objects, and discovered and studied more than 500 white dwarfs. In later years, he studied the magnetic fields of white dwarfs, established their luminosities, and worked on ultraviolet spectroscopy with data obtained from the IUE satellite.

A common thread of his research endeavors, Greenstein wrote, "was that they were pioneering thrusts, attempts to provide first tests of a variety of physical laws under extreme conditions in the inaccessible but convenient experimental laboratories of the stars."

Greenstein was active in the establishment of the National Radio Astronomy Observatory, served as chair of the board of the Association of Universities for Research in Astronomy, and was a former member of the Harvard Board of Overseers. He also played a pivotal role in organizing various national astronomical facilities, serving as chair of the 1970 decadal review of astronomy for the National Research Council (for which the Greenstein Report was issued), and served on the National Academy of Sciences' committee on science, engineering, and public policy.

He was elected to the National Academy of Sciences in 1957.

During his 72-year career in astrophysics, Greenstein was named California Scientist of the Year in 1964, was awarded the NASA Distinguished Public Service Medal in 1974, and the Gold Medal of the Royal Astronomical Society in 1975. He was presented the Centennial Medal by Harvard, and was named to the American Academy of Achievement in 1982.

He is survived by two sons, Peter Greenstein of Oakland, California, and George Greenstein of Amherst, Massachusetts. Naomi Kitay Greenstein, his wife of 68 years, whom he met as a 16-year-old Harvard undergraduate, died earlier this year. The Greensteins were often commended for the warmth and hospitality they extended to astronomers throughout the world. Naomi Greenstein also played a role in building the spirit of the astronomy group at Caltech.

Contact: Robert Tindol (626) 395-3631

Writer: RT

President Bush Nominates Caltech Physicist To National Science Board

Barry Barish, an experimental high-energy physicist at the California Institute of Technology, has been nominated to the National Science Board by President George W. Bush. The White House made the announcement Thursday, October 17.

Barish is the Linde Professor of Physics at Caltech, and since 1997 has been director of the Laser Interferometer Gravitational-Wave Observatory (LIGO) project, a National Science Foundation–funded collaboration between Caltech and MIT for detecting gravitational waves from exotic sources such as colliding black holes. He is a member of the National Academy of Sciences.

The eight new appointees must be approved by the U.S. Senate. If they are accepted, Barish will help oversee the National Science Foundation and advise the president and the congress on a broad range of policy issues related to science, engineering, and education. The 24-member board initiates and conducts studies, presents the results and board recommendations in reports and policy statements to the president and the congress, and makes these documents available to the research and educational communities and the general public.

The board meets in Washington, D.C., at least five times a year, with individual members also serving on committees. The board also publishes the biennial Science and Engineering Indicators.

As a high-energy physicist, Barish has been involved through the years with some of the highest-profile projects in the United States and abroad. A graduate of the University of California at Berkeley, Barish has been at Caltech since 1963. He was leader of one of the large detectors for the Superconducting Supercollider before the project was cancelled, searched for magnetic monopoles in the underground experiment below the Gran Sasso Mountain in Italy, performed several experiments at the Stanford Linear Accelerator Center, and is presently involved in the neutrino experiment inside the Soudan Underground Mine in Minnesota.

He was also responsible for the experiment at Fermilab that provided definitive evidence of the weak neutral current, the linchpin of the electroweak theory for which Sheldon Glashow, Abdus Salam, and Steven Weinberg won the Nobel Prize.

The project he currently leads, the Laser Interferometer Gravitational-Wave Observatory, recently began collecting data in the quest to study gravitational waves, which were predicted long ago by Einstein but thus far have been detected only indirectly. The LIGO project aims not only to demonstrate the existence of gravitational waves within the next few years, but also to pioneer a new type of astrophysical observation by studying exotic objects such as colliding black holes, supernovae, and neutron-star and black-hole interactions.

The National Science Board was created by an act of congress in 1950. Its official mission is to "promote the progress of science; advance the national health, prosperity, and welfare; and secure the national defense."

Contact: Robert Tindol (626) 395-3631

Writer: RT

Caltech researchers devise new microdevice for fluid analysis

Researchers at the California Institute of Technology announced today a new paradigm for large-scale integration of microfluidic devices. Using new techniques, they built chips with as many as 6,000 microvalves and up to 1,000 tiny individual chambers.

The technology is being commercialized by Fluidigm in San Francisco, which is using multi-layer soft lithography (MSL) techniques to create microfluidic chips to run the smallest-volume polymerase chain reactions documented—20,000 parallel reactions at volumes of 100 picoliters.
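For a sense of the miniaturization involved, the quick calculation below (illustrative arithmetic only, not from the announcement) gives the combined volume of all 20,000 parallel reactions.

```python
# Quick scale check: 20,000 parallel PCR reactions at 100 picoliters each.

reactions = 20_000
volume_per_reaction_pl = 100          # picoliters

total_pl = reactions * volume_per_reaction_pl
total_microliters = total_pl * 1e-6   # 1 microliter = 1e6 picoliters
print(f"Total reaction volume: {total_microliters:.0f} microliters")  # about 2 microliters
```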

In a paper to appear in the journal Science, Caltech associate professor of applied physics and physics Stephen Quake and his colleagues describe the research on picoliter-scale chambers. Quake's team describes the 1,000 individually addressable chambers and also demonstrates, on a separate device with more than 2,000 microvalves, that two different reagents can be separately loaded to perform distinct assays in two subnanoliter chambers and that the contents of a single chamber can then be recovered.

According to Quake, who cofounded Fluidigm, the devices should have many new scientific, commercial, and biomedical applications. "We now have the tools in hand to design complex microfluidic systems and, through switchable isolation, recover contents from a single chamber for further investigation."

"Together, these advancements speak to the power of MSL technology to achieve large-scale integration and the ability to make a commercial impact in microfluidics," said Gajus Worthington, President and CEO of Fluidigm. "PCR is the cornerstone of genomics applications. Fluidigm's microprocessor, coupled with the ability to recover results from the chip, offers the greatest level of miniaturization and integration of any platform," added Worthington.

Fluidigm hopes to leverage these advancements as it pursues genomics and proteomics applications. Fluidigm has already shipped a prototype product for protein crystallization that transforms decades-old methodologies to a chip-based format, vastly reducing sample input requirements and improving cost and labor by orders of magnitude.

Contact: Robert Tindol (626) 395-3631

Writer: RT
