Caltech-MIT Team Finds 35% Improvement in Florida's Voting Technology

PASADENA, Calif. — If one measures election success by equipment performance alone, Florida's push to get new voting equipment on-line for the 2002 election appears to have paid off.

Compared with the performance of equipment in past Florida state primary elections, the new technologies for casting and counting ballots look like clear improvements according to experts at the California Institute of Technology and the Massachusetts Institute of Technology.

Researchers from the Caltech/MIT Voting Technology Project calculated the rate of residual votes (ballots on which no votes or too many votes were recorded) for the largest counties in Florida for the 2002 Democratic Gubernatorial Primary and for the last three Gubernatorial General Elections in Florida (1990, 1994, and 1998). These counties are Brevard, Broward, Duval, Hillsborough, Miami-Dade, Palm Beach, and Pinellas.

The residual vote rate, it appears, has been substantially reduced as a result of the election reform efforts of the past year. On average, 2.0 percent of Democratic voters recorded no vote for governor in these seven counties. In past elections, the average has been 3.1 percent. This is a 35 percent improvement in performance.
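The 35 percent figure is the relative reduction in the residual vote rate. A minimal sketch of the arithmetic, using the two seven-county averages quoted above:

```python
# Back-of-the-envelope check of the headline figure (illustrative only).
# Residual vote rate = ballots recording no valid vote for governor
# (undervotes plus overvotes) divided by ballots cast.
rate_2002_primary = 0.020    # seven-county average, 2002 Democratic primary
rate_past_generals = 0.031   # seven-county average, past gubernatorial generals

improvement = (rate_past_generals - rate_2002_primary) / rate_past_generals
print(f"relative improvement: {improvement:.0%}")   # -> 35%
```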

The largest apparent improvements came in Brevard and Duval counties, which switched from punch cards to optically scanned paper ballots. The remaining counties purchased new touch screen or Direct Recording Electronic (DRE) machines. All of the counties show some improvement in their capacity to record and count votes.

Residual Vote Rates for Governor in the 7 Largest Florida Counties

| County | 2002 Democratic Primary | Voting Equipment, 2002 | Voting Equipment, 1998 | Ave. 1990-98 | 1998 General | 1994 General | 1990 General |
|--------------|------|---------|-------|------|------|------|------|
| Brevard      | 1.0% | Scanner | Punch | 4.2% | 2.6% | 4.5% | 5.4% |
| Broward      | 2.0  | DRE     | Punch | 2.6  | 2.7  | 1.9  | 3.3  |
| Dade         | 3.0  | DRE     | Punch | 3.2  | 4.0  | 2.7  | 3.2  |
| Duval        | 2.2  | Scanner | Punch | 3.4  | 3.1  | 2.5  | 4.5  |
| Hillsborough | 1.6  | DRE     | Punch | 2.3  | 2.7  | 1.9  | N/A  |
| Palm Beach   | 2.3  | DRE     | Punch | 3.1  | 3.7  | 2.3  | 3.3  |
| Pinellas     | 1.9  | DRE     | Punch | 2.2  | 2.3  | 1.9  | 2.3  |
| Total        | 2.0  |         |       | 3.1  |      |      |      |

All figures other than the equipment columns are residual vote rates, in percent.

"These results are very encouraging," said Stephen Ansolabehere, a professor at the Massachusetts Institute of Technology and co-director of the project. "Florida made a major effort to upgrade its technology and, in the primary, the machines used showed clear gains over the technologies in past elections."

Charles Stewart, another MIT professor working on the Voting Technology Project, cautions that "the success of an election cannot be measured solely in terms of equipment performance. Current events in Florida also illustrate how better technology is just a first step in improving the functioning of democracy." Stewart said, "Most of the problems reported by journalists covering the 2002 Primary Elections in Florida did not concern equipment malfunctions, but problems encountered preparing for election day, such as training poll workers."

R. Michael Alvarez, co-director of the Voting Technology Project and professor of political science at the California Institute of Technology, said, "As counties and states across the country, especially here in California, plan out similar changes, we are learning important lessons about how to make such important changes in voting technologies."

"The one distressing thing, though, are the reports from Florida that polling place workers had difficulties getting some of the new voting machines up and running on election day in Florida, and that as a result, some voters might have been turned away from the polling places. These reports reinforce our calls for more polling place workers and better training of polling place workers, as they provide a critical role in making sure that all votes are counted," Alvarez said.

MIT's Stewart adds, "The fact that the congressional election reform bill is currently stalled in a House-Senate conference committee hasn't helped matters any."

The Caltech/MIT Voting Technology Project is a non-partisan research project, formed to study election systems following the 2000 presidential election and sponsored by the Carnegie Corporation. More information and copies of reports are available at www.vote.caltech.edu.

###

MEDIA CONTACT: Jill Perry, Caltech Media Relations Director, (626) 395-3226, jperry@caltech.edu

Sarah Wright or Ken Campbell, MIT News Office, (617) 253-2700, shwright@mit.edu

Visit the Caltech media relations web site: http://pr.caltech.edu/media

Writer: JP

Caltech to be part of $15.5-million federal grant to understand how living cells communicate

A California Institute of Technology research group that specializes in distributed information systems has been named one of the collaborators in the Alpha Project, a $15.5-million, five-year program for advancing knowledge of how living cells respond to information and communicate with each other.

The Caltech research group is headed by Jehoshua Bruck, who is the Gordon and Betty Moore Professor of Computation and Neural Systems and Electrical Engineering at Caltech. Bruck will receive more than $1 million of the $15.5 million Alpha Project grant, which has been awarded by the National Institutes of Health's National Human Genome Research Institute to the Molecular Sciences Institute, which will oversee the coordinated effort. The Alpha Project will also involve research groups from MIT, UC Berkeley, and Pacific Northwest National Laboratory.

The aim of the Alpha Project is to "enable broad understanding of cellular and organismic behavior and work towards making predictive models of biological systems that will serve to radically improve researchers' abilities to understand biology by providing them with advanced methods and tools to probe important biological questions." Bruck's part of the work will be in the analysis, abstraction, and modeling of cellular signal transduction.

"Basically, signal transduction is the regulatory process that controls how a cell communicates with other cells, or senses things in its environment," Bruck explains. "We'll study this chemical signal processing in baker's yeast cells, which are very similar to human cells in the way they carry out signal transduction. Hence, yeast will serve as our 'model system.'"

The entire Alpha Project will be focused on studying the pheromone signal pathway in baker's yeast. This biological pathway involves relatively few genes, about 25, so it is hoped that thoroughly understanding the system will provide new insights into how cells respond to stimuli and communicate in humans.

Bruck has interacted for the last two years with the Molecular Sciences Institute on creating computer algorithms for simulating biological regulatory systems, which is similar to the work he will do on the Alpha Project. "We are not planning to conduct experiments with yeast in my lab," he says. "Our part will be to model the whole process, and create simulations to try to predict the behavior of the biological system.

"Also, we plan to learn from biology about new principles in circuits for computation and communications, because at present, we simply don't know how to build artificial systems that compute, communicate, and evolve like biological cells."

Success for the overall mission of the Alpha Project will mean advances that could lead to new ways of dealing with diseases such as cancer and diabetes.

"At the least, we'll definitely understand this communication pathway in cells," Bruck says of the biological goals. "And if we are able to understand the mechanisms in a way that leads to advances in curing diseases, and this information can also be applied to engineering systems, it would be even better."

Much of Bruck's research focuses on distributed information systems, which he defines as "a system comprising more than a single entity, such as a group of computing devices that interact by a wired or wireless communication network."

The Alpha Project will be the flagship project at the Molecular Sciences Institute's new Center for Genomic Experimentation and Computation.

The Molecular Sciences Institute, headquartered in Berkeley, is an independent, nonprofit research laboratory that combines genomic experimentation with computer modeling. The mission of the institute is to predict the behavior of cells and organisms in response to defined genetic and environmental changes. Progress toward this goal will significantly increase our understanding of biological systems and help catalyze radical changes in how diseases are understood and treated.

In addition to the new $15.5-million funding from the National Human Genome Research Institute, the Molecular Sciences Institute is supported by other federal grants and funds provided by foundations and corporations. The institute's Web address is www.molsci.org.

Contact: Robert Tindol (626) 395-3631

Writer: RT

Brain oscillations compress odor representations as signals pass through olfactory networks

Most natural smells are complex blends of many individual chemicals. Freshly ground coffee, for example, contains about 300 individual volatile components. A typical perfume also contains tens of ingredients, although the recipes are tightly locked in secret vaults.

The percepts that such complex blends evoke in us are, however, astonishingly singular: ground coffee smells like coffee, not like a hopeless mess of hundreds of ingredients; Gio or Allure also have unique signatures (often associated with other memories). This contrast between a physical object's complexity and the uniqueness of how we perceive it is the expression of what brains do best: "bind" features together into highly recognizable patterns.

This is as true of smell as it is for the other senses: a person can immediately recognize his mother's face or voice. How this useful and effortless compression of information is accomplished by the brain is one of the deep mysteries of neuroscience. In addition, understanding the mechanisms of compression would help in the design of computerized pattern recognizers (e.g., face recognition devices), a very difficult task with many important applications.

This issue is the subject of new research from neurobiologist Gilles Laurent and his team in the California Institute of Technology's computation and neural systems program. In a paper appearing in the July 19 issue of the journal Science, the Laurent team reports that the complicated wiring in grasshoppers between the antennal lobe (the insect analog of the olfactory bulb in humans) and the mushroom body (the insect analog of the olfactory cortex) is arranged and functions in such a way that highly detailed information in the former is bound or compressed for future memory use in the latter.

To better explain the details of their discovery, it helps to first describe the organs of smell in grasshoppers. The first stage consists of the receptor cells, the front line of cells coming into contact with the chemical components of a smell. Information from the receptor neurons converges in the antennal lobe, which, in grasshoppers, comprises about 1,000 individual neurons. The signal then travels to the mushroom body, which comprises about 50,000 neurons.

By intricately wiring glass, silicon, and platinum wire electrode arrays into the brains of hundreds of grasshoppers to record activity from their neurons, then exposing the insects to a variety of smells, the Laurent team has demonstrated some of the fine details of this wiring and its consequences for odor encoding.

When a specific odor is detected by a grasshopper, the antennal lobe neurons, wired to the peripheral detector array, start a complicated "dance" that engages about half of its 1,000 neurons. Each individual odor evokes a different dance or spatio-temporal pattern that involves partially overlapping subsets of neurons activated at varying times. Hence, determining from these patterns the odor's identity is a very difficult task; it requires that an observer decode the details of the dance, identify the correlations between the activities of all the neurons, and put all this back together into a coherent whole. Said differently, the informative value of any antennal lobe neuron in isolation is close to zero: valuable information comes only from deciphering the message carried by the population.

Population decoding is precisely what is done by the downstream neurons (called Kenyon cells, in the mushroom body). Those neurons, using a complicated combination of wiring, biophysical properties, brain oscillations, and loops of inhibition, manage to compress the information carried by many antennal lobe neurons into highly specific and sparse signals. Thus, individual Kenyon cells are silent most of the time and produce a signal only in response to very specific odors. The signals from these neurons, when given out, are thus highly informative, Laurent says.
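The step from a dense, distributed code to a sparse, highly selective one can be caricatured in a few lines. The sketch below is illustrative only: random wiring plus a high firing threshold stands in for the paper's oscillation-gated coincidence detection, and all the numbers except the 1,000 and 50,000 neuron counts are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AL, N_KC = 1_000, 50_000   # antennal lobe neurons, Kenyon cells

# Dense code: each odor activates roughly half the antennal lobe.
odor_a = rng.random(N_AL) < 0.5
odor_b = rng.random(N_AL) < 0.5

# Each Kenyon cell samples a small random subset of antennal lobe neurons
# and fires only when nearly all of its inputs are active.
n_inputs, threshold = 12, 11
wiring = rng.integers(0, N_AL, size=(N_KC, n_inputs))

def kenyon_response(odor):
    return odor[wiring].sum(axis=1) >= threshold

resp_a, resp_b = kenyon_response(odor_a), kenyon_response(odor_b)
print(f"Kenyon cells active per odor: {resp_a.mean():.2%}")          # sparse, well under 1%
print(f"cells responding to both odors: {(resp_a & resp_b).sum()}")  # nearly none
```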

"If you observe Kenyon cell No. 2,976 and see that it produced one single pulse, you can be pretty confident that the animal has just detected a certain odor mixture and not another," he explains. Each Kenyon cell thus has a very limited, but highly specific repertoire of "preferred stimuli."

At the same time, this compression eliminates much of the information about the individual chemical elements that make up an odor. "Knowing that Kenyon cell No. 2,976 fired may tell me that the (grasshopper) just smelled a cherry blend, but it tells me nothing about the chemical composition of that smell."

This may explain why these individual elements cannot be perceived; the encoding and decoding of an odor as a whole (cherry or Gio) is done at a cost: detail is lost. The advantage, however, is that the storage and retrieval of this odor's representation has become very simple, fast, and manageable: Each odor, however complex, is now represented by very few, highly specific neurons. Because the mushroom body has many neurons (and our olfactory cortex has even more), a huge number of such memories can be stored.

"There are many reasons to think that odor perception may work in similar ways in vertebrates, including humans," Laurent says, explaining that the antennal lobe in insects, including flies, is very similar to the mammalian olfactory bulb, except that it possesses many fewer cells; and that the mushroom body is likewise similar to the human olfactory cortex.

"In the case of humans as well as animals, the brain is not doing analytical chemistry by pulling out individual components," he says. "Instead, you have a very good memory for odors, however complex, even though you lose information about details. After all, throwing away information is one of the most important things that brains do, but it must be done carefully.

"In olfaction as well as in vision and the other senses, the brain must represent and memorize a huge number of complicated patterns. One should expect that evolution has found an optimal way of solving this task. Our work provides the beginnings of a solution, although whether it applies to other senses remains to be seen."

In addition to Laurent, the other authors of the study, all members of Caltech's Division of Biology, are Javier Perez-Orive, Ofer Mazor, Glenn C. Turner, Stijn Cassenaer, and Rachel I. Wilson.

The paper is available online at http://www.sciencemag.org.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Caltech geophysicists find four active volcanoes in Andes with innovative satellite radar survey

Four volcanoes in the central Andes mountains of South America, all previously thought to be dormant, must now be considered active due to ground motions detected from space, geophysicists say.

In a paper appearing in the July 11 issue of the journal Nature, California Institute of Technology geophysics graduate student Matt Pritchard and his faculty adviser, Mark Simons, unveil their analysis of eight years of radar interferometry data taken on 900 volcanoes in the Andes. The data were gathered from 1992 to 2000 by the European Space Agency's two remote-sensing satellites, ERS 1 and ERS 2.

Of the four centers of activity, Hualca Hualca volcano in southern Peru is especially worth close observation because of the population density in the area and because it is just a few miles from the active Sabancaya volcano. A second volcano now shown to be active, Uturuncu in Bolivia, is bulging upward by about 1 to 2 centimeters per year, according to the satellite data, while a third, Robledo caldera in Argentina, is actually deflating for unknown reasons. A fourth region of surface deformation, on the border between Chile and Argentina, was unknown prior to the study, so the authors christened it "Lazufre" because it lies between the two volcanoes Lastarria and Cordon del Azufre.

While the study provides important new information about volcanic hazards in its own right, Pritchard, the lead author, says it also proves the mettle of a new means of studying ground deformation that should turn out to be vastly superior to field studies. The fact that none of the four volcanoes were known to be active—and thus probably wouldn't have been of interest to geophysicists conducting studies using conventional methods—shows the promise of the technique, he says.

"Achieving this synoptic perspective would have been an impractical undertaking with ground-based methods, like the GPS system," Pritchard says.

The satellite data are superior to ground-based results in that a huge amount of subtle information can be accumulated about a large number of geological features. The satellites bounce a radar signal off the ground, and then accurately measure the time it takes the signal to return. On a later pass, when the satellite is again in approximately the same spot, it sends another signal to the ground.

If the two signals are out of phase, then the distance from the satellite to the ground is either increasing or decreasing, and if the features are volcanic, then the motion can be assumed to have been caused by movement of magma in the subsurface or by hydrothermal activity.
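The arithmetic that converts a phase difference into a ground displacement is simple. A minimal sketch, assuming the standard repeat-pass relation for the ERS satellites' C-band radar (wavelength about 5.6 centimeters); the factor of four pi reflects the signal's round trip:

```python
import math

WAVELENGTH_M = 0.056   # ERS C-band radar wavelength, about 5.6 cm

def los_displacement_m(delta_phase_rad: float) -> float:
    """Line-of-sight ground displacement implied by a repeat-pass
    interferometric phase change (round-trip geometry)."""
    return delta_phase_rad * WAVELENGTH_M / (4 * math.pi)

# One full fringe (2*pi of phase) corresponds to half a wavelength of motion,
# so centimeter-scale swelling is readily visible from orbit.
print(f"{los_displacement_m(2 * math.pi) * 100:.1f} cm per fringe")   # ~2.8 cm
```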

"You can think of a magma chamber as a balloon beneath the surface inflating and deflating. So if the magma is building up underground, you expect a swelling upward, and this is what we can detect with the satellite data."

Given the appropriate satellite mission, all the world's subaerial volcanoes could be easily monitored for active deformation on a weekly basis. Such a capability would have a profound impact on minimizing volcanic hazards in regions lacking necessary infrastructure for regular geophysical monitoring.

Another unusual finding from the study, one that shows its promise for better understanding volcanism, is the Lascar volcano's lack of motion. Lascar has had three major eruptions since 1993, as well as several minor ones, and many volcanologists assumed there should have been some ground swelling over the years of the study, Pritchard says.

"But we find no deformation at the volcano," he explains. "Some people find it curious, others think it's not unexpected. But it's a new result, and regardless of what's going on, it could tell us interesting things about magma plumbing."

There are several possible explanations to account for the lack of vertical motion at the Lascar volcano, Pritchard says. The first and most obvious is that the satellite passes took place at times between inflations and subsequent deflations, so that no net ground motion was recorded. It could also be that magma is somehow able to get from within Earth to the atmosphere without deforming surfaces at all; or that a magma chamber might be deep enough to allow an eruption without surface deformations being visible, even though deformation is occurring at depth.

The study is also noteworthy in that Simons and Pritchard were able to do their work without leaving their offices on the Caltech campus. The data analysis was done with software developed at Caltech and the Jet Propulsion Laboratory, and the authors say this software was critical to the study's success.

Simons, an assistant professor of geophysics at Caltech, and Pritchard are scheduled to attend a geophysics conference in Chile in October, and will try to see some or all of the four volcanoes at that time.

Contact: Robert Tindol (626) 395-3631

Writer: RT

Researchers make progress in understanding the basics of high-temperature superconductivity

High-temperature superconductors have long been the darlings of materials science because they can transfer electrical current with no resistance or heat loss. Already demonstrated in technologies such as magnetic sensors, magnetic resonance imaging (MRI), and microwave filters in cellular-phone base stations, superconductors could become one of the greatest technological triumphs of the modern world if they could just be made to operate reliably at higher temperatures. But getting there will probably require a much better understanding of the basic principles of superconductivity at the microscopic level.

Now, physicists at the California Institute of Technology have made progress in understanding at a microscopic level how and why high-temperature superconductivity can occur. In a new study appearing in the June 3 issue of Physical Review Letters, Caltech physics professor Nai-Chang Yeh and her colleagues report on the results of an atomic-scale microprobe revealing that the only common features among many families of high-temperature superconductors are paired electrons moving in tandem in a background of alternately aligned quantum magnets. The paper eliminates many other possibilities that have been suggested for explaining the phenomenon.

Yeh and her collaborators from Caltech, the Jet Propulsion Laboratory, and Pohang University of Science and Technology in Korea report on their findings on "strongly correlated s-wave superconductivity" in the simplest form of ceramic superconductors, which are based on copper oxides, or cuprates. The paper differentiates the behavior of the two basic types of high-temperature superconductors that have been studied since the mid-1980s—the "electron doped" type that contains added electrons in its lattice-work, and the "hole-doped" type that has open slots for electrons.

The cuprate materials were discovered to be superconductors in the 1980s, thereby instantaneously raising the temperature at which superconductivity could be demonstrated in the lab. This allowed researchers to produce devices that could be cooled to superconductivity with commonly available liquid nitrogen, which is used in a huge variety of industrial processes throughout the world. Before the high-temperature superconducting materials were discovered, experts could achieve superconductivity only by cooling the materials with liquid helium, which is much more expensive and difficult to make.
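The temperatures involved make the point concrete: liquid helium boils at about 4.2 kelvin, liquid nitrogen at about 77 kelvin, and cuprates such as YBCO superconduct below roughly 92 kelvin. A small illustrative check (the transition temperatures are approximate):

```python
# Approximate values in kelvin, for illustration only.
COOLANT_BOILING_POINT = {"liquid helium": 4.2, "liquid nitrogen": 77.4}
TRANSITION_TEMPERATURE = {
    "niobium (conventional)": 9.3,
    "YBCO (cuprate)": 92.0,
}

# A coolant works only if it boils below the material's transition point.
for material, tc in TRANSITION_TEMPERATURE.items():
    usable = [c for c, bp in COOLANT_BOILING_POINT.items() if bp < tc]
    print(f"{material}: T_c ~ {tc} K, usable coolants: {usable}")
```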

The arrival of high-temperature superconductivity heralded speculation on novel applications and machines, including virtually frictionless high-speed magnetically levitated trains, as well as power transmission at a fraction of the current cost. Indeed, the progress of the 1980s led to demonstrations of technologies such as magnetic sensors, microwave filters, and small-scale electronic circuits that could potentially increase the speed of computers by many thousands of times.

A certain amount of progress has been made since the high-temperature superconductors were discovered, and researchers remain optimistic that even the current generation may be adequate for such futuristic devices as extremely high-speed computers, provided that other technological hurdles can be overcome. But a primary roadblock to rapid progress has been and continues to be a limited understanding of precisely how high-temperature superconductivity works at the microscopic level.

A better fundamental understanding would allow researchers to better determine which materials to use in applications, which manufacturing procedures to employ, and possibly how to design new cuprates with higher superconducting transition temperatures. This is important because researchers would have a better idea of the molecular architecture most essential to the desired properties.

In this sense, Yeh and her colleagues' new paper is a step toward a more fundamental understanding of the phenomenon. "The bottom line is that we can eliminate a lot of things people thought were essential for high-temperature superconductivity," she says. "I feel that we have narrowed down the possibilities for the mechanism."

More specifically, the type of cuprate investigated by the Caltech team has the simplest form among all cuprate superconductors, with a structure consisting of periodic stacks of one copper oxide layer followed by one layer of metal atoms. This structure differs from all other cuprates in that the multiple layers of complex components between consecutive copper oxide layers are absent; the copper oxide layers themselves are known to be the building blocks of high-temperature superconductivity.

This unique structure appears to have a profound effect on the superconducting properties of the cuprate, resulting in a more three-dimensional "s-wave pairing symmetry" for the tandem motion of electrons in the simplest cuprate, in contrast to the more two-dimensional "d-wave pairing symmetry" in most other cuprates. This finding eliminates the commonly accepted notion that d-wave pairing may be essential to the occurrence of high-temperature superconductivity.

Another new finding is the absence of the "pseudogap phenomenon," the existence of which would imply that electrons or holes could begin to form pairs at relatively high temperatures, although these pairs could not move in tandem until the temperature fell below the superconducting transition temperature. The pseudogap phenomenon is quite common in many cuprates, and physicists have long speculated that its existence may be of fundamental importance. Its absence in the simplest form of cuprates can now effectively rule out theories for high-temperature superconductivity based on the pseudogap phenomenon.

Contact: Robert Tindol (626) 395-3631

Writer: RT

Geophysicists Find Sharp Sides to the African Superplume

Scientists at the California Institute of Technology have discovered that the African superplume, a massive, hot upwelling of rock beneath southern Africa, has edges that are sharp and distinct, not diffuse and blurred as previously thought. Such sharp, lateral boundaries have never been found in the Earth's mantle before, and they challenge scientists' understanding of the interior.

In a paper to be published in the June 7 issue of the journal Science, a team of geophysicists at Caltech's Seismological Laboratory used a fortuitous set of seismic waves from distant earthquakes to show that the boundary of the African superplume appears to be sharp, with a width of about 30 miles. The boundary is not vertical but tilted, like a rising plume of smoke bent by the wind, which suggests that the plume is unstable. Using dynamic computer modeling, the scientists provide further evidence of what they and other geologists suspected: that the superplume has a dense chemical core that differs from the scalding hot rock of the surrounding mantle.

The team of scientists from Caltech includes Sidao Ni, the paper's lead author and a staff scientist in the seismology lab; graduate student Eh Tan; Michael Gurnis, professor of geophysics; and Don Helmberger, the Smits Family Professor of Geophysics and Planetary Science and director of the Caltech seismology lab.

About 20 years ago, scientists developed a way to make three-dimensional "snapshots" of the earth's interior using the seismic waves, or vibrations, that travel through the earth following an earthquake. By measuring the time it takes for these waves to travel from an earthquake's epicenter to a recording station, they can infer the temperatures and densities in a given segment of the mantle, the middle layer of the earth. In the mid-1980s, they noticed a huge area under Africa where seismic waves passed through slowly, implying that the solid rock was at a substantially higher temperature.
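A toy version of that inference, with invented numbers: a wave crossing a slightly slower (hotter) region arrives measurably late, and the size of the delay constrains the velocity anomaly.

```python
# Illustrative travel-time delay from a low-velocity (hot) region.
path_km = 1500.0     # hypothetical ray-path length through the anomaly
v_normal = 7.0       # nominal shear-wave speed in the deep mantle, km/s
slowdown = 0.03      # assume a 3% velocity reduction in the hot rock

v_slow = v_normal * (1.0 - slowdown)
delay_s = path_km / v_slow - path_km / v_normal
print(f"arrival delay: {delay_s:.1f} s")   # several seconds -- easily measured
```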

Some 750 miles across and more than 900 miles tall, the region was initially thought to be a giant anomaly with broad, diffuse edges, hotter than the mantle's surrounding rock. The so-called African superplume was slowly rising, much like the thermal convection that occurs in a pot of boiling water. As seismic instrumentation improved, other evidence suggested that the structure might be more than thermal, possibly having a different chemical composition from the surrounding mantle rock.

If there were heavy and dense material associated with this anomalous mantle, the scientists reasoned, then it would either lie underneath or within the vast majority of the hot, rising African superplume. "So we said if that's the case, there should actually be a sharp boundary between the two materials, instead of a diffuse boundary," says Gurnis. The researchers went looking.

By pure chance, other unrelated work had placed a series of seismic detectors in southern Africa. This allowed the Caltech team to study and interpret the fine-scale structure of earthquake seismic waves recorded by the arrays. The energy from the earthquakes emanated from South America and passed through the African superplume.

It turned out, says Gurnis, that a clear pattern of waves developed that grazed the east edge of the plume, creating a peculiar pattern indicative of an incredibly sharp boundary, one that probably extends nearly 900 miles above the core. The findings startled the researchers. "No one expected this," says Gurnis. "Everybody thought there'd be these very broad, diffuse structures. Instead what we've found is a structure that is much bigger, much sharper, and extends further off the core mantle boundary."

They also found that the structure, instead of having the dome-like appearance predicted by their computer models, tilts toward the northeast. Gurnis speculates that's probably due to its dynamic state. "It's a completely different observation from what we expected to see," he says.

At this point, the team can only speculate on the causes. "One of the ideas, and it's not perfect, is that the rock composition of the plume is more iron rich, and thus denser," Gurnis suggests. "It will be interesting to see what observations other scientists can make. The idea of sharp, near-vertical edges was not on people's agendas before now, so this may change people's perspectives on the interior.

"I don't particularly like this idea," Gurnis admits; "it's strange. I guess that's why we find it so interesting."

The interdisciplinary team of researchers was funded by the National Science Foundation's Cooperative Studies of Earth's Deep Interior program.

 

Writer: MW

Researchers make plasma jets in the lab that closely resemble astrophysical jets

Astrophysical jets are one of the truly exotic sights in the universe. They are usually associated with accretion disks, which are disks of matter spiraling into a central massive object such as a star or a black hole. The jets are very narrow and shoot out along the disk axis for huge distances at incredibly high speeds.

Jets and accretion disks have been observed to accompany widely varying types of astrophysical objects, ranging from proto-star systems to binary stars to galactic nuclei. While the mechanism for jet formation is the subject of much debate, many of the proposed theoretical models predict that jets form as the result of magnetic forces.

Now, a team of applied physicists at the California Institute of Technology has brought this seemingly remote phenomenon into the lab. By using technology originally developed for creating a magnetic fusion configuration called a spheromak, they have produced plasmas that incorporate the essential physics of astrophysical jets. (Plasmas are ionized gases and are excellent electrical conductors; everyday examples of plasmas are lightning, the northern lights, and the glowing gas in neon signs.)

Reporting in an upcoming issue of the Monthly Notices of the Royal Astronomical Society, Caltech professor of applied physics Paul Bellan and postdoctoral scholar Scott Hsu describe how their work helps explain the magnetic dynamics of these jets. By placing two concentric copper electrodes and a coaxial coil in a large vacuum vessel and driving huge electric currents through hydrogen plasma, these scientists have succeeded in producing jet-like structures that not only resemble those in astronomical images, but also develop remarkable helical instabilities that could help explain the "wiggled" structure observed in some astrophysical jets.

"Photographs clearly show that the jet-like structures in the experiment form spontaneously," says Bellan, who studies laboratory plasma physics but chanced upon the astrophysical application when he was looking at how plasmas with large internal currents can self-organize. "We originally built this experiment to study spheromak formation, but it also dawned on us that the combination of electrode structure, applied magnetic field, and applied voltage is similar to theoretical descriptions of accretion disks, and so might produce jet-like plasmas."

The theory Bellan refers to states that jets can be formed when magnetic fields are twisted up by the rotation of accretion disks. Magnetic field lines in plasma are like elastic bands frozen into jello. The electric currents flowing in the plasma (jello) can change the shape of the magnetic field lines (elastic bands) and thus change the shape of the plasma as well. Magnetic forces associated with these currents squeeze both the plasma and its embedded magnetic field into a narrow jet that shoots out along the axis of the disk.

By applying a voltage differential across the gap between the two concentric electrodes, Bellan and Hsu effectively simulate an accretion disk spinning in the presence of a magnetic field. The coil produces magnetic field lines linking the two concentric electrodes in a manner similar to the magnetic field linking the central object and the accretion disk.

In the experiment an electric current of about 100 kiloamperes is driven through the tenuous plasma, resulting in two-foot-long jet-like structures traveling at approximately 90 thousand miles per hour. More intense currents cause a jet to become unstable so that it deforms into a theoretically predicted helical shape known as a kink. Even greater currents cause the kinked jets to break off and form a spheromak. The jets last about 5 to 10 millionths of a second, and are photographed with a special high-speed camera.

"These things are very scalable, which is why we're arguing that the work applies to astrophysics," Bellan explains. "If you made the experiment the size of Pasadena, for example, the jets might last one second; or if it were the size of the earth, they would last about 10 minutes. But obviously, that's impractical."

The importance of the study, Bellan and Hsu say, is that it provides compelling evidence in support of the idea that astrophysical jets are formed by magnetic forces associated with rotating accretion disks, and it also provides quantitative information on the stability properties of these jets.

The work was supported by a grant from the U.S. Department of Energy.

Contact: Robert Tindol (626) 395-3631

Writer: RT

Astronomers discover the strongest known magnet in the universe

Astrophysicists at the California Institute of Technology, using the Palomar 200-inch telescope, have uncovered evidence that a special type of pulsar has the strongest magnetic field in the universe.

Reporting in the May 30 issue of the journal Nature, Caltech graduate student Brian Kern and his advisor Chris Martin report on the nature of pulses emanating from a faint object in the constellation Cassiopeia. Using a specially designed camera and the Palomar 200-inch telescope, the team discovered that a quarter of the visible light from the pulsar known as 4U0142+61 is pulsed, while only 3 percent of the X rays emanating from the object are pulsed, meaning that the pulsar must be an object known as a magnetar.

"We were amazed to see how strongly the object pulsed in optical light compared with X rays," said Martin, who is a professor of physics at Caltech. "The light had to be coming from a strong, rotating magnetic field rather than a disk of infalling gas."

To explain the precise chain of reasoning that led the team to their conclusion, a certain amount of explanation of the nature of stars and pulsars is in order. Normal stars are powered by nuclear fusion in their hot cores. When a massive star exhausts its nuclear fuel, its core collapses, causing a titanic "supernova" explosion.

The collapsing core forms a "neutron star," which is as dense as an atomic nucleus and the size of Los Angeles. The very weak magnetism of the original star is greatly amplified (a billion- to a trillion-fold) during the collapse. The slow rotation of the original star also speeds up enormously, just as an ice skater spins much faster when her arms are drawn in.

The combination of a strong magnetic field and rapid spin often produces a "pulsar," an object that sweeps its beam of light around like a lighthouse, though usually in the radio band of the electromagnetic spectrum. Pulsars have been discovered that rotate almost one thousand times every second. In conventional pulsars, which have been studied since their discovery in the 1960s, the source of the energy that produces this pulsing light is the rotation itself.

In the last decade, a new type of pulsar has been discovered that is very different from the conventional radio pulsar. This type of object, dubbed an "anomalous X-ray pulsar," has a very lazy rotation (one turn every 6 to 12 seconds) and pulses in X-ray frequencies but is invisible in radio waves. However, the X-ray power is hundreds of times the power provided by the slow rotation. The source of energy is unknown, and therefore "anomalous." One of the brightest of these pulsars is 4U0142+61, named for its sky coordinates and its detection by the Uhuru X-ray mission in the 1970s.

Two sources of energy for the X rays are possible. In the first model, bits of gas blown off in the supernova explosion fall back onto the resulting neutron star, whose magnetic field is no stronger than an ordinary pulsar's. As the gas slowly falls (accretes) onto the surface, it becomes hot and emits X rays.

A second model, proposed by Robert Duncan (University of Texas) and Christopher Thompson (Canadian Institute for Theoretical Astrophysics), holds that anomalous X-ray pulsars are magnetars, or neutron stars with ultra-strong magnetic fields. The magnetic field is so strong that it can power the neutron star by itself, generating X rays and optical light. Magnetic fields power solar flares in our own sun, but with only a tiny fraction of the power of nuclear fusion. Magnetars would be the only objects in the universe powered mainly by magnetism.

"Scientists would be thrilled to investigate these enormous magnetic fields, if they exist," says Kern. "Identifying 4U0142+61 as a magnetar is the essential first step in these studies."

The missing observational clue to distinguish between these very different power sources was provided by a novel camera designed to look at optical light coming from very faint pulsars. While most of the light appears in X-ray frequencies, anomalous X-ray pulsars emit a small amount of optical light. In pulsars powered by disks of gas, optical pulsations would be a diluted byproduct of X-ray pulsations, which are weak in this pulsar. A magnetar, on the other hand, would be expected to pulse at least as strongly in optical light as in X rays.

The problem is that the optical light from the object is extremely faint, about the brightness of a candle sitting on the moon. Astronomical cameras designed to look at very faint stars and galaxies must take very long exposures, as long as many hours, in order to detect the faint light, even with a 200-inch telescope. But in order to detect pulsations that repeat every eight seconds, the rotation period of 4U0142+61, exposure times must be very short, less than a second.

Martin and Kern invented a camera to solve this problem. The camera takes 10 separate pictures of the sky during a single rotation of the pulsar, each picture for less than one second. The camera then shuffles the pictures back to their starting point, and re-exposes the same 10 pictures for the next pulsar rotation. This exposure cycle is repeated hundreds of times before the camera data is recorded. The final image shows the pulsar at 10 different points in its repetitive cycle. During the cycle, part of the image is bright while part is dim. The large optical pulsations seen in 4U0142+61 show that it must be a magnetar.
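In software, the camera's trick is known as phase folding: photon arrival times are binned by where they fall within the known pulse period, so a weak periodic signal piles up in a few bins while noise spreads evenly across all of them. A minimal sketch with simulated data (the 8.7-second period is approximately that of 4U0142+61; everything else is invented):

```python
import numpy as np

def phase_fold(arrival_times_s, period_s, n_bins=10):
    """Bin photon arrival times by pulse phase -- the software analog of
    the camera's shuffle-and-re-expose cycle."""
    phases = (np.asarray(arrival_times_s) % period_s) / period_s
    counts, _ = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    return counts

rng = np.random.default_rng(1)
period = 8.7                                     # seconds, roughly 4U0142+61
background = rng.uniform(0, 3600, size=100_000)  # unpulsed photons
pulsed = np.arange(2000) * period + rng.normal(0, 0.4, size=2000)

# The pulsed photons concentrate in the bins nearest phase zero (wrapping),
# standing out above the flat background.
print(phase_fold(np.concatenate([background, pulsed]), period))
```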

How strong is the magnetic field of this magnetar? It is as much as a quadrillion times the strength of the earth's magnetic field, and ten billion times as strong as the strongest laboratory magnet ever made. A toy bar magnet placed near the pulsar would feel a force of a trillion pounds pulling its ends into alignment with the pulsar's magnetic poles.

A magnetar would be an unsafe place for humans to go. Because the pulsar acts as a colossal electromagnetic generator, a person in a spacecraft floating above the pulsar as it rotated would feel 100 trillion volts between his head and feet.

The magnetism is so strong that it has bizarre effects even on a perfect vacuum, polarizing the light traveling through it. Kern and Martin hope to measure this polarization with their camera in the near future in order to measure directly the effects of this ultra-strong magnetism, and to study the behavior of matter in extreme conditions that will never be reproduced in the laboratory.

Additional information available at http://www.astro.caltech.edu/palomar/

Writer: Robert Tindol

Astrophysicists announce surprising discovery of extremely rare molecule in interstellar space

A rare type of ammonia that includes three atoms of deuterium has been found in a molecular cloud about 1,000 light-years from Earth. The comparative ease of detecting the molecules means there are more of them than previously thought.

In a study appearing in the May 20 issue of the Astrophysical Journal Letters, an international team of astronomers reports on the contents of a molecular cloud in the direction of the constellation Perseus. The observations were done with the Caltech Submillimeter Observatory atop Mauna Kea in Hawaii.

The molecule in question is called "triply deuterated ammonia," meaning that each molecule is composed of a nitrogen atom and three deuterium atoms (heavy hydrogen), rather than the usual single nitrogen atom and three hydrogen atoms found in the typical bottle of household ammonia. While not unknown on Earth, the molecules, until recently, were thought by experts to be quite rare—so rare, in fact, that the substance was considered too sparse to even be detectable from Earth.

But now that scientists have detected triply deuterated ammonia in the interstellar medium, they're still wondering why they were able to do so at all, says Tom Phillips, a physics professor at the California Institute of Technology, director of the Caltech Submillimeter Observatory, and leader of the Caltech team. No other molecules containing three deuterium atoms have ever been detected in interstellar space.

"From simple statistics alone, the chances for all three hydrogen atoms in an ammonia molecule to be replaced by the very rare deuterium atoms are one in a million billion," Phillips explains. "This is like buying a $1 state lottery ticket two weeks in a row and winning a $30 million jackpot both weeks. Astronomical odds indeed!"

As for the reasons the molecules would exist in the first place, says Dariusz Lis, a senior research associate in physics at Caltech and lead author of the paper, the frigid conditions of the dense interstellar medium allow the deuterium replacement of the hydrogen atoms to take place. At higher temperatures, there would be a back-and-forth exchange of the deuterium atoms between the ammonia molecules and the hydrogen molecules also present in the interstellar medium. But at the frosty 10-to-20 degrees above absolute zero that prevails in the clouds, the deuterium atoms prefer to settle into the ammonia molecules and stay there.

The study is important because it furthers the understanding of the chemistry of the cold, dense interstellar medium and the way molecules transfer from grains of dust to the gas phase, Phillips explains. The researchers think the triply deuterated ammonia was probably kicked off the dust grains by the energy of a young star forming nearby, thus returning to the gas state, where it could be detected by the Caltech Submillimeter Observatory.

The study was made possible because of the special capabilities of the Caltech Submillimeter Observatory, a 10.4-meter telescope constructed and operated by Caltech with funding from the National Science Foundation. The telescope is fitted with the world's most sensitive submillimeter detectors, making it ideal for seeking out the diffuse gases and molecules crucial to understanding star formation.

In addition to the Caltech observers, the team also included international members from France led by Evelyne Roueff and Maryvonne Gerin from the Observatoire de Paris, funded by the French CNRS, and astronomers from the Max-Planck-Institut fuer Radioastronomie in Germany.

The main Web site for the Caltech Submillimeter Observatory is at http://www.submm.caltech.edu/cso.

 

 

Writer: Robert Tindol

Cosmic Background Imager uncovers fine details of early universe

Cosmologists from the California Institute of Technology using a special instrument high in the Chilean Andes have uncovered the finest detail seen so far in the cosmic microwave background radiation (CMB), which originates from the era just 300,000 years after the Big Bang. The new images, in essence, are photographs of the cosmos before stars and galaxies existed, and they reveal, for the first time, the seeds from which clusters of galaxies grew.

The observations were made with the Cosmic Background Imager (CBI), which was designed especially to make fine-detailed high-precision pictures in order to measure the geometry of space-time and other fundamental cosmological quantities.

The cosmic microwave background originated about 300,000 years after the Big Bang, and it provides a crucial experimental laboratory for cosmologists to understand the origin and eventual fate of the universe because at that remote epoch matter had not yet formed galaxies and stars. Tiny density fluctuations at that time grew under the influence of gravity to produce all the structures we see in the universe today, from clusters of galaxies down to galaxies, stars, and planets. These density fluctuations give rise to the temperature fluctuations seen in the microwave background.

First predicted soon after World War II and first detected in 1965, the CMB arose when matter got cool enough for the electrons and protons to combine to form atoms, at which point the universe became transparent. Before this time the universe was an opaque fog because light couldn't travel very far before hitting an electron.

The CBI results released today provide independent confirmation that the universe is "flat." Also, the data yield a good measurement of the amount of the mysterious non-baryonic "dark matter" —which differs from the stuff everyday objects are made of—in the universe. The results also confirm that "dark energy" plays an important role in the evolution of the universe.

According to Anthony Readhead, the Rawn Professor of Astronomy at Caltech and principal investigator on the CBI project, "These unique high-resolution observations give a powerful confirmation of the standard cosmological model. Moreover this is the first direct detection of the seeds of clusters of galaxies in the early universe."

The flat universe and the existence of "dark energy" lend additional empirical credence to the theory of "inflation," a popular account of troubling details about the Big Bang and its aftermath, which holds that the universe grew from a tiny subatomic region during a period of violent expansion a split second after the Big Bang.

Because it sees finer details in the CMB sky, the CBI goes beyond the recent successes of the BOOMERANG and MAXIMA balloon-borne experiments and the DASI experiment at the South Pole.

The previous findings relied on a simple model, which the higher-resolution CBI observations have now verified. If the interpretation were incorrect, it would require nature to be doubly mischievous, giving the same wrong answers from observations on both large and small angular scales.

"We have been fortunate at CITA to work closely with Caltech as members of both the CBI and BOOMERANG teams to help analyze the cosmological implications of these exquisite high precision experiments," says Richard Bond, director of the Canadian Institute for Theoretical Astrophysics, " It is hard to imagine a more satisfying marriage of theory and experiment."

Given the radical nature of the results coming from cosmological observations, it is crucial that all aspects of cosmological theory be thoroughly tested. The fact that the various observations are made at very different resolutions, with widely differing techniques, at different frequencies, and over different parts of the sky, yet agree so well, gives great confidence in the findings.

The CBI hardware was designed primarily by Steven Padin, chief scientist on the project, while the software was designed and implemented by senior research associate Timothy Pearson and staff scientist Martin Shepherd. Postdoctoral scholar Brian Mason and three graduate students, John Cartwright, Jonathan Sievers, and Patricia Udomprasert, all played critical roles in the project.

The photons we see today with instruments like the CBI, the earlier COBE satellite, and the BOOMERANG, MAXIMA, and DASI experiments have been traveling through the universe since they were first emitted from matter about 14 billion years ago.

The temperature differences observed in the CMB are so slight, only about one part in 100,000, that it has taken 37 years to get images with details as fine as those presented today. Though first detected with a ground-based antenna in 1965, the cosmic microwave background appeared quite smooth to earlier experimentalists because of the limitations of the instruments available to them. It was the COBE satellite in the early 1990s that first demonstrated slight variations in the cosmic microwave background. The celebrated COBE images were of the entire sky, but the details were many times larger than any known structures in the present universe.
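For scale, "one part in 100,000" of the background's well-measured mean temperature of about 2.73 kelvin works out to a few tens of microkelvin:

```python
# Back-of-the-envelope size of the CMB temperature fluctuations.
T_CMB_K = 2.73      # mean temperature of the microwave background
FRACTION = 1e-5     # "one part in 100,000"

delta_t_k = T_CMB_K * FRACTION
print(f"typical fluctuation: {delta_t_k * 1e6:.0f} microkelvin")   # ~27 uK
```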

The CBI and the DASI instrument of the University of Chicago, which is operating at the South Pole, are sister projects that share much commonality of design, both making interferometry measurements of extremely high precision.

The BOOMERANG experiment, led by Caltech's Goldberger Professor of Physics Andrew Lange, demonstrated the flatness of the universe two years ago. The BOOMERANG observations, together with observations from the MAXIMA and DASI experiments, not only indicated the geometry of the universe, but also bolstered the inflation theory via accurate measurements of many of the fundamental cosmological parameters. The combination of these previous results with those announced today covers a range of angular scales from about one-tenth of a moon diameter to about one hundred moon diameters, and this gives great confidence in the combined results.

The CBI is a microwave telescope array comprising 13 separate antennas, each about three feet in diameter, set up in concert so that the entire machine acts as an interferometer. The detector is located at Llano de Chajnantor, a high plateau in Chile at 16,700 feet, making it by far the most sophisticated scientific instrument ever used at such high altitudes. The telescope is so high, in fact, that members of the scientific team must each carry bottled oxygen to do the work.

In five separate papers submitted today to the Astrophysical Journal, Readhead and his colleagues at Caltech, together with collaborators from the Canadian Institute for Theoretical Astrophysics, the National Radio Astronomy Observatory, the University of Chicago, the Universidad de Chile, the University of Alberta, the University of California at Berkeley, and the Marshall Space Flight Center, report on observations of the cosmic microwave background they have obtained since the CBI began operation in January 2000. The images obtained cover three patches of sky, each about 70 times the size of the moon, but showing fine details down to only one percent the size of the moon.

The next step for Readhead and his CBI team is to look for polarization in the photons of the cosmic microwave background. This will be a two-pronged attack involving both the CBI and DASI instruments and teams in complementary observations, which will enable them to tie down the values of the fundamental cosmological parameters with significantly higher precision. Funds for the upgrade of the CBI to polarization capability have been generously provided by the Kavli Institute.

The CBI is supported by the National Science Foundation, the California Institute of Technology, and the Canadian Institute for Advanced Research, and has also received generous support from Maxine and Ronald Linde, Cecil and Sally Drinkward, Stanley and Barbara Rawn, Jr., and the Kavli Institute.

Contact: Robert Tindol (626) 395-3631

Writer: RT
