New Analysis of BOOMERANG Data Uncovers Harmonics of Early Universe

Cosmologists from the California Institute of Technology and their international collaborators have discovered the presence of acoustic "notes" in the sound waves that rippled through the early universe.

The existence of these harmonic peaks, discovered in an analysis of images from the BOOMERANG experiment, further strengthens results reported last year showing that the universe is flat. The new results also bolster the theory of "inflation," which holds that the universe grew from a tiny subatomic region during a period of violent expansion a split second after the Big Bang.

Finally, the results show promise that another Caltech-based detector, the Cosmic Background Imager (CBI), located in the mountains of Chile, will soon detect even finer detail in the cosmic microwave background. Analysis of this fine detail is thought to be the means of precisely determining how slight fluctuations billions of years ago eventually resulted in the galaxies and stars we see today.

"We were waiting for the other shoe to drop, and this is it," says Andrew Lange, U.S. team leader and a professor of physics at Caltech. Lange was one of a group of cosmologists revealing new results on the cosmic microwave background at the American Physical Society's spring meeting April 29. Other presenters included teams from the DASI and MAXIMA projects.

The new results are from a detailed analysis of high-resolution images obtained by BOOMERANG, which is an acronym for Balloon Observations of Millimetric Extragalactic Radiation and Geophysics. BOOMERANG is an extremely sensitive microwave telescope suspended from a balloon that circumnavigated the Antarctic in late 1998. The balloon carried the telescope at an altitude of almost 37 kilometers (120,000 feet) for ten and a half days.

"The key to BOOMERANG's ability to obtain these new images is the marriage of a powerful new detector technology developed at Caltech and the Jet Propulsion Lab with the superb microwave telescope and cryogenic systems developed in Italy at ENEA, IROE/CNR, and La Sapienza," Lange says.

The images were published just one year ago, and the Lange team at the time reported the most precise measurements to date of the geometry of space-time. The initial analysis revealed that the single detectable peak spanned about 1 degree on the sky, precisely the size predicted by theorists if space-time is indeed flat. A peak at larger angular scales would have indicated that the universe is "closed" like a ball, doomed eventually to collapse in on itself, while a peak at smaller scales would have indicated that the universe is "open," or shaped like a saddle, and would expand forever.
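
As a rough rule of thumb (our gloss, not a figure from the BOOMERANG papers), cosmologists convert an angular scale θ on the sky into a spherical-harmonic multipole ℓ:

\[
\ell \;\approx\; \frac{180^\circ}{\theta}, \qquad \theta \approx 1^\circ \;\Longrightarrow\; \ell \approx 200,
\]

which is close to the multipole at which the first acoustic peak is expected in a flat universe; in a closed universe the peak shifts to larger angles (smaller ℓ), and in an open universe to smaller angles (larger ℓ).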

Cosmologists believe that the universe was created approximately 12 to 15 billion years ago in an enormous explosion called the Big Bang. The intense heat that filled the embryonic universe is still detectable today as a faint glow of microwave radiation that is visible in all directions. This radiation is known as the cosmic microwave background (CMB). Whatever structures were present in the very early universe would leave their mark imprinted as a very faint pattern of variations in brightness in the CMB.

The CMB was first discovered by a ground-based radio telescope in 1965. Within a few years, Russian and American theorists had independently predicted that the size and amplitude of structures that formed in the early universe would form what mathematicians call a "harmonic series" of structure imprinted on the CMB. Just as the difference in harmonic content allows us to distinguish between a piano and a trumpet playing the same note, so the details of the harmonic content imprinted in the CMB allow us to understand the detailed nature of the universe.

Detection of the predicted features was well beyond the technology available at the time. It was not until 1991 that NASA's COBE (Cosmic Background Explorer) satellite discovered the first evidence for structures of any sort in the CMB.

The BOOMERANG images are the first to bring the CMB into sharp focus. The images reveal hundreds of complex regions that are visible as tiny variations—typically only 100 millionths of a degree (0.0001 C)—in the temperature of the CMB. The new results, released today, show the first evidence for a harmonic series of angular scales on which structure is most pronounced.

The images obtained cover about 3 percent of the sky, generating so much data that new methods had to be invented before the images could be thoroughly analyzed. The new analysis provides the most precise measurement to date of several of the parameters that cosmologists use to describe the universe.

The BOOMERANG team plans another campaign to the Antarctic in the near future, this time to map even fainter images encoded in the polarization of the CMB. Though extremely difficult, the scientific payoff of such measurements "promises to be enormous," maintains the U.S. team leader of the new effort, John Ruhl, of the University of California at Santa Barbara. "By imaging the polarization, we may be able to look right back to the inflationary epoch itself—right back to the very beginning of time."

Data from the MAXIMA project is also being presented at the American Physical Society meeting, along with data from the CBI, which is also a National Science Foundation-supported mission. The CBI investigators, led by Caltech astronomy professor Tony Readhead, reported early results in the March 1 issue of the Astrophysical Journal. These results were in agreement with the finding of the other projects.

The 36 BOOMERANG team members come from 16 universities and organizations in Canada, Italy, the United Kingdom, and the United States. Primary support for BOOMERANG comes from the Italian Space Agency, Italian Antarctic Research Programme, and the University of Rome "La Sapienza" in Italy; from the Particle Physics and Astronomy Research Council in the United Kingdom; and from the National Science Foundation and NASA in the United States.

Contact: Robert Tindol (626) 395-3631

Writer: RT

Scientists achieve breakthrough in fuel-cell technology

Embargoed for Release at 11 a.m. PST, Wednesday, April 18, 2001

PASADENA, Calif.—Gasoline averaging $3 per gallon? Oil drilling in an Alaskan wildlife reserve? A need to relax air quality standards? It seems the long-term future of fossil fuels is bleak. One promising solution scientists have been studying is fuel cells, but they've had limitations too. Now, in the April 19 issue of the science journal Nature, the California Institute of Technology's Sossina M. Haile reports on a new type of fuel cell that may resolve these problems.

Unlike the engines in our cars, where a fuel is burned and expanding gases do the work, a fuel cell converts chemical energy directly into electrical energy. Fuel cells are pollution-free and silent. The most common type now being developed for portable power—the type used in today's fuel-cell-powered prototype cars—is the polymer electrolyte fuel cell. An electrolyte, the heart of any fuel cell, is a chemical that conducts electricity. Polymer electrolytes must be humidified in order for the fuel cell to function, can operate only over a limited temperature range, and are permeable. As a consequence, polymer electrolyte fuel cell systems require many auxiliary components and are less efficient than other types of fuel cells.
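
For a hydrogen-fed cell with a proton-conducting electrolyte (a textbook sketch, not specific to any one design), the chemistry-to-electricity conversion looks like this:

\[
\begin{aligned}
\text{anode:}\quad & \mathrm{H_2 \;\rightarrow\; 2H^+ + 2e^-} \\
\text{cathode:}\quad & \tfrac{1}{2}\,\mathrm{O_2} + 2\mathrm{H^+} + 2e^- \;\rightarrow\; \mathrm{H_2O}
\end{aligned}
\]

The electrons travel through an external circuit, doing useful work, while the protons cross the electrolyte; the only exhaust is water.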

Haile, an assistant professor of materials science, has taken a completely different tack, developing an alternative type of fuel cell based not on a hydrated polymer but on a so-called "solid acid." Solid acids are chemical compounds, such as KHSO4 (potassium hydrogen sulfate), whose properties are intermediate between those of a normal acid, such as H2SO4 (sulfuric acid), and a normal salt, such as K2SO4 (potassium sulfate). Solid acids conduct electricity about as well as polymer electrolytes, they don't need to be hydrated, and they can function at temperatures up to 250 degrees Celsius. Solid acids are also typically inexpensive compounds that are easy to manufacture.

But until now such solid acids have not been examined as fuel-cell electrolytes because they dissolve in water and can lose their shape at even slightly elevated temperatures. To solve these problems, Haile and her graduate students Dane Boysen, Calum Chisholm, and Ryan Merle operated the fuel cell at a temperature above the boiling point of water and used a solid acid, CsHSO4, that is not very prone to shape changes.

The next challenge, says Haile, is to reduce the electrolyte thickness, improve the catalyst performance, and, most importantly, prevent the reactions that can occur upon prolonged exposure to hydrogen. Still, she says, solid acid fuel cells are a promising development.

"The system simplifications that come about (in comparison to polymer electrolyte fuel cells) by operating under essentially dry and mildly heated conditions are tremendous. While there is a great deal of development work that needs to be done before solid acid based fuel cells can be commercially viable, the potential payoff is enormous."

The Department of Energy, as part of its promotion of energy-efficient science research, recently awarded Haile an estimated $400,000 to continue her research in fuel cells. She also recently received the J.B. Wagner Award of the Electrochemical Society (High Temperature Materials Division). She is the recipient of the 2001 Coble Award from the American Ceramics Society, and was awarded the 1997 TMS Robert Lansing Hardy Award. Haile has received the National Science Foundation's National Young Investigator Award (1994–99), Humboldt Fellowship (1992–93), Fulbright Fellowship (1991–92), and AT&T Cooperative Research Fellowship (1986–92).

 

Writer: MW

Scientists Watch Dark Side of the Moon to Monitor Earth's Climate

Scientists have revived and modernized a nearly forgotten technique for monitoring Earth's climate by carefully observing "earthshine," the ghostly glow of the dark side of the moon.

Earthshine measurements are a useful complement to satellite observations for determining Earth's reflectance of sunlight (its albedo), an important climate parameter. Long-term observations of earthshine thus monitor variations in cloud cover and atmospheric aerosols that play a role in climate change.

Earthshine is readily visible to the naked eye, most easily during a crescent moon. Leonardo da Vinci first explained the phenomenon, in which the moon acts like a giant mirror showing the sunlight reflected from Earth. The brightness of the earthshine thus measures the reflectance of Earth.

In the current issue of the refereed journal Geophysical Research Letters, a team of scientists from the New Jersey Institute of Technology and the California Institute of Technology report on earthshine observations showing that Earth's albedo is currently 0.297, give or take 0.005.

In the early 20th century, the French astronomer André-Louis Danjon undertook the first quantitative observations of earthshine. But the method lay dormant for nearly 50 years, until Caltech team leader and professor of theoretical physics Steven E. Koonin coauthored a paper in 1991 describing its modern potential. The present data are the first that are precise and systematic enough to infer the relative health of Earth's climate.

"Earth's climate is driven by the net sunlight that it absorbs," says Philip R. Goode, leader of the New Jersey Institute of Technology team, Director of the Big Bear Solar Observatory, and a Distinguished Professor of Physics at NJIT. "We have found surprisingly large—up to 20 percent—seasonal variations in Earth's reflectance. Further, we have found a hint of a 2.5-percent decrease in Earth's albedo over the past five years."

A 2.5-percent change in reflectance may not seem like much, but if Earth reflected even 1 percent less sunlight, the extra energy absorbed would be comparable in magnitude to the forcings invoked in studies of global warming.
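
A back-of-the-envelope estimate (ours, not the researchers') shows why. The sunlight absorbed by Earth changes by roughly

\[
\Delta F \;\approx\; \frac{S_0}{4}\,\Delta a \;\approx\; \frac{1366\ \mathrm{W\,m^{-2}}}{4} \times 0.003 \;\approx\; 1\ \mathrm{W\,m^{-2}},
\]

where \(S_0\) is the solar constant and \(\Delta a \approx 0.003\) is a 1-percent change in an albedo of about 0.3. That is the same order of magnitude as the forcings usually attributed to man-made greenhouse gases.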

Koonin notes that "studies of climate change require well-calibrated, long-term measurements of large regions of the globe. Earthshine observations are ideally suited to this, because, in contrast to satellite determinations of the albedo, they are self-calibrating, easily and inexpensively performed from the ground, and instantaneously cover a significant fraction of the globe."

The new albedo measurements are based on about 200 nights of observations of the dark side of the moon at regular intervals over a two-year period, and another 70 nights during 1994-95. Using a 6-inch refractor telescope and a precision CCD camera at the Big Bear Solar Observatory, the researchers measure the intensity of the earthshine.

By simultaneously observing the bright "moonshine" from the crescent, they compensate for the effects of atmospheric scattering. The data are best collected during the week before and the week after the new moon, when less than half of the lunar disk is illuminated by the sun. When local cloud cover is also taken into account, Earth's reflectance can be determined on about one-quarter of the nights.
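
A minimal sketch of the ratio trick (hypothetical numbers and a lumped geometry factor; the real analysis models the lunar phase function and reflectance geometry in detail):

```python
# Toy illustration of the earthshine ratio method (hypothetical numbers).
# Dividing earthshine intensity by moonshine intensity cancels the common
# attenuation from Earth's atmosphere, since both beams traverse it on the
# way into the telescope; what remains tracks Earth's reflectance.

def apparent_albedo(earthshine_counts, moonshine_counts, geometry_factor):
    """Relative albedo estimate from one night's CCD measurements.

    geometry_factor lumps together the Sun-Earth-Moon geometry and lunar
    phase corrections; it is a placeholder for the detailed model.
    """
    ratio = earthshine_counts / moonshine_counts  # atmosphere cancels here
    return geometry_factor * ratio

# Earthshine is roughly one ten-thousandth as bright as the sunlit crescent:
print(apparent_albedo(earthshine_counts=12.0,
                      moonshine_counts=1.1e5,
                      geometry_factor=2.7e3))  # ~0.29
```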

The study relies on averages over long periods because the albedo changes substantially from night to night with changing weather, and even more dramatically from season to season with changing snow and ice cover. The locations of the land masses also affect the albedo as Earth rotates on its axis. For example, the observations from California easily detect a brightening of the earthshine during the night as the sun rises over Asia, because the huge continental land mass reflects more light than the Pacific Ocean.

"Thus, the averaging of lots of data is necessary for an accurate indication of a changing albedo," Goode says.

It is significant that the earthshine data suggest the albedo has decreased slightly over the past five years, a period during which the sun's magnetic activity climbed from minimum to maximum. This supports the hypothesis that the sun's magnetic field plays an indirect role in Earth's climate. If confirmed by further observations, it would explain why so many signatures of the sun's 11-year activity cycle appear in Earth's climate record even though the accompanying variations in the sun's brightness seem too weak to have an effect.

The researchers plan continuing observations from Big Bear. "These, supplemented with additional observations from a planned worldwide network, will allow even more precise, round-the-clock monitoring of the earth's reflectance," Goode says. "That precision will also make it possible to test connections between solar activity and Earth's climate."

"It's really amazing, if you think about it," Koonin says, "that you can look at this ghostly reflection on the moon and measure what Earth's climate is doing."

The study was funded by both NASA, beginning in 1998, and the Western Center for Global Environmental Change, during 1994-95. Beyond Goode, other members of the NJIT team are Jiong Qiu, Vasyl Yurchyshyn, and Jeff Hickey. Beyond Koonin, other members of the Caltech team are C. Titus Brown, Edwin Kolbe (now at the University of Basel), and Ming Chu (now at the Chinese University of Hong Kong).

Writer: Robert Tindol

Owls perform a type of multiplication in locating ground prey in the dark, study shows

Owls have long been known for their stunning ability to swoop down in total darkness and grab unsuspecting prey for a midnight snack.

In the April 13 issue of the journal Science, neuroscientists from the California Institute of Technology report that an owl locates prey in the dark by processing two auditory signal cues to "compute" the position of the prey. This computation takes place in the midbrain and involves about a thousand specialized neurons.

"An owl can catch stuff in the dark because its brain determines the location of sound sources by using differences in arrival time and intensity between its two ears," says Mark Konishi, who is Bing Professor of Behavioral Biology at Caltech and coauthor of the Science paper.

For example, if a mouse on the ground is slightly to the right of a flying owl, the owl first hears the sound the mouse makes in its right ear, and a fraction of a second later, in its left ear. This information is transmitted to the specialized neurons in the midbrain.

Simultaneously, the owl's ears also pick up slight differences in the intensity of the sound. This information is transmitted to the same neurons of the midbrain, where the two cues are multiplied to provide a precise two-dimensional location of the prey.

"What we did not know was how the neural signals for time and intensity differences were combined in single neurons in the map of auditory space in the midbrain," Konishi says. "These neurons respond to specific combination of time and intensity differences. The question our paper answers is how this combination sensitivity is established."

"The answer is that these neurons multiply the time and intensity signals," he says.

Thus, the neurons act like switches. The neurons do not respond to time or intensity alone, but to particular combinations of them.

The reason the neural signals are multiplied rather than added is that, with addition, a big input from the time pathway alone could drive the neuron to its firing level. With multiplication, a strong signal on one pathway is suppressed whenever the signal on the other pathway is weak, so both cues must be present for the neuron to fire.
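
A toy numerical illustration of that argument (our construction, not the paper's fitted model):

```python
# Contrast additive and multiplicative combination of the owl's two cues.
# Inputs are normalized firing rates from the timing pathway (t) and the
# intensity pathway (i).

def fires_additive(t, i, threshold=1.0):
    # One strong cue alone can push the sum over threshold.
    return (t + i) >= threshold

def fires_multiplicative(t, i, threshold=0.25):
    # The product is large only when BOTH cues are present.
    return (t * i) >= threshold

# Strong timing cue, almost no intensity cue:
print(fires_additive(1.2, 0.05))        # True  -> spurious response
print(fires_multiplicative(1.2, 0.05))  # False -> multiplication gates it out

# Matched, moderate cues (a "correct" combination for one location):
print(fires_additive(0.6, 0.6))         # True
print(fires_multiplicative(0.6, 0.6))   # True
```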

It's not clear how the owl perceives the location of the mouse in the third dimension, Konishi says, but it could be that the owl simply remembers how far it is to the ground or how much noise a mouse generally makes, and somehow adds this information into the computation.

The lead author of the Science paper is José Luis Peña, a senior research fellow in biology at Caltech.

Contact: Robert Tindol (626) 395-3631

Writer: RT

Seventy Percent of Americans Think Bush's Tax Plans Mainly Benefit Wealthy, Study Shows

Seven in 10 Americans think the Bush administration's proposed tax cuts would mainly benefit the wealthiest taxpayers, according to a national poll conducted by the University of Southern California and the California Institute of Technology's joint Center for the Study of Law and Politics (CSLP).

The study, which also revealed deep divisions along political and gender lines about how much of a tax cut should be enacted, shows that 29 percent of the public believe Bush's proposals would be of most benefit to the wealthiest 1 percent of Americans. An additional 41 percent believe the wealthiest 10 percent would be the beneficiaries.

Yet, in polling conducted while the Senate was considering President Bush's $1.6 trillion tax-cut proposal, 44 percent agreed with the president that the size of the cut was "just right"; 41 percent agreed with the majority of the Senate, who sliced the proposal by some $400 billion, that the President's cuts were "too big." Fifteen percent saw President Bush's tax cut as "too small."

"To see so much ambivalence in the American public about tax cuts shows how much work President Bush still has to do," noted Professor R. Michael Alvarez, a political scientist from Caltech associated with the CSLP, and one of the principal investigators of this study.

"It's clear that even though this was the centerpiece issue of his presidential campaign, President Bush has not closed the deal yet on his $1.6 trillion tax cut," Alvarez said.

Professor Edward McCaffery of the USC Law School and Caltech, director of CSLP, and another investigator on the study, noted that tax-cut fever is catching. "Everyone wants something from Uncle Sam, even if they know that others will get more."

Most respondents said they favored income tax cuts across the board (29 percent), followed by elimination of the marriage penalty (17 percent), cuts in Social Security or Medicare taxes (13 percent), elimination of the estate or death tax (11 percent), and new tax credits for retirement accounts (10 percent). Eight percent said they favored no tax cut at all or had no opinion on the matter.

But on this specific breakdown, as on other issues that CSLP is examining, McCaffery noted a pronounced gender gap: "When it comes to tax cuts, men are far more likely to have an opinion, and a positive one; women are less certain about the whole deal."

As an indication of the political fights facing President Bush, Alvarez pointed to the partisan conflict underlying public opinion about the tax cut: "62 percent of Republicans saw the Bush tax cut as 'just right,' while 60 percent of Democrats said it is 'too large.'" Importantly, survey respondents who said they were independent were divided, with 42 percent saying the Bush tax cuts were "too large" and 42 percent saying they were "just right," Alvarez added.

The national probability telephone survey was conducted between March 26 and April 6. Interviews were conducted by Interview Services of America, Inc. Fifteen hundred American adults were interviewed, with a margin of error for the sample of +/- 2.5 percent.

 

Writer: Robert Tindol

Distant Massive Explosion Reveals a Hidden Stellar Factory

A gamma-ray burst detected in February has led astronomers to a galaxy where the equivalent of 500 new suns is being formed each year.

The discovery of a new "starburst galaxy," made by researchers from the National Radio Astronomy Observatory and the California Institute of Technology, provides support for the theory that gamma-ray bursts are caused by exploding young massive stars. Details of the discovery are being presented today at the Gamma 2001 conference.

"This is a tremendously exciting discovery, since gamma-rays can penetrate dusty veils, and thus gamma-ray bursts can be used to locate the hitherto difficult-to-study dust galaxies at high redshifts," says Fiona Harrison, assistant professor of physics and astronomy at Caltech. "Gamma-ray bursts may offer us a new way to study how stars are formed in the early universe."

Radiation from this gamma-ray burst was first detected in the constellation Bootes by the Italian satellite observatory BeppoSAX on Feb. 21. Within hours, astronomers worldwide received the news of the burst and began looking for a visible light counterpart. The burst was one of the brightest recorded in the four years BeppoSAX has been standing watch.

Gamma-ray bursts were first detected by satellites monitoring the Nuclear Test Ban Treaty in the 1970s, and were thought for many years to represent relatively modest outbursts on nearby neutron stars. The events have now been shown to originate in the farthest reaches of the universe. They produce, in a matter of seconds, more energy than the sun will generate in its entire 10-billion-year lifetime, and represent the most luminous explosions in the cosmos.

After the February event, astronomers at the U.S. Naval Observatory discovered the visible light counterpart, pinpointing the location of the event. An international collaboration, led by Dale Frail of the National Radio Astronomy Observatory and Harrison and Shri Kulkarni of Caltech, conducted a variety of observations using the Hubble Space Telescope, the Very Large Array radio telescope, the Chandra X-ray Observatory, the Institut de RadioAstronomie Millimétrique (IRAM) telescope, and the James Clerk Maxwell telescope (JCMT).

Pivotal to the detection of the starburst galaxy was the latter telescope, which sits high atop Mauna Kea in Hawaii and is designed to make measurements at the shortest radio wavelengths capable of penetrating Earth's atmosphere, called the "submillimeter" portion of the spectrum. Only five and a half hours after the first sighting, a submillimeter source was found at the burst location.

Astronomers had expected to see a rapidly brightening signal with JCMT, a sign that the shock generated by the burst was moving through the dense gas surrounding the burst. Instead, much to everyone's surprise, the signal stayed constant, never varying over the course of the observations.

Furthermore, observations conducted at a slightly lower frequency by observers on the IRAM telescope in Southern Spain showed a much fainter source, strongly suggesting that the submillimeter observation was not simply detecting the afterglow of the explosion.

"The simplest explanation is that we have detected the light from the host galaxy of the burst," says Frail, explaining that it is rare to detect galaxies at submillimeter wavelengths. Only about one in every thousand that are visible with optical telescopes are observed by short-wavelength radio telescopes.

Astronomers in Arizona found the gamma-ray burst to lie roughly 8 billion light-years from Earth, a distance confirmed almost simultaneously by the Caltech group using one of the 10-meter Keck telescopes on Mauna Kea. The light we see from the galaxy thus shows it as it was when the universe was less than half its present age. At this distance, the observed submillimeter brightness implies a prodigious rate of star formation—roughly 500 solar masses of material must be turning into stars each year, meaning that one or two new stars shine forth each day. The galaxy in which the burst occurred, then, may provide a glimpse of what the Milky Way looked like in its youth.
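
The "one or two new stars a day" figure is simple arithmetic on the inferred rate:

\[
\frac{500\ M_\odot/\mathrm{yr}}{365\ \mathrm{days/yr}} \;\approx\; 1.4\ M_\odot\ \text{per day},
\]

so if typical new stars have roughly the sun's mass, one or two form each day.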

Previous searches for starburst galaxies in the distant universe have been hampered by the imprecise positions current submillimeter telescopes provide, and by the obscuring dust and gas that largely hides such galaxies from the view of optical telescopes. Observers led by Kulkarni used the Hubble Space Telescope to observe the fading embers of the explosion, but the underlying galaxy seems ordinary as seen in visible light, since most of this light is likely absorbed by dust and converted into submillimeter radiation. Had the galaxy been observed only in the optical wavelengths, astronomers would not have guessed that so many stars were being formed in it.

The discovery of a bright gamma-ray burst in a starburst system is exciting for two reasons: it strongly supports one model for the bursts themselves—the explosive destruction of a young, massive star—and it suggests a new way to locate such galaxies. With their enormous penetrating power, the energetic gamma rays punch right through the dusty veil, pinpointing the location of vigorous star-forming activity.

By following up on the hundreds of bursts that will be detected in the next few years, astronomers will be able to collect an unbiased sample of starburst galaxies at different distances in time and space, enabling them to trace the star-formation history of the universe.

The members of the Caltech team also include: Prof. S. George Djorgovski; postdoctoral fellows Derek Fox, Titus Galama, Daniel Reichart, Re'em Sari and Fabian Walter; and graduate students Edo Berger, Joshua Bloom, Paul Price, and Sarah Yost. Astronomers from several other institutions are also involved in the collaboration.

Writer: RT

Life rebounded quickly after collision 65 million years ago that wiped out dinosaurs

Though the dinosaurs fared poorly in the comet or asteroid impact that destroyed two-thirds of all living species 65 million years ago, new evidence shows that various other forms of life rebounded from the catastrophe in a remarkably short period of time.

In the March 9 issue of the journal Science, a team of geochemists reports that life was indeed virtually wiped out for a period of time, but then reappeared just as abruptly only 10,000 years after the initial collision. Further, the evidence shows that the extinctions 65 million years ago, which mark the geologic time known as the Cretaceous-Tertiary (K-T) boundary, were most likely caused by a single catastrophic impact.

"There's been a longstanding debate whether the mass extinctions at the K-T boundary were caused by a single impact or maybe a swarm of millions of comets," says lead author Sujoy Mukhopadhyay, a graduate student at Caltech. "In addition, figuring out the duration of the extinction event and how long it took life to recover has been a difficult problem."

To address both questions, Mukhopadhyay and his colleagues measured the amount of cosmic dust in the sediments of an ancient seabed that is now exposed on land about 100 miles north of Rome. In particular, they focused on a two-centimeter-thick clay deposit that previously had been dated to about 65 million years ago. The base of this clay deposit corresponds to the date of the extinction event.

The clay deposit lies above a layer of limestone sediments, which are essentially the skeletons of microscopic sea life that settled at the bottom of the ancient sea. The limestone deposit also contains a certain percentage of clay particles, which result from erosion on the continents. Finally, mixed in the sediments is extraterrestrial dust that landed in Earth's oceans and then settled out. This dust carries a high concentration of helium-3 (3He), a rare isotope of helium that is depleted on Earth but highly enriched in cosmic matter.

The lower limestone layer abruptly ends at roughly 65 million years, since the organisms in the ocean were suddenly wiped out by the impact event. Thus, the layer immediately above the limestone contains nothing but the clay deposits and extraterrestrial dust that continued to settle at the bottom of the ancient sea. Immediately above the two-centimeter clay deposit is another layer of limestone deposits from microorganisms of the sea that eventually rebounded after the catastrophe.

In this study, the researchers measured the amount of 3He in the sediments to learn about the K-T extinction. They reasoned that a gigantic impact would not change the amount of 3He in the clay deposit. This is because large impacting bodies are mostly vaporized upon impact and release all their helium into the atmosphere. Because helium is a light element, it is not bound to Earth and tends to drift away into space. Therefore, even if a huge amount were brought to Earth by a large impact, the 3He would soon disappear and not show up in the sedimentary layers.

In contrast, 3He brought to Earth by extraterrestrial dust tends to stay trapped in the dust and not be lost to space, says Kenneth Farley, professor of geochemistry at Caltech and coauthor of the paper. So 3He found in the limestone and the clay deposits came from space in the form of dust.

Based on the 3He record obtained from the limestones, the researchers eliminated the possibility that a string of comets had caused the K-T extinctions. Comets are inherently dusty, so a string of them hitting Earth would have brought along a huge amount of new dust, thereby increasing the amount of 3He in the lower limestone deposit.

But the Italian sediment showed a steady concentration of 3He until the time of the impact, eliminating the possibility of a comet swarm. In fact, the researchers found no evidence for periodic comet showers, which have been suggested as the cause of mass extinction events on Earth.

Mukhopadhyay and his colleagues reason that because the "rain-rate" of the extraterrestrial dust from space did not change across the K-T boundary, the 3He concentration in the clay is proportional to the total depositional time of the clay. "It's been difficult to measure the time it took for this two-centimeter clay layer to be deposited," says Farley.
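
A minimal sketch of that logic, with hypothetical numbers chosen only for illustration (the paper's actual fluxes and concentrations differ):

```python
# The 3He "clock": if the rain rate of extraterrestrial dust is constant,
# the 3He accumulated in a layer is flux * time, so
#     time = (3He inventory per unit area) / (3He flux).

def deposition_time_years(conc_atoms_per_g, density_g_per_cm3,
                          thickness_cm, flux_atoms_per_cm2_yr):
    """Years needed to deposit a sediment layer of the given thickness."""
    inventory = conc_atoms_per_g * density_g_per_cm3 * thickness_cm
    return inventory / flux_atoms_per_cm2_yr

# Hypothetical values chosen so a 2-cm clay layer gives ~10,000 years:
print(deposition_time_years(conc_atoms_per_g=1.0e6,
                            density_g_per_cm3=2.0,
                            thickness_cm=2.0,
                            flux_atoms_per_cm2_yr=400.0))
```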

The researchers conclude that the two-centimeter clay layer was deposited in approximately 10,000 years. Then, very quickly, the tiny creatures that create limestone deposits reemerged and again began leaving their corpses on the ocean bed. The implication is that life can get started again very quickly, Farley says.

Thus the study answers two major questions about the event that led to the extinction of the dinosaurs, says Mukhopadhyay. In addition to Mukhopadhyay and Farley, the paper is also authored by Alessandro Montanari of the Geological Observatory in Apiro, Italy.

Writer: Robert Tindol

Researchers progress toward mutating a mouse for studying Parkinson's disease

Some inventors hope to build a better mousetrap, but California Institute of Technology professor of biology Henry Lester's grand goal is to build a better mouse.

Not that the everyday laboratory mouse is inappropriate for a vast variety of biological and biomedical research. But for Parkinson's disease research, it has become clear that a strain of mutant mice with "slight" alterations would be a benefit in future medical studies. And not only would the mutant mice be useful for Parkinson's, but also for studies of anxiety and nicotine addiction.

Though Lester and his colleagues Johannes Schwarz and Cesar Labarca have not yet produced the mouse they envision, they have already achieved encouraging results by altering the molecules that form the receptors for nicotine in the mouse's brain. If they can just make these receptors overly sensitive in the right amount, they reason, the mice will develop Parkinson's disease after a few months of life.

Two earlier strains of mice were not ideal, but nonetheless convinced the Lester team members they were on the right track. One strain suffered nerve-cell degeneration too quickly, developing ion channels that opened even before birth. These overly sensitive receptors essentially short-circuited some nerve cells. These mice usually do not survive birth, and never live long enough to reproduce.

Another strain developed modest nerve-cell degeneration in about a year, which is a long time in a mouse's life as well as a long time for a research project to wait for its test subjects. Lester wants the "Goldilocks mouse," with neurons that die "not before birth—that's too fast. Not at a year—that's too slow and incomplete. With a mouse strain that degenerates in three months, we could generate and test hypotheses several times per year."

Though they haven't achieved the "Goldilocks mouse" yet, the strain of mice developing modest degeneration after a year is particularly interesting. Tests showed that they were quite anxious, but tended to be calmed down by minuscule doses of nicotine. For reasons not entirely understood, humans who smoke are less likely to develop Parkinson's disease later in life, pointing to the likelihood that a mouse with hypersensitive nicotine receptors will be a good model for studying the disease.

In fact, the Lester team originally set out to build the strain of mice in order to study nicotine addiction and certain psychiatric diseases that might involve acetylcholine, a natural brain neurotransmitter that is mimicked by nicotine. The work in the past has been funded by the California Tobacco-Related Disease Research Program, the National Institute of Mental Health, and the National Institute of Neurological Disorders and Stroke (NINDS).

Once they had some altered mice, Schwarz (a neurologist who works with many Parkinson's patients) realized that the dopamine-containing nerve cells were dying fastest. The death of these cells is also a cause of Parkinson's disease. Because present mouse models for Parkinson's research are unsatisfactory, the researchers applied for and soon received funding from the National Parkinson Foundation, Inc. (NPF). Not only did the researchers receive the funding from the NPF, but they also were named recipients of the Richard E. Heikkila Research Scholar Award, which is presented for new directions in Parkinson's research.

"The Heikkila award is gratifying recognition for our new attempts to develop research at the intersection of clinical neuroscience and molecular neuroscience here at Caltech," says Lester.

Dr. Yuan Liu, program director at NINDS, says the Lester team's research is important not only because it is the first genetic manipulation of an ion channel that might lead to a mammalian model for Parkinson's disease, but also because the research is a pioneering effort in an emerging field called "channelopathy."

"Channelopathy addresses defects in ion channel function that causes diseases," Liu says. "Dr. Lester is one of the pioneers working in this field.

"We're excited about this development," she says, "because Parkinson's is a disease that affects such a large number of people—500,000 in the US. The research on Parkinson's is one of the research highlights that the NINDS is addressing."

The first results of the Lester team's research are reported in the current issue of the journal Proceedings of the National Academy of Sciences (PNAS).

In addition to Labarca, a member of the professional staff in the Caltech Department of Biology, and Schwarz, a visiting associate, the collaborators include groups led by professors James Boulter of UCLA and Jeanne Wehner of the University of Colorado.

Writer: RT

Odor recognition is a patterned, time-dependent process, research shows

PASADENA, Calif.-When Hamlet told the courtiers they would eventually "nose out" the hidden corpse of Polonius, he was perhaps a better neurobiologist than he realized. According to research by neuroscientists at the California Institute of Technology, the brain creates and uses subtle temporal codes to identify odors.

This research shows that the signals carried by certain neuron populations change over the duration of a sniff such that one first gets a general notion of the type of odor. Then, the wiring between these neurons performs work that leads to a more subtle discrimination, and thus, a precise recognition of the smell.

In the February 2 issue of the journal Science, Caltech biology and computation and neural systems professor Gilles Laurent and his colleague, postdoctoral scholar Rainer W. Friedrich, now at the Max Planck Institute in Heidelberg, Germany, report that the neurons of the olfactory bulb respond to an odor through a complicated process that evolves over a brief period of time. These neurons, called mitral cells because they resemble miters, the pointed hats worn by bishops, are found by the thousands in the olfactory bulb of humans.

"We're interested in how ensembles of neurons encode sensory information," explains Laurent, lead author of the study. "So we're less interested in where the relevant neurons lie, as revealed by brain mapping studies, than in the patterns of firing these neurons produce and in figuring out from these patterns how recognition, or decoding, works."

The researchers chose to use zebrafish in the study because these animals have comparatively few mitral cells and because much is already known about the types of odors that are behaviorally relevant to them. The Science study likely applies to other animals, including humans, because the olfactory systems of most living creatures appear to follow the same basic principles.

After placing electrodes in the brain of individual fish, the researchers subjected them sequentially to 16 amino-acid odors. Amino acids, the components of proteins, are found in the foods these fish normally go after in their natural environments.

By analyzing the signals produced by a population of mitral cells in response to each one of these odors, the researchers found that the information they could extract about the stimulus became more precise as time went by. The finding was surprising because the signals extracted from the neurons located upstream of the mitral cells, the receptors, showed no such temporal evolution.

"It looks as if the brain actively transforms static patterns into dynamic ones and in so doing, manages to amplify the subtle differences that are hard to perceive between static patterns," Laurent says.

"Music may provide a useful analogy. Imagine that the olfactory system is a chain of choruses-a receptor chorus, feeding onto a mitral-cell chorus and so on-and that each odor causes the receptor chorus to produce a chord.

"Two similar odors evoke two very similar chords from this chorus, making discrimination difficult to a listener," Laurent says. "What the mitral-cell chorus does is to transform each chord it hears into a musical phrase, in such a way that the difference between these phrases becomes greater over time. In this way, odors that, in this analogy, sounded alike, can progressively become more unique and more easily identified."

Applied to our own experience, this result could be described as follows: When we detect a citrus smell in a garden, for example, the odor is first conveyed by the receptors and the mitral cells. The initial firing of the cells allows for little more than the generic detection of the citrus nature of the smell.

Within a few tenths of a second, however, this initial activity causes new mitral cells to be recruited, leading the pattern of activity to change rapidly and become more unique. This quickly allows us to determine whether the citrus smell is actually a lemon or an orange.

However, the mitral cells first stimulated by the citrus odor do not themselves become more specific in their individual tuning. Instead, the manner in which the firing patterns unfold through the lateral circuitry of the olfactory bulb is ultimately responsible for the fine discrimination of the odor.

"Hence, as the system evolves, it loses information about the class of odors, but becomes able to convey information about precise identity," says Laurent. This study furthers progress toward understanding the logic of the olfactory coding.

Writer: Robert Tindol

Caltech/MIT Issue Voting Technology Report to Florida Task Force

PASADENA, Calif.- The Caltech/MIT Voting Technology Project has submitted a preliminary report to the task force studying the election in Florida. The project's nationwide study of voting machines offers further evidence supporting the task force's call to replace punch-card voting in Florida. The statistical analysis also uncovered a more surprising finding: electronic voting, as currently implemented, has performed less well than was widely believed.

The report examines the effect of voting technologies on unmarked and/or spoiled ballots. Researchers from both universities are collaboratively studying five voting technologies: paper ballots with hand-marked votes, lever machines, punch cards, optical scanning devices, and direct-recording electronic devices (DREs), which are similar to automatic teller machines.

The study focuses on so-called "undervotes" and "overvotes," which are combined into a group of uncounted ballots called "residual votes." These include ballots with votes for more than one candidate, with no vote, or that are marked in a way that is uncountable.
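
In code, the quantity being compared is straightforward (a schematic of the definition, with hypothetical county totals):

```python
# Residual vote rate: the share of ballots cast that yield no countable
# vote for the office (overvotes + undervotes + uncountable markings).

def residual_vote_rate(ballots_cast, valid_votes_counted):
    """Fraction of cast ballots recording no valid vote."""
    return (ballots_cast - valid_votes_counted) / ballots_cast

# Hypothetical county totals for two technologies:
print(residual_vote_rate(100_000, 98_000))  # 0.02 -> ~2% (optical scan, say)
print(residual_vote_rate(100_000, 97_000))  # 0.03 -> ~3% (punch cards, say)
```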

Careful statistical analysis shows that there are systematic differences across these technologies: paper ballots, optical scanning devices, and lever machines have significantly lower residual voting rates than punch-card systems and DREs. Overall, the residual voting rate averages about 2 percent for the first three systems and about 3 percent for the last two.

This study is the most extensive analysis ever of the effects of voting technology on under- and overvotes. It covers the entire country for all presidential elections since 1988 and examines variations at the county level. When the study is complete, it will encompass presidential elections going back to 1980, will examine a finer breakdown of the different technologies, and will break residual votes into their two components, overvotes and undervotes. A final report will be released in June.

The Voting Technology Project was the brainchild of California Institute of Technology president David Baltimore, in collaboration with Massachusetts Institute of Technology president Charles Vest. It was announced in December 2000, and faculty from both campuses immediately began collecting data and studying the range of voting methods across the nation in the hope of avoiding a repeat of the vote-counting chaos that followed the 2000 presidential election.

The analysis is complicated by the fact that voting systems vary from county to county and across time. When a voting system is switched, say from lever machines to DREs, the number of residual votes can go up due to voter unfamiliarity with the new technology.

"We don't want to give the impression that electronic systems are necessarily inaccurate, but there is much room for improvement," said Thomas Palfrey, Caltech professor of economics and political science.

"Electronic voting technology is in its infancy and seems the most likely one to benefit significantly from new innovations and increased voter familiarity," states the 11-page report.

Caltech Contact: Jill Perry, Media Relations, (626) 395-3226, jperry@caltech.edu

MIT Contact: Kenneth Campbell, News Office, (617) 253-2700, kdc@mit.edu

Writer: JEP
