Caltech biologists invent new, better method for making transgenic animals

Using specially prepared HIV-derived viruses stripped of their disease-causing potential, California Institute of Technology biologist David Baltimore and his team have invented a new method of introducing foreign DNA into animals that could have wide-ranging applications in biotechnology and experimental biology.

Reporting today on the Science Express Web site, the Baltimore team describes single-cell mouse embryos that were virally infected in a manner that leaves a new gene from a jellyfish permanently deposited in their genomes. Of the mice carried to term, 80 percent carry at least one copy of the gene, and 90 percent of those express high levels of the jellyfish protein. Further, the study shows that the offspring of these mice inherit the gene and make the new protein. The method thus yields transgenic mice with new genetic potential.

According to Baltimore, who is president of Caltech, the use of the HIV-like viruses could prove far superior to the current method of producing transgenic animals by pronuclear injection.

"It's surprising how well it works," says Baltimore, whose Nobel Prize-winning research on the genetic mechanisms of viruses 30 years ago is central to the new technique. "This technique is much easier and more efficient than the procedure now commonly in use, and the results suggest that it can be used to generate other transgenic animal species."

The technique uses HIV-like viruses known as lentiviruses, which can infect both dividing and nondividing cells, as gene-delivery vehicles. Unlike HIV itself, the modified lentivirus is rendered incapable of causing AIDS. The virus deposits new genes into the cell's existing genome; in this case, newly fertilized mouse eggs received the gene for green fluorescent protein (GFP), derived from jellyfish.

Baltimore and his team developed two ways of introducing the lentivirus into cells: microinjection of virus under the layer that protects recently fertilized eggs, or incubation of denuded fertilized eggs in a concentrated solution of the virus. The latter method is easier, although less efficient.

Once born, the transgenic mice carry a protein marker in all body tissues that makes them glow green under fluorescent light. The trait is genetic: the new gene is a permanent feature of the animal's genome, and so is carried throughout life and can be inherited by offspring. The term "transgene" refers to the fact that the new gene has been transferred.

Transgenics holds promise for biotechnology and experimental biology because the techniques can be used to "engineer" new, desirable traits in plants and animals, provided the trait can be identified and localized in another organism's genome. A transgenic cow, for example, might be engineered to produce milk containing therapeutic human proteins, or a transgenic chicken might produce eggs low in cholesterol.

In experimental biology, transgenic animals are valuable laboratory subjects for fundamental research. A cat with an altered visual system, for example, might facilitate fundamental studies of the nature of vision.

According to Baltimore, the procedure works on rats as well as mice. This is a huge advantage to experimentalists because of the number of laboratory applications in which rats are preferable, he says.

Writer: 
RT

Astronomers detect atmosphere of planet outside solar system

Astronomers using NASA's Hubble Space Telescope have made the first direct detection of the atmosphere of a planet orbiting a star outside our solar system and have obtained the first information about its chemical composition. Their unique observations demonstrate that it is possible with Hubble and other telescopes to measure the chemical makeup of extrasolar planet atmospheres and to potentially search for chemical markers of life beyond Earth.

The planet orbits a yellow, sunlike star called HD 209458, a seventh-magnitude star (visible through an amateur telescope), which lies 150 light-years away in the autumn constellation Pegasus. Its atmospheric composition was probed when the planet passed in front of its parent star, allowing astronomers for the first time ever to see light from the star filtered through the planet's atmosphere.

Lead investigator David Charbonneau of the California Institute of Technology and the Harvard-Smithsonian Center for Astrophysics, Timothy Brown of the National Center for Atmospheric Research, and colleagues used Hubble's spectrometer (the Space Telescope Imaging Spectrograph) to detect the presence of sodium in the planet's atmosphere.

"This opens up an exciting new phase of extrasolar planet exploration, where we can begin to compare and contrast the atmospheres of planets around other stars," says Charbonneau. The astronomers actually saw less sodium than predicted for the Jupiter-class planet, leading to one interpretation that high-altitude clouds in the alien atmosphere may have blocked some of the light. The findings will be published in the Astrophysical Journal.

The Hubble observation was not tuned to look for gases expected in a life-sustaining atmosphere (which is improbable for a planet as hot as the one observed). Nevertheless, this unique observing technique opens a new phase in the exploration of extrasolar planets, say astronomers.

Such observations could potentially provide the first direct evidence for life beyond Earth by measuring unusual abundances of atmospheric gases caused by the presence of living organisms. The planet orbiting HD 209458 was discovered in 1999 through its slight gravitational tug on the star. Based on that observation, the planet is estimated to have 70 percent of the mass of the giant planet Jupiter (or about 220 times the mass of Earth).

Subsequently, astronomers discovered the planet passes in front of the star, causing the star to dim very slightly for the transit's duration. This means the planet's orbit happens to be tilted edge-on to our line-of-sight from Earth. It is the only example of a transit among all the extrasolar planets discovered to date.

The planet is an ideal target for repeat observations because it transits the star every 3.5 days, the short time it takes to whirl around the star at a distance of merely 4 million miles from its searing surface. This proximity heats the planet's atmosphere to a torrid 2,000 degrees Fahrenheit (1,100 degrees Celsius).
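Those two numbers hang together through Kepler's third law. The short Python sketch below is not from the paper; it simply checks the arithmetic, assuming HD 209458 has roughly one solar mass, as its description as a sunlike star suggests:

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_star = 1.989e30       # assumed stellar mass: about one solar mass, in kg
a = 4e6 * 1609.34       # the quoted 4 million miles, converted to meters

# Kepler's third law for a circular orbit: P = 2*pi*sqrt(a^3 / (G*M))
period_seconds = 2 * math.pi * math.sqrt(a**3 / (G * M_star))
print(f"orbital period: {period_seconds / 86400:.1f} days")
```

With the rounded 4-million-mile figure this comes out near 3.3 days; using the unrounded separation (about 0.045 astronomical units) brings it up to the observed 3.5-day period.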

Previous transit observations by Hubble and ground-based telescopes confirmed that the planet is primarily gaseous, rather than liquid or solid, because it has a density less than that of water. (Earth, a rocky rather than a gaseous planet, has an average density five times that of water.) These earlier observations thus established that the planet is a gas giant, like Jupiter and Saturn.

The planet's swift orbit allowed for observations of four separate transits to be made by Hubble in search of direct evidence of an atmosphere. During each transit a small fraction of the star's light passed through the planet's atmosphere on its way to Earth. When the color of the light was analyzed by a spectrograph, the telltale "fingerprint" of sodium was detected. Though the star also has sodium in its outer layers, the STIS precisely measured the added influence of sodium in the planet's atmosphere.

The team—including Robert Noyes of the Harvard-Smithsonian Center for Astrophysics and Ronald Gilliland of the Space Telescope Science Institute in Baltimore, Maryland—next plans to look at HD 209458 again with Hubble, in other colors of the star's spectrum to see which are filtered by the planet's atmosphere. They hope eventually to detect methane, water vapor, potassium, and other chemicals in the planet's atmosphere. Once other transiting giants are found in the next few years, the team expects to characterize chemical differences among the atmospheres of these planets.

These anticipated findings would ultimately help astronomers better understand a bizarre class of extrasolar planets discovered in recent years that are dubbed "hot Jupiters." They are the size of Jupiter but orbit closer to their stars than the tiny innermost planet Mercury in our solar system. While Mercury is a scorched rock, these planets have enough gravity to hold onto their atmospheres, though some are hot enough to melt copper.

Conventional theory is that these giant planets could not have been born so close to their stars. Gravitational interactions with other planetary bodies, or gravitational forces in a circumstellar disk, must have sent the giants spiraling inward from birthplaces farther out, where they bulked up on gas and dust as they formed, to orbits precariously close to their stars.

Proposed moderate-sized U.S. and European space telescopes could allow for the detection of many much smaller Earth-like planets by transit techniques within the next decade. Detection will be more challenging, since a planet orbiting at an Earth-like distance must have a much tighter orbital alignment to produce a transit. And the transits would be much less frequent for planets with an orbital period of a year, rather than days. Eventually, study of the atmospheres of these Earth-like planets will require meticulous measurements by future, larger space telescopes.

The Space Telescope Science Institute (STScI) is operated by the Association of Universities for Research in Astronomy (AURA), for NASA, under contract with the Goddard Space Flight Center, Greenbelt, Maryland. The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency (ESA). The National Center for Atmospheric Research's primary sponsor is the National Science Foundation.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Sound alters the activity of visual areas in the human brain, Caltech research reveals

Scientists at the California Institute of Technology have discovered that hearing can significantly change visual perception, and that the influence of hearing on visual perception occurs at an early perceptual level rather than at a higher cognitive level.

Ladan Shams, a Caltech postdoctoral researcher, and Shinsuke Shimojo, a professor of computation and neural systems at Caltech, report that visual signals are influenced significantly by sounds at early cortical levels that have long been believed to be "vision specific."

The team's initial behavioral finding was that when an observer is shown one flash of light accompanied by two beeps, the visual system is tricked so that the observer sees two flashes instead of one. In the new study, 13 healthy volunteers were asked to observe the stimuli on a computer screen and judge the number of flashes they saw on the screen.

While the participants performed the task, their brains' electrical potentials were recorded from three electrodes positioned at the back of the scalp, where the early visual areas are located.

The researchers found that when the participants perceived the illusion—in other words, when sound changed the visual perception—the activity in the visual areas was modified. Furthermore, the change in activity was similar to that induced by an additional physical flash.

This suggests that the illusory second flash, which is caused by sound rather than by any visual stimulus, evokes activity in the visual areas very similar to that which would be caused by a physical second flash. In short, sound induces an effect in this area of the brain similar to that of a visual stimulus.

The goal of this study was to get an understanding of how this alteration of vision by sound occurs in the brain. More specifically, the researchers asked whether the change in visual perception is caused by a change in the higher-level areas of the brain that are known to combine information from multiple senses, or whether it is a change that directly affects the activity of the areas that are believed to be exclusively involved in processing visual information.

The main result of the study was that the early visual cortical responses were modulated by the accompanying sounds under conditions in which the observers experienced the double-flash illusion. This suggests that the activity of the "visual" areas in the brain is affected by sound.

These findings challenge two traditional perspectives on how the brain processes sensory information. The first assumption is that humans are visual animals; vision is the dominant modality and hence not malleable by information from other modalities. Another general belief is that the information from different modalities is processed in the brain in parallel and separate paths.

The findings show that the visual information is affected by the auditory signals while being processed in the "modality-specific" visual pathway. These findings, together with earlier results in other modalities, suggest a paradigm of sensory processing that is more intertwined than segregated.

"The findings have an important implication for the new studies of human perception," says Shimojo. The overwhelming majority of studies in the field of perception have concerned themselves with one modality alone (based on the assumption of modality segregation).

This study, together with other studies indicating vigorous early plasticity and interactions across sensory modalities, is also very encouraging for applications such as sensory aids for children suffering from blindness or dyslexia, for educational applications, for man-made interfaces, and for media and information technology.

A report on this study will appear in the December issue of the journal NeuroReport. Ladan Shams is lead author of the paper.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

First gamma burst detected by new NASA satellite is pinpointed at Palomar Observatory

Astrophysicists have combined the power of the 200-inch Hale Telescope on Palomar Mountain with the capabilities of a new NASA satellite to detect and characterize a gamma-ray burst lying at a distance of only 5 billion light-years from Earth. This is the closest gamma-ray burst ever studied by optical telescopes.

The origin of cosmic gamma-ray bursts, spectacular flashes of high-energy radiation followed by slowly decaying optical and radio emission that can be seen from great distances, is still a puzzle to astronomers. Many scientists believe that the bursts result from explosions that signal the birth of black holes; however, all agree that more data are needed before the bursts' origin and nature are really understood.

NASA's new High-Energy Transient Explorer (HETE) detected a gamma-ray burst on September 21. Data indicated that the event was located in the Lacerta constellation, and refined information from the Interplanetary Network (IPN), a series of satellites with gamma-ray detectors scattered about the solar system, reduced the region astronomers needed to search to find the fading embers of the explosion. Scientists at the California Institute of Technology's Palomar Observatory, using the historic Hale 200-inch reflector, were able to locate the visual afterglow the following day. This was the first burst from the HETE satellite to be pinpointed with an accuracy sufficient to study the remains.

On October 17 the Caltech team used the Hale Telescope to obtain a redshift for the burst. This allowed a distance to be inferred, implying that the burst happened some 5 billion years ago. This makes the burst one of the closest ever found, and thus easier to study in detail. Also on October 17 the team members, led by Dale Frail from the National Radio Astronomy Observatory, detected a twinkling radio counterpart of the burst using the Very Large Array in New Mexico.

According to Shri Kulkarni, who is the MacArthur Professor of Astronomy and Planetary Science at Caltech, the team was able to find the rare optical afterglow because of the quick detection and localization abilities of the HETE satellite and the rapid follow-up with the Palomar Mountain Hale Telescope.

HETE, the first satellite dedicated to the study of gamma-ray bursts, is on an extended mission until 2004. Launched on October 9, 2000, HETE was built by MIT as a mission of opportunity under the NASA Explorer Program. The HETE program is a collaboration between MIT; NASA; Los Alamos National Laboratory, New Mexico; France's Centre National d'Etudes Spatiales, Centre d'Etude Spatiale des Rayonnements, and Ecole Nationale Superieure de l'Aeronautique et de l'Espace; and Japan's Institute of Physical and Chemical Research. The science team includes members from the University of California (Berkeley and Santa Cruz) and the University of Chicago, as well as from Brazil, India, and Italy.

"I'm very excited. I could not sleep for two nights after making the discovery," said Paul Price, the Caltech graduate student who first identified the optical afterglow from Palomar.

"With this first confirmed observation of a gamma-ray burst and its afterglow, we've really turned the corner," said George Ricker of the Massachusetts Institute of Technology, principal investigator for HETE. "As HETE locates more of these bursts and reports them quickly, we will begin to understand what causes them.

"The unique power of HETE is that it not only detects a large sample of these bursts, but it also relays the accurate location of each burst in real time to ground-based optical and radio observatories," Ricker said.

Because the enigmatic bursts disappear so quickly, scientists can best study the events by way of their afterglow. HETE detects these bursts as gamma rays or high-energy X rays, and then instantly relays the coordinates to a network of ground-based and orbiting telescopes for follow-up searches for such afterglows.

Additional observations of this event, made with the Italian BeppoSAX satellite and the Ulysses space probe, were coordinated by HETE team member Kevin Hurley at the University of California. The combination of the localization by the Interplanetary Network with the original HETE localization provided the refined information needed by ground-based observers to point their optical telescopes.

The opportunity to see the afterglow in optical light provides crucial information about what is triggering these mysterious bursts, which scientists speculate to be the explosion of massive stars, the merging of neutron stars and black holes, or possibly both. Follow-up observations of GRB 010921 using the Hubble Space Telescope and telescopes on the ground should move us a few steps closer to solving this cosmic puzzle.

The team that identified the counterpart to GRB 010921 includes, in addition to Caltech professors Shri Kulkarni, Fiona Harrison, and S. George Djorgovski, postdoctoral fellows and scholars Re'em Sari, Titus Galama, Daniel Reichart, Derek Fox, and Ashish Mahabal; graduate students Joshua Bloom, Paul Price, Edo Berger, and Sara Yost; Dale Frail of the National Radio Astronomy Observatory; and many other collaborators.

More information on HETE can be found at http://space.mit.edu/HETE
Palomar Observatory: http://www.astro.caltech.edu/palomar/
Caltech Media Relations: http://pr.caltech.edu/media/

Writer: 
Robert Tindol

International team uses powerful cosmic lens to find galactic building block in early universe

Exploiting a phenomenon known as gravitational lensing, an international team of astrophysicists has detected a very small, faint stellar system in the process of its formation during the first half billion years or so of the universe's existence.

The discovery is being reported in the October 20 issue of the Astrophysical Journal. According to lead author Richard Ellis, a professor of astronomy at the California Institute of Technology, the faint object is an excellent candidate for one of the long-sought "building blocks" thought to have been abundant at early times and to have later assembled into present-day galaxies.

The discovery was made possible by examining small areas of sky viewed through a massive intervening cluster of galaxies, Abell 2218, 2 billion light-years away. The cluster acts as a powerful gravitational lens, magnifying distant objects and allowing the scientists to probe how distant galaxies assembled at very early times.

Gravitational lensing, a dramatic feature of Einstein's theory of general relativity, occurs because mass curves space: a massive object in the foreground bends the light rays radiating from an object in the background. As a result, an object behind a massive foreground galaxy cluster like Abell 2218 can look much brighter, because the foreground object has bent additional photons toward Earth, in much the same way that glass lenses in binoculars bend more photons toward the eyes.

In the case of the system detected by Ellis and coworkers, the effect makes the image at least 30 times brighter than would be the case if the Abell 2218 cluster were not in the foreground. Without this boost, neither the Keck 10-meter Telescopes nor the Hubble Space Telescope would have detected the object.

Ellis explains, "Without the benefit of the powerful cosmic lens, the intriguing source would not even have been detected in the Hubble Deep Fields, historic deep exposures taken in 1995 and 1998."
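In the magnitude units astronomers use, that boost is substantial. A minimal back-of-the-envelope check, not taken from the paper, converts the quoted factor of 30 into magnitudes:

```python
import math

magnification = 30.0    # "at least 30 times brighter," as quoted above
delta_mag = 2.5 * math.log10(magnification)
print(f"apparent brightness gain: about {delta_mag:.1f} magnitudes")
```

A gain of roughly 3.7 magnitudes is what lifts a source too faint even for the Hubble Deep Fields into the reach of Keck and Hubble.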

Using the 10-meter Keck Telescopes at Mauna Kea, the collaboration found a faint signal corresponding to a pair of feeble images later recognized in a deep Hubble Space Telescope picture.

Spectroscopic studies made possible with the superior light-gathering power of the Keck confirmed that the images arise via the magnification of a single source diagnosed to be extremely distant and in the process of formation.

"The system contains about a million or so stars at a distance of 13.4 billion light-years, assuming that the universe is 14 billion years old," says Ellis. "While more distant galaxies and quasars have been detected with the Keck Telescopes, by virtue of the magnification afforded by the foreground cosmic lens we are witnessing a source much smaller than a normal galaxy forming its first generation of stars."

"Our work is a little like studying early American history," says team member Mike Santos, a Caltech graduate student in astronomy. "But instead of focusing on prominent individuals like George Washington, we want to know how everyday men and women lived.

"To really understand what was going on in the early universe, we need to learn about the typical, commonplace building blocks, which hold important clues to the later assembly of normal galaxies. Our study represents a beginning to that understanding."

The precise location of the pair of images in relation to the lensing cluster allowed the researchers to confirm the magnification. This work was the contribution of team member Jean-Paul Kneib of the Observatoire Midi-Pyrénées near Toulouse, France, an expert in the rapidly developing field of gravitational lensing.

The team concludes that the star system is remarkably young (by cosmic standards) and thus may represent the birth of a subcomponent of a galaxy or "building block." Such systems are expected to have been abundant in the early universe and to have later assembled to form mature large galaxies like our own Milky Way.

Santos explains, "The narrow distribution of intensity observed with the Keck demonstrates we are seeing hydrogen gas heated by newly formed stars. But, crucially, there is not yet convincing evidence for a well-established mixture of stars of different ages. This suggests we are seeing the source at a time close to its formation."

In their article, the researchers infer that the stars had been forming at a rate of one solar mass per year for not much longer than a million years. Such a structure could represent the birth of a globular cluster, stellar systems recognized today to be the oldest components of the Milky Way galaxy. The work represents part of an ongoing survey to determine the abundance of such distant star-forming sources as well as to fix the period in cosmic history when the bulk of these important objects formed.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Caltech: Combating Future "Bandwidth Bottleneck"

PASADENA, Calif.— The next exciting generation of multimedia on the Internet will be computer images built from three-dimensional geometry. Soon, instead of viewing a simple picture on their monitors, users will be able to examine an object from any viewpoint, under any lighting, and in full surface detail. The result will be images that seem to come to life.

It's exciting technology, unless, that is, your personal computer suffers from "bandwidth bottleneck," the frustrating experience that results in the herky-jerky motion of today's so-called streaming video, or audio that breaks up and is hard to hear. Now a solution is in hand for this upcoming problem, thanks to a California Institute of Technology professor who has been named a finalist in Discover Magazine's 2001 Innovation Awards.

A team of computer scientists led by Caltech's Peter Schröder, a professor of computer science and applied and computational mathematics, and Wim Sweldens of Lucent Technologies was recognized for developing the most powerful technique to date for compressing computer graphics, which will make it practical to send detailed 3-D data over the Internet. The Innovation Awards are presented annually by Discover, the science publication, to honor scientists whose groundbreaking work will change the way we live.

Bandwidth bottleneck, a phrase used by Schröder, is caused by too much data being sent through a slow Internet connection to a slow computer. Schröder's expertise is in the mathematical foundations of computer graphics. He and his colleagues developed a compression algorithm—a step-by-step computational procedure—that will make it practical to send 3-D images over the Internet and to manipulate them on personal computers and handhelds.

Without this algorithm, the coming 3-D images would just further bog down the average PC with gobs of data. That's because such 3-D models describe the actual geometry of an object, such as its depth and height, in detail, with all its measurements. The object could be a human head, a part for an automobile in an online catalog, or a cartoon character. Such digital geometric data is typically acquired by 3-D laser scanning and represents objects using dense meshes with up to millions or even billions of triangles.

The compression challenge is to use the fewest possible bits (the basic unit of information in a digital computing system) to store and transmit these huge and complex sets of data. Efficient geometry compression—delivering the same or higher quality with fewer bits—could unlock the potential of high-end 3-D on consumer systems. The researchers, led by Schröder and Sweldens, report that their technique for geometry compression is up to 12 times better than standard compression methods.
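To make the bits-per-vertex framing concrete, here is a deliberately naive sketch of one ingredient of geometry compression: quantizing vertex coordinates onto a fixed grid inside the mesh's bounding box. It is not the team's published method, only an illustration of how precision can be traded for bits, and the vertex array here is hypothetical.

```python
import numpy as np

def quantize_vertices(vertices, bits=12):
    """Map float coordinates to integers on a (2**bits - 1) grid per axis."""
    vmin = vertices.min(axis=0)
    vmax = vertices.max(axis=0)
    scale = (2**bits - 1) / (vmax - vmin)
    quantized = np.round((vertices - vmin) * scale).astype(np.uint16)
    return quantized, vmin, scale

def dequantize_vertices(quantized, vmin, scale):
    """Recover approximate float coordinates from the integer grid."""
    return quantized / scale + vmin

# Hypothetical stand-in for a scanned model: 100,000 random vertices.
rng = np.random.default_rng(0)
verts = rng.uniform(-1.0, 1.0, size=(100_000, 3)).astype(np.float32)

q, vmin, scale = quantize_vertices(verts, bits=12)
restored = dequantize_vertices(q, vmin, scale)

raw_bits = verts.size * 32            # 32-bit floats: 96 bits per vertex
quant_bits = verts.shape[0] * 3 * 12  # 12 bits per coordinate: 36 bits per vertex
print(f"raw: {raw_bits / 8 / 1e6:.2f} MB, quantized: {quant_bits / 8 / 1e6:.2f} MB")
print(f"worst-case coordinate error: {np.abs(restored - verts).max():.6f}")
```

Even this crude step cuts storage by roughly two-thirds at the cost of a tiny coordinate error; the team's published technique gains far more, up to the twelvefold improvement over standard methods cited above, by much more sophisticated means.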

Why this will be exciting for the everyday PC user is the power of 3-D images. "Imagine being able to download a 3-D model of Michelangelo's David to your home computer," says Schröder. "Not only would you see an individual picture, but you could examine in detail the chisel marks on David's cheek, or see what the statue looks like if you stood on a tall ladder."

Today, he says, such riches are reserved for high-end computer users with very high bandwidth Internet connections. "Sometime soon, though," says Schröder, "it can be available to any schoolchild the world over."

Writer: 
MW

New research shows that brain is involved in visual afterimages

If you stare at a bright red disk for a time and then glance away, you'll soon see a green disk of the same size appear and then disappear. The perceived disk is known as an afterimage, and it has long been thought to be an effect of the "bleaching" of photochemical pigments or the adaptation of neurons in the retina, merely a part of the ocular machinery that makes vision possible.

But a novel experimental procedure devised by psychophysicists shows that the brain and its adaptive changes are also involved in the formation of afterimages.

Reporting in the August 31 issue of the journal Science, a joint team from the California Institute of Technology and NTT Communication Science Laboratories, led by Caltech professor Shinsuke Shimojo, demonstrates that adaptation to a specific visual pattern that induces the perception of "color filling-in" later leads to a negative afterimage of the filled-in surface. The research further demonstrates that this global type of afterimage requires adaptation not at the retinal but at the cortical level of visual neural representation.

The Shimojo team employed a specific type of image (see image A below) in which a red semitransparent square is perceived on top of four white disks. Only the wedge-shaped parts of the disks are colored, and there is no local stimulus or indication of redness in the central portion of the display, yet the color filling-in mechanism operates to give an impression of a filled-in red surface.

If an observer stares only at the red square for at least 30 seconds, he or she will see a reverse-color green square for a few seconds after refixating on a blank screen (as in the image at the top of C).

However, an observer who fixates on the image at left (in A) and then refixates on a blank screen will usually see four black disks such as the ones at the bottom of C, followed by a global afterimage in which a green square appears to be solid.

The fact that no light from the center of the original square was red during adaptation demonstrates that the effect was not merely caused by a leaking-over or fuzziness of neural adaptation, because the four white disks are at first clearly distinct as black afterimages. Thus, the global afterimage is distinct from a conventional afterimage.

One possibility is that local afterimages of the disks and wedges—but only these—are induced first, and then color filling-in occurs to give an impression of the global square, just as in the case of the red filling-in during adaptation. The researchers considered this element-adaptation hypothesis, but eventually ruled it out.

The other hypothesis is that, since neural circuits employing cortical neurons are known to cause the filling-in of the center of the red square, perhaps it is this cortical circuit that undergoes adaptation to directly create the global negative green afterimage. This surface-adaptation hypothesis was the one eventually supported by the results.

The researchers designed experiments that provided three lines of evidence rejecting the first hypothesis and supporting the second. First, the local and the global afterimages were visible with different timing, and tended to be exclusive of each other. This argued against the first hypothesis, in which the local afterimages are necessary for seeing the global afterimage.

Second, when the strength of color filling-in during adaptation was manipulated by changing the timing of the presentation of disks and colored wedges, the strength of the global afterimage was positively correlated with it, as predicted by the surface-adaptation hypothesis but not by the element-adaptation hypothesis.

For the last piece of evidence, the researchers prepared a dynamic adapting stimulus designed specifically to minimize the local afterimages, yet to maximize the impression of color filling-in during adaptation. If the element-adaptation hypothesis is correct, then test subjects would not observe the global afterimage. If, on the other hand, the surface-adaptation hypothesis is correct, the observers would see a vivid global afterimage only. The result turned out to be the latter.

The study has no immediate applications, but furthers the understanding of perception and the human brain, says Shimojo, a professor of computation and neural systems at Caltech and lead author of the study.

"This has profound implications with regard to how brain activity is responsible for our conscious perception," he says.

According to Shimojo, the brain is the ultimate organ by which humans adapt to the environment, so it makes sense that the brain, as well as the retina, can modify its activity—and perception as a result—through experience and adaptation.

The other authors of the paper are Yukiyasu Kamitani, a Caltech graduate student in computation and neural systems, and Shin'ya Nishida of the NTT Communication Science Laboratories in Atsugi, Kanagawa, Japan.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Astronomers detect evidence of time when universe emerged from "Dark Ages"

Astronomers at the California Institute of Technology announced today the discovery of the long-sought "Cosmic Renaissance," the epoch when young galaxies and quasars in the early universe first broke out of the "Dark Ages" that followed the Big Bang.

"It is very exciting," said Caltech astronomy professor S. George Djorgovski, who led the team that made the discovery. "This was one of the key stages in the history of the universe."

According to a generally accepted picture of modern cosmology, the universe started with the Big Bang some 14 billion years ago, and was quickly filled with glowing plasma composed mainly of hydrogen and helium.

As the universe expanded and cooled over the next 300,000 years, the atomic nuclei and electrons combined to make atoms of neutral gas. The glow of this "recombination era" is now observed as the cosmic microwave background radiation, studies of which have led to recent pathbreaking insights into the geometry of the universe.

The universe then entered the Dark Ages, which lasted about half a billion years, until they were ended by the formation of the first galaxies and quasars. The light from these new objects turned the opaque gas filling the universe into a transparent state again, by splitting the atoms of hydrogen into free electrons and protons. This Cosmic Renaissance is also referred to by cosmologists as the "reionization era," and it signals the birth of the first galaxies in the early universe.

"It is as if the universe was filled by a dark, opaque fog up to that time," explains Sandra Castro, a postdoctoral scholar at Caltech and a member of the team. "Then the fires—the first galaxies—lit up and burned through the fog. They made both the light and the clarity."

The researchers saw the tell-tale signature of the cosmic reionization in the spectra of a very distant quasar, SDSS 1044-0125, discovered last year by the Sloan Digital Sky Survey (SDSS). Quasars are very luminous objects in the distant universe, believed to be powered by massive black holes.

The spectra of the quasar were obtained at the W. M. Keck Observatory's Keck II 10-meter telescope atop Mauna Kea, Hawaii. The spectra show extended dark regions, caused by opaque gas along the line of sight between Earth and the quasar. This effect was predicted in 1965 by James Gunn and Bruce Peterson, both then at Caltech. Gunn, now at Princeton University, is the leader of the Sloan Digital Sky Survey; Peterson is now at Mt. Stromlo and Siding Spring observatories, in Australia.

The process of converting the dark, opaque universe into a transparent, lit-up universe was not instantaneous: it may have lasted tens or even hundreds of millions of years, as the first bright galaxies and quasars were gradually appearing on the scene, the spheres of their illumination growing until they overlapped completely.

"Our data show the trailing end of the reionization era," says Daniel Stern, a staff scientist at the Jet Propulsion Laboratory and a member of the team. "There were opaque regions in the universe back then, interspersed with bubbles of light and transparent gas."

"This is exactly what modern theoretical models predict," Stern added. "But the very start of this process seems to be just outside the range of our data."

Indeed, the Sloan Digital Sky Survey team has recently discovered a couple of even more distant quasars, and has reported in the news media that they, too, see the signature of the reionization era in the spectra obtained at the Keck telescope.

"It is a wonderful confirmation of our result," says Djorgovski. "The SDSS deserves much credit for finding these quasars, which can now be used as probes of the distant universe—and for their independent discovery of the reionization era."

"It is a great example of a synergy of large digital sky surveys, which can discover interesting targets, and their follow-up studies with large telescopes such as the Keck," adds Ashish Mahabal, a postdoctoral scholar at Caltech and a member of the team. "This is the new way of doing observational astronomy: the quasars were found by SDSS, but the discovery of the reionization era was done with the Keck."

The Caltech team's results have been submitted for publication in the Astrophysical Journal Letters, and will appear this Tuesday on the public electronic archive, http://xxx.lanl.gov/list/astro-ph/new.

The W. M. Keck Observatory is a joint venture of Caltech, the University of California, and NASA, and is made possible by a generous gift from the W. M. Keck Foundation.

Writer: 
Robert Tindol

Survival of the Fittest . . . Or the Flattest?

Darwinian dogma states that in the marathon race of evolution, the genotype that replicates fastest wins. But now scientists at the California Institute of Technology say that when you factor in another basic process of evolution, mutation, it is often the tortoise that defeats the hare.

It turns out that mutations, the random changes that can take place in a gene, are the wild cards in the great race. The researchers found that at high mutation rates, genotypes with a slower replication rate can displace faster replicators if the former have a higher "robustness" against mutations; that is, if a mutation is, on average, less harmful to the slower replicator than to the faster one. The research, to appear in the July 19 issue of the journal Nature, was conducted by several investigators: Claus Wilke, a postdoctoral scholar; Chris Adami, who holds joint appointments at Caltech and the Jet Propulsion Laboratory; Jia Lan Wang, an undergraduate student; Charles Ofria, a former Caltech graduate student now at Michigan State University; and Richard Lenski, a professor at Michigan State.

In a takeoff on a common Darwinian phrase, they call the effect "survival of the flattest" rather than survival of the fittest. The idea is this: If a group of similar genotypes with a faster replication rate occupies a "high and narrow peak" in the landscape of evolutionary fitness, while a different group of genotypes that replicates more slowly occupies a lower and flatter, or broader, peak, then, when mutation rates are high, the broadness of the lower peak can offset the height of the higher peak. That means the slower replicator wins.

"In a way, organisms can trade replication speed for robustness against mutations, and vice versa," says Wilke. "Ultimately, the organisms with the most advantageous combination of both will win."

Discerning such evolutionary nuances, though, is no easy task. Testing an evolutionary theory requires waiting for generations and generations of an organism to pass. To make matters worse, the simplest living systems, the precursors of all living systems on Earth, were replaced by much more complicated systems over the last four billion years.

Wilke and his collaborators found a solution in the growing power of computers, constructing through software an artificial living system that behaves in remarkably lifelike ways. Such digital creatures evolve in the same way biological life forms do; they live in, and adapt to, a virtual world created for them inside a computer. Doing so offers an opportunity to test generalizations about living systems that may extend beyond the organic life that biologists usually study. Though this research did not involve actual living organisms, one of the authors, Richard Lenski, is a leading expert on the evolution of Escherichia coli bacteria. Lenski believes that digital organisms are sufficiently realistic to yield biological insights, and he continues his research on both E. coli and digital organisms.

In their digital world, the organisms are self-replicating computer programs that compete with one another for CPU (central processing unit) cycles, which are their limiting resource. Digital organisms have genomes in the form of a series of instructions, and phenotypes that are obtained by executing their genomic programs. The creatures physically inhabit a reserved space in the computer's memory—an "artificial Petri dish"—and they must copy their own genomes. Moreover, their evolution does not proceed toward a target specified in advance, but rather proceeds in an open-ended manner to produce phenotypes that are more successful in a particular environment.

Digital creatures lend themselves to evolutionary experiments because their environment can be readily manipulated to examine the importance of various selective pressures. In this study, though, the only environmental factor varied was the mutation rate. Whereas in nature mutations are random changes that take place in DNA, a digital organism's mutations are random changes to its computer program. A command may be switched, for example, or a sequence of instructions copied twice.

For this study, the scientists evolved 40 pairs of digital organisms from 40 different ancestors in identical selective environments. The only difference within each pair was that one member was subjected to a fourfold higher mutation rate. In 12 cases out of the 40, the dominant genotype that evolved at the lower mutation rate replicated at a pace that was 1.5-fold faster than its counterpart at the higher mutation rate.

Next, the scientists allowed each of these 12 disparate pairs to compete across a range of mutation rates. In each case, as the mutation rate was increased, the outcome of competition switched to favor the genotype that had the lower replication rate. The researchers believe that these slower genotypes, although they occupied a lower fitness peak and were located in flatter regions of the fitness surface, were, as a result, more robust with respect to mutations.
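The logic of that crossover can be captured in a few lines. The sketch below is a deliberately stripped-down model, not the digital-organism system used in the study; the replication rates and the assumption that every mutant of the brittle type is unviable are illustrative only.

```python
def winner(mutation_rate, f_fast=2.0, f_flat=1.5):
    """Which genotype class grows faster at a given per-offspring mutation rate?

    The fast replicator sits on a high, narrow fitness peak: assume its mutant
    offspring fall off the peak and do not survive. The slower replicator sits
    on a low, flat peak: assume its mutants keep the parental fitness.
    """
    growth_fast = f_fast * (1.0 - mutation_rate)  # only unmutated offspring count
    growth_flat = f_flat                          # mutants remain viable
    return "fast replicator" if growth_fast > growth_flat else "slow, robust replicator"

for u in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"mutation rate {u:.1f}: {winner(u)} wins")

# The outcome flips at u = 1 - f_flat/f_fast = 0.25: below that rate the
# faster genotype dominates; above it, "survival of the flattest" takes over.
```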

The digital organisms have the advantage that many generations can be studied in a brief period of time. But the researchers believe a colony of asexual bacteria, subjected to the same stresses as the digital organisms, would probably face similar consequences.

The concept of "survival of the flattest" seems to imply, the authors say, that, at least for populations subject to a high mutation rate, selection acts upon a group of mutants rather than the individual. Thus, under such circumstances, genotypes that unselfishly produce mutant genotypes of high fitness are selected for, and supported in turn, by other mutants in that group. The study therefore reveals that "selfish genes," while being the successful strategy at low mutation rates, may be outcompeted by unselfish ones when the mutation rate is high.

Up to 6 million votes lost in 2000 presidential election, Voting Technology Project reveals

Though over 100 million Americans went to the polls on election day 2000, as many as 6 million might just as well have spent the day fishing. Researchers at Caltech and MIT call these "lost votes" and think the number of uncounted votes could easily be cut by more than half in the 2004 election with just three simple reforms.

"This study shows that the voting problem is much worse than we expected," said Caltech president David Baltimore, who initiated the nonpartisan study after the November election debacle.

"It is remarkable that we in America put up with a system where as many as six out of every hundred voters are unable to get their vote counted. Twenty-first-century technology should be able to do much better than this," Baltimore said.

According to the comprehensive Caltech-MIT study, faulty and outdated voting technology together with registration problems were largely to blame for many of the 4-to-6 million votes lost during the 2000 election.

With respect to the votes that simply weren't counted, the researchers found that punch-card methods and some direct recording electronic (DRE) voting machines were especially prone to error. Lever machines, optically scanned ballots, and hand-counted paper ballots were somewhat less likely to result in spoiled or "residual" votes, and optical scanning performed better than lever machines.

As for voter registration problems, lost votes resulted primarily from inadequate registration data available at the polling places, and the widespread absence of provisional ballot methods to allow people to vote when ambiguities could not be resolved at the voting precinct.

The three most immediate ways to reduce the number of residual votes would be to:

· replace punch cards, lever machines, and some underperforming electronic machines with optical scanning systems;

· make countywide or even statewide voter registration data available at polling places;

· make provisional ballots available.

The first method, it is estimated, would save up to 1.5 million votes in a presidential election, while the second and third would combine to rescue as many as 2 million votes.

"We could bring about these reforms by spending around $3 per registered voter, at a total cost of about $400 million," says Tom Palfrey, a professor of economics and political science who headed the Caltech effort. "We think the price of these reforms is a small price to pay for insurance against a reprise of November 2000."

Approximately half the cost would go toward equipment upgrades, while the remainder would be used to implement improvements at the precinct level, in order to resolve registration problems on the spot. The $400 million would be a 40 percent increase over the money currently spent annually on election administration in the United States.
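For readers who want to check the arithmetic behind those figures, a short back-of-the-envelope script, not part of the report itself, ties together the numbers quoted above:

```python
total_cost = 400e6        # dollars, as quoted above
cost_per_voter = 3        # dollars per registered voter, as quoted above
print(f"implied registered voters: {total_cost / cost_per_voter / 1e6:.0f} million")

saved_equipment = 1.5e6       # votes saved by replacing underperforming equipment
saved_registration = 2.0e6    # votes saved by registration data and provisional ballots
for lost in (4e6, 6e6):
    share = (saved_equipment + saved_registration) / lost
    print(f"of {lost / 1e6:.0f} million lost votes, the quick fixes recover about {share:.0%}")

# $400 million described as a 40 percent increase implies current annual
# spending on election administration of roughly $1 billion.
print(f"implied current annual spending: ${total_cost / 0.40 / 1e9:.1f} billion")
```

The combined quick fixes would thus recover roughly 58 to 88 percent of the lost votes, consistent with the report's claim that the total could be cut by more than half.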

In addition to these quick fixes, the report identifies five long-run recommendations.

· First, institute a program of federal matching grants for equipment and registration system upgrades, and for polling-place improvement.

· Second, create an information clearinghouse and data-bank for election equipment and system performance, precinct-level election reporting, recounts, and election finance and administration.

· Third, develop a research grant program to field-test new equipment, develop better ballot designs, and analyze data on election system performance.

· Fourth, set more stringent and more uniform standards on performance and testing.

· Fifth, create an election administration agency, independent of the Federal Election Commission. The agency would be an expanded version of the current Office of Election Administration, and would oversee the grants program, serve as an information clearinghouse and databank, set standards for certification and recertification of equipment, and administer research grants.

The report also proposes a new modular voting architecture that could serve as a model for future voting technology. The Caltech-MIT team concludes that this modular architecture offers greater opportunity for innovation in ballot design and security.

Despite strong pressure to develop Internet voting, the team recommends a go-slow approach in that direction. The prospect of fraud and coercion, as well as hacking and service disruption, led the team to this caution, as did the fact that many Americans are still unfamiliar with the technology.

"The Voting Technology Project is part of a larger effort currently underway—involving many dedicated election officials, researchers, and policy makers—to restore confidence in our election system," commented Steve Ansolabehere, a professor of political science who headed up the MIT team. "We are hopeful that the report will become a valuable resource, and that it will help to bring about real change in the near future."

Baltimore and MIT president Charles Vest announced the study on December 15, two days after the outcome of the presidential election was finally resolved. Funded by a $250,000 grant from the Carnegie Corporation, the study was intended to "minimize the possibility of confusion about how to vote, and offer clear verification of what vote is to be recorded," and "decrease to near zero the probability of miscounting votes."

The report is publicly available on the Caltech-MIT Voting Technology Project Website:

http://vote.caltech.edu

Writer: 
Robert Tindol
