Most distant object in solar system discovered; could be part of never-before-seen Oort cloud

PASADENA, Calif.--A planetoid more than eight billion miles from Earth has been discovered by researchers led by a scientist at the California Institute of Technology. The new planetoid is more than three times the distance of Pluto, making it by far the most distant body known to orbit the sun.

The planetoid is well beyond the recently discovered Kuiper belt and is likely the first detection of the long-hypothesized Oort cloud. With a size approximately three-quarters that of Pluto, it is very likely the largest object found in the solar system since the discovery of Pluto in 1930.

At this extreme distance from the sun, very little sunlight reaches the planetoid and the temperature never rises above a frigid 400 degrees below zero Fahrenheit, making it the coldest known location in the solar system. According to Mike Brown, Caltech associate professor of planetary astronomy and leader of the research team, "the sun appears so small from that distance that you could completely block it out with the head of a pin."

As cold as it is now, the planetoid is usually even colder. It approaches the sun this closely only briefly during the 10,500 years it takes to revolve around the sun. At its most distant, it is 84 billion miles from the sun (900 times Earth's distance from the sun), and the temperature plummets to just 20 degrees above absolute zero.
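
As a rough consistency check (not part of the original release), Kepler's third law ties these numbers together: measuring distance in multiples of Earth's distance from the sun (astronomical units, AU) and time in years, the period satisfies P^2 = a^3. Taking the planetoid's current distance of roughly 90 AU as near its closest approach and 900 AU as its farthest gives

```latex
a \approx \frac{90 + 900}{2}\,\mathrm{AU} \approx 495\,\mathrm{AU},
\qquad
P \approx a^{3/2} \approx 495^{3/2} \approx 11{,}000\ \mathrm{years},
```

in reasonable agreement with the 10,500-year period quoted above.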

The discoverers--Brown and his colleagues Chad Trujillo of the Gemini Observatory and David Rabinowitz of Yale University--have proposed that the frigid planetoid be named "Sedna," after the Inuit goddess who created the sea creatures of the Arctic. Sedna is thought to live in an icy cave at the bottom of the ocean--an appropriate spot for the namesake of the coldest body known in the solar system.

The researchers found the planetoid on the night of November 14, 2003, using the 48-inch Samuel Oschin Telescope at Caltech's Palomar Observatory east of San Diego. Within days, the new planetoid was being observed on telescopes in Chile, Spain, Arizona, and Hawaii; and soon after, NASA's new Spitzer Space Telescope was trained on the distant object.

The Spitzer images indicate that the planetoid is no more than 1,700 kilometers in diameter, making it smaller than Pluto. But Brown, using a combination of all of the data, estimates that the size is likely about halfway between that of Pluto and that of Quaoar, the planetoid discovered by the same team in 2002 that was previously the largest known body beyond Pluto.

The extremely elliptical orbit of Sedna is unlike anything previously seen by astronomers, but it resembles in key ways the orbits of objects in a cloud surrounding the sun predicted 54 years ago by Dutch astronomer Jan Oort to explain the existence of certain comets. This hypothetical "Oort cloud" extends halfway to the nearest star and is the repository of small icy bodies that occasionally get pulled in toward the sun and become the comets seen from Earth.

However, Sedna is much closer than expected for the Oort cloud. The Oort cloud has been predicted to begin at a distance 10 times greater than even that of Sedna. Brown believes that this "inner Oort cloud" where Sedna resides was formed by the gravitational pull of a rogue star that came close to the sun early in the history of the solar system. He explains that "the star would have been close enough to be brighter than the full moon and it would have been visible in the daytime sky for 20,000 years." Worse, it would have dislodged comets farther out in the Oort cloud, leading to an intense comet shower that would have wiped out any life on Earth that existed at the time.

There is still more to be learned about this newest known member of the solar system. Rabinowitz says that he has indirect evidence that there may be a moon following the planetoid on its distant travels--a possibility that is best checked with the Hubble Space Telescope--and he notes that Sedna is redder than anything known in the solar system with the exception of Mars, but no one can say why. Trujillo admits, "We still don't understand what is on the surface of this body. It is nothing like what we would have predicted or what we can currently explain."

But the astronomers are not yet worried. They can continue their studies as Sedna gets closer and brighter for the next 72 years before it begins its 10,500-year trip out to the far reaches of the solar system and back again. Brown notes, "The last time Sedna was this close to the sun, Earth was just coming out of the last ice age; the next time it comes back, the world might again be a completely different place."

Writer: 
Robert Tindol

Researchers discover fundamental scaling rule that differentiates primate and carnivore brains

PASADENA, Calif.--Everybody from the Tarzan fan to the evolutionary biologist knows that our human brain is more like a chimpanzee's than a dog's. But is our brain also more like a tiny lemur's than a lion's?

In one previously unsuspected way, the answer is yes, according to neuroscientists at the California Institute of Technology. In the current issue of the Proceedings of the National Academy of Sciences (PNAS), graduate student Eliot Bush and his professor, John Allman, report their discovery of a basic difference between the brains of all primates, from lemurs to humans, and all the flesh-eating carnivores, such as lions and tigers and bears.

The difference lies in the way the percentage of frontal cortex mass increases as the species gets larger. The frontal cortex is the portion of brain just behind the forehead that has long been associated with reasoning and other "executive" functions. In carnivores, the frontal cortex becomes proportionately larger as the entire cortex of the individual species increases in size--in other words, a lion that has a cortex twice the size of another carnivore's also has a frontal cortex twice the size.

By contrast, primates like humans and apes tend to have a frontal cortex that gets disproportionately larger as the overall cortex increases in size. This phenomenon is known as "hyperscaling," according to Bush, the lead author of the journal article.

What this says about the human relationship to the tiny lemurs of Madagascar is that the two species likely share a developmental or structural quirk, along with all the other primates, that is absent in all the carnivores, Bush explains. "The fact that humans have a large frontal cortex doesn't necessarily mean that they are special; relatively large frontal lobes have developed independently in aye-ayes among the lemurs and spider monkeys among the New World monkeys."

Bush and Allman reached their conclusions by drawing on the substantial histological data in the comparative brain collection at the University of Wisconsin at Madison. The collection, accumulated over many years by neuroscientist Wally Welker, comprises painstakingly gathered data from well over 100 species.

Bush and Allman's innovation was to run the University of Wisconsin data through special software that allowed them to estimate the volumes of the various brain structures in each species. Their analysis compared 43 mammals (including 25 primates and 15 carnivores), which allowed them to make very accurate estimates of the hyperscaling (or the lack thereof) in the frontal cortex.

The results show that in primates the ratio of frontal cortex to the rest of the cortex is about three times higher in a large primate than in a small one. Carnivores don't have this kind of systematic variation.
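
To make the idea of hyperscaling concrete, here is a minimal sketch of the kind of scaling analysis described above, using made-up volumes rather than the Wisconsin data: hyperscaling shows up as a slope greater than 1 when frontal-cortex volume is regressed against the volume of the rest of the cortex on logarithmic axes.

```python
import numpy as np

# Hypothetical volumes (cm^3) for illustration only -- not the Wisconsin data.
rest_of_cortex = np.array([2.0, 8.0, 40.0, 200.0, 600.0])
frontal_cortex = np.array([0.3, 1.6, 11.0, 70.0, 250.0])

# Allometric scaling: frontal = b * rest**k, i.e. a straight line in log-log space.
k, log_b = np.polyfit(np.log(rest_of_cortex), np.log(frontal_cortex), 1)

print(f"scaling exponent k = {k:.2f}")
# k > 1 indicates hyperscaling (the frontal cortex grows disproportionately),
# while k close to 1 indicates simple proportional scaling, as reported for carnivores.
```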

The hyperscaling mechanism is genetic, and was presumably present when the primates first evolved. "Furthermore, it is probably peculiar to primates," says Allman, who is Hixon Professor of Neurobiology at Caltech.

The next step will be to look at the developmental differences between the two orders of mammals by looking at gene expression differences. Much of this data is already available through the intense efforts in recent years to acquire the complete genomes of various species. The human genome, for example, is already complete, and the chimp genome is nearly so.

"We're interested in looking for genes involved in frontal cortex development. Changes in these may help explain how primates came to be different from other mammals," Bush says.

At present, the researchers have no idea what the difference is at the molecular level, but with further study they should be able to make this determination, Allman says. "It's doable."

The article is titled "The scaling of frontal cortex in primates and carnivores." For a copy of the article, contact Jill Locantore, PNAS communications specialist, at 202-334-1310, or e-mail her at jlocantore@nas.edu.

The PNAS Web site is at http://www.pnas.org.

For more information on Bush and Allman's research, go to the Web site http://allmanlab.caltech.edu/people/bush/3d-histol/3d-brain-recon.html

Writer: 
Robert Tindol

Planetary scientists find planetoid in Kuiper Belt; could be biggest yet discovered

PASADENA, Calif.—Planetary scientists at the California Institute of Technology and Yale University on Tuesday night discovered a new planetoid in the outer fringes of the solar system.

The planetoid, currently known only as 2004 DW, could be even larger than Quaoar--the current record holder in the area known as the Kuiper Belt--and is some 4.4 billion miles from Earth.

According to the discoverers, Caltech associate professor of planetary astronomy Mike Brown and his colleagues Chad Trujillo (now at the Gemini North observatory in Hawaii) and David Rabinowitz of Yale University, the planetoid was found as part of the same search program that discovered Quaoar in late 2002. The astronomers use the 48-inch Samuel Oschin Telescope at Palomar Observatory and the recently installed QUEST CCD camera, built by a consortium including Yale and Indiana University, to systematically study different regions of the sky each night.

Unlike Quaoar, the new planetoid hasn't yet been pinpointed on old photographic plates or other images. Because its orbit is therefore not well understood yet, it cannot be given an official name.

"So far we only have a one-day orbit," said Brown, explaining that the data covers only a tiny fraction of the orbit the object follows in its more than 300-year trip around the sun. "From that we know only how far away it is and how its orbit is tilted relative to the planets."

The tilt that Brown has measured is an astonishingly large 20 degrees, larger even than that of Pluto, which has an orbital inclination of 17 degrees and is an anomaly among the otherwise planar planets.

The size of 2004 DW is not yet certain; Brown estimates a size of about 1,400 kilometers, based on a comparison of the planetoid's luminosity with that of Quaoar. Because the distance of the object can already be calculated, its luminosity should be a good indicator of its size relative to Quaoar, provided the two objects have the same albedo, or reflectivity.

Quaoar is known to have an albedo of about 10 percent, which is slightly higher than the reflectivity of our own moon. Thus, if the new object is similar, the 1,400-kilometer estimate should hold. If its albedo is lower, then it could actually be somewhat larger; or if higher, smaller.
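
A minimal sketch of the scaling argument described above, with illustrative numbers only (the release gives no photometry): for bodies at known distances, reflected brightness scales roughly with albedo times the square of the diameter and inversely with the squares of the sun-object and object-Earth distances, so the new object's diameter can be anchored to Quaoar's.

```python
import math

def relative_diameter(d_ref_km, flux_ratio, dist_ratio, albedo_ratio=1.0):
    """Scale a reference diameter to a new object.

    flux_ratio   : new object's apparent brightness / reference object's
    dist_ratio   : new object's distance / reference object's distance
                   (applied twice: sun-object and object-Earth legs)
    albedo_ratio : new object's albedo / reference object's albedo
    Assumes brightness ~ albedo * D**2 / (r**2 * delta**2).
    """
    return d_ref_km * math.sqrt(flux_ratio / albedo_ratio) * dist_ratio**2

# Illustrative values: Quaoar taken as ~1,250 km across, the two objects at
# comparable distances with equal albedo, and a hypothetical brightness ratio.
print(round(relative_diameter(1250.0, 1.25, 1.0), -1))  # roughly 1,400 km
```

If the albedo ratio in the last argument were lower than 1, the same brightness would imply a larger body, which is the sensitivity the paragraph above describes.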

According to Brown, scientists know little about the albedos of objects this large this far away, so the true size is quite uncertain. Researchers could best make size measurements with the Hubble Space Telescope or the newer Spitzer Space Telescope. The continued discovery of massive planetoids on the outer fringe of the solar system is further evidence that objects even farther and even larger are lurking out there. "It's now only a matter of time before something is going to be discovered out there that will change our entire view of the outer solar system," Brown says.

The team is working hard to uncover new information about the planetoid, which they will release as it becomes available, Brown adds. Other telescopes will also be used to better characterize the planetoid's features.

Further information is at the following Web site: http://www.gps.caltech.edu/~chad/2004dw

Writer: 
Robert Tindol

Researchers Using Hubble and Keck Telescopes Find Farthest Known Galaxy in the Universe

PASADENA, California--The farthest known object in the universe may have been discovered by a team of astrophysicists using the Keck and Hubble telescopes. The object, a galaxy behind the Abell 2218 cluster, may be so far from Earth that its light would have left when the universe was just 750 million years old.

The discovery demonstrates again that the technique known as gravitational lensing is a powerful tool for better understanding the origin of the universe. Via further applications of this remarkable technique, astrophysicists may be able to better understand the mystery of how the so-called "Dark Ages" came to an end.

According to California Institute of Technology astronomer Jean-Paul Kneib, who is the lead author reporting the discovery in a forthcoming article in the Astrophysical Journal, the galaxy is most likely the first detected close to a redshift of 7.0, meaning that it is rushing away from Earth at an extremely high speed due to the expansion of the universe. The distance is so great that the galaxy's ultraviolet light has been stretched to the point of being observed at infrared wavelengths.

The team first detected the new galaxy in a long exposure of the Abell 2218 cluster taken with the Hubble Space Telescope's Advanced Camera for Surveys. Analysis of a sequence of Hubble images indicates a redshift of at least 6.6, but additional work with the Keck Observatory's 10-meter telescopes suggests that the astronomers have found an object whose redshift is close to 7.0.

Redshift is a measure of the factor by which the wavelength of light is stretched by the expansion of the universe. The greater the shift, the more distant the object and the earlier it is being seen in cosmic history.
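
As a concrete illustration (not stated in the release), the stretch factor is 1 + z, the ratio of observed to emitted wavelength. At a redshift near 7, hydrogen's ultraviolet Lyman-alpha line at 121.6 nanometers lands in the near-infrared:

```latex
1 + z = \frac{\lambda_{\mathrm{obs}}}{\lambda_{\mathrm{emit}}}
\quad\Longrightarrow\quad
\lambda_{\mathrm{obs}} \approx (1 + 7) \times 121.6\,\mathrm{nm} \approx 973\,\mathrm{nm},
```

which is why the galaxy's ultraviolet light is detected at infrared wavelengths, as noted above.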

"As we were searching for distant galaxies magnified by Abell 2218, we detected a pair of strikingly similar images whose arrangement and color indicated a very distant object," said Kneib. "The existence of two images of the same object indicated that the phenomenon of gravitational lensing was at work."

The key to the new discovery is the effect the Abell 2218 cluster's gigantic mass has on light passing by it. As a consequence of Einstein's theory of relativity, light is bent and can be focused in a predictable way due to the warpage of space-time near massive objects. In this case the phenomenon actually magnifies and produces multiple images of the same source. The new source in Abell 2218 is magnified by a factor of 25.

The usefulness of gravitational lensing in cosmology was first pointed out in 1937 by the Caltech astronomer Fritz Zwicky, who even suggested it could be used to discover distant galaxies that would otherwise be too faint to be seen.

"The galaxy we have discovered is extremely faint, and verifying its distance has been an extraordinarily challenging adventure," Kneib added. "Without the magnification of 25 afforded by the foreground cluster, this early object could simply not have been identified or studied in any detail with presently available telescopes. Indeed, even with aid of the cosmic lens, our study has only been possible by pushing our current observatories to the limits of their capabilities."

Using the unique combination of the high resolution of Hubble and the magnification of the cosmic lens, the researchers estimate that the galaxy is small--perhaps measuring only 2,000 light-years across--but forming stars at an extremely high rate.

An intriguing property of the new galaxy is the apparent lack of the typically bright hydrogen emission seen in many distant objects. Also, its intense ultraviolet signal is much stronger than that seen in later star-forming galaxies, suggesting that the galaxy may be composed primarily of massive stars.

"The unusual properties of this distant source are very tantalizing because, if verified by further study, they could represent those expected for young stellar systems that ended the dark ages," said Richard Ellis, Steele Family Professor of Astronomy, and a coauthor of the article.

The term "Dark Ages" was coined by the British astronomer Sir Martin Rees to signify the period in cosmic history when hydrogen atoms first formed but stars had not yet had the opportunity to condense and ignite. Nobody is quite clear how long this phase lasted, and the detailed study of the cosmic sources that brought this period to an end is a major goal of modern cosmology.

The team plans to continue the search for additional extremely distant galaxies by looking through other cosmic lenses in the sky.

"Estimating the abundance and characteristic properties of sources at early times is particularly important in understanding how the Dark Ages came to an end," said Mike Santos, a former Caltech graduate student involved in the discovery and now a postdoctoral researcher at the Institute of Astronomy in Cambridge, England. "We are eager to learn more by finding further examples, although it will no doubt be challenging."

The Caltech team reporting on the discovery consists of Kneib, Ellis, Santos, and Johan Richard. Kneib and Richard are also affiliated with the Observatoire Midi-Pyrenees of Toulouse, France. Santos is also at the Institute of Astronomy, in Cambridge.

The research was funded in part by NASA.

The W. M. Keck Observatory is managed by the California Association for Research in Astronomy, a scientific partnership between the California Institute of Technology, the University of California, and NASA. For more information, visit the observatory online at www.keckobservatory.org.

Writer: 
RT

Zombie Behaviors Are Part of Everyday Life, According to Neurobiologists

PASADENA, Calif.--When you're close to that woman you love this Valentine's Day, her fragrance may cause you to say to yourself, "Hmmm, Chanel No. 5," especially if you're the suave, sophisticated kind. Or if you're more of a missing link, you may even say to yourself, "Me want woman." In either case, you're exhibiting a zombie behavior, according to the two scientists who pioneered the scientific study of consciousness.

Longtime collaborators Christof Koch and Francis Crick (of DNA helix fame) think that "zombie agents"--that is, routine behaviors that we perform constantly without even thinking--are so much a central facet of human consciousness that they deserve serious scientific attention. In a new book titled The Quest for Consciousness: A Neurobiological Approach, Koch writes that interest in the subject of zombies has nothing to do with fiction, much less the supernatural. Crick, who for the last 13 years has collaborated with Koch on the study of consciousness, wrote the foreword of the book.

The existence of zombie agents highlights the fact that much of what goes on in our heads escapes awareness. Only a subset of brain activity gives rise to conscious sensations, to conscious feelings. "What is the difference between neuronal activity associated with consciousness and activity that bypasses the conscious mind?" asks Koch, a professor at the California Institute of Technology and head of the Computation and Neural Systems program.

Zombie agents include everything from keeping the body balanced, to unconsciously estimating the steepness of a hill we are about to climb, to driving a car, riding a bike, and performing other routine yet complex actions. We humans couldn't function without zombie agents, whose key advantage is that reaction times are kept to a minimum. For example, if a pencil is rolling off the table, we are quite able to grab it in midair, and we do so by executing an extremely complicated set of mental operations. And zombie agents might also be involved, by way of smell, in how we choose our sexual partners.

"Zombie agents control your eyes, hands, feet, and posture, and rapidly transduce sensory input into stereotypical motor output," writes Koch. "They might even trigger aggressive or sexual behavior when getting a whiff of the right stuff.

"All, however, bypass consciousness," Koch adds. "This is the zombie in you."

Zombie actions are but one of a number of topics that Koch and Crick have investigated since they started working together on the question of the brain basis of consciousness. Much of the book concerns perceptual experiments in normal people, patients, monkeys, and mice that address the neuronal underpinnings of thoughts and actions.

As Crick points out in his foreword, consciousness is the major unsolved problem in biology. The Quest for Consciousness describes Koch and Crick's framework for coming to grips with the ancient mind-body problem. At the heart of their framework is discovering and characterizing the neuronal correlates of consciousness, the subtle, flickering patterns of brain activity that underlie each and every conscious experience.

The Quest for Consciousness: A Neurobiological Approach will be available in bookstores on February 27. For more information, see www.questforconsciousness.com. For review copies, contact Ben Roberts at Roberts & Company Publishers at (303) 221-3325, or send an e-mail to bwr@roberts-publishers.com.

Writer: 
Robert Tindol

Caltech Engineers Design a Revolutionary Radar Chip

PASADENA, Calif. -- Imagine driving down a twisty mountain road on a dark foggy night. Visibility is near-zero, yet you still can see clearly. Not through your windshield, but via an image on a screen in front of you.

Such a built-in radar system in our cars has long been in the domain of science fiction, as well as wishful thinking on the part of commuters. But such gadgets could become available in the very near future, thanks to the High Speed Integrated Circuits group at the California Institute of Technology.

The group is directed by Ali Hajimiri, an associate professor of electrical engineering. Hajimiri and his team have used revolutionary design techniques to build the world's first radar on a chip--specifically, they have implemented a novel antenna array system on a single, silicon chip.

Hajimiri notes, however, that calling it a "radar on a chip" is a bit misleading because it's not just radar. Because the team essentially redesigned the chip from the ground up, the technology is versatile enough to be used for a wide range of applications.

The chip can, for example, serve as a wireless, high-frequency communications link, providing a low-cost replacement for the optical fibers that are currently used for ultrafast communications. Hajimiri's chip runs at 24 GHz (24 billion cycles in one second), an extremely high speed, which makes it possible to transfer data wirelessly at speeds available only to the backbone of the Internet (the main network of connections that carry most of the traffic on the Internet).

Other possible uses:

* In cars, an array of these chips--one each in the front, the back, and each side--could provide a smart cruise control, one that wouldn't just keep the pedal to the metal, but would brake for a slowing vehicle ahead of you, avoid a car that's about to cut you off, or dodge an obstacle that suddenly appears in your path.

While there are other radar systems in development for cars, they consist of a large number of modules that use more exotic and expensive technologies than silicon. Hajimiri's chip could prove superior because of its fully integrated nature. That allows it to be manufactured at a substantially lower price, and makes the chip more robust in response to design variations and changes in the environment, such as heat and cold.

* The chip could serve as the brains inside a robot capable of vacuuming your house. While such appliances now exist, a vacuum using Hajimiri's chip as its brain would clean without constantly bumping into everything, have the sense to stay out of your way, and never suck up the family cat.

* A chip the size of a thumbnail could be placed on the roof of your house, replacing the bulky satellite dish or the cable connections for your DSL. Your picture could be sharper, and your downloads lightning fast.

* A collection of these chips could form a network of sensors that would allow the military to monitor a sensitive area, eliminating the need for constant human patrolling and monitoring.

In short, says Hajimiri, the technology will be useful for numerous applications, limited only by an entrepreneur's imagination.

Perhaps the best thing of all is that these chips are cheap to manufacture, thanks to the use of silicon as the base material. "Traditional radar costs a couple of million dollars," says Hajimiri. "It's big and bulky, and has thousands of components. This integration in silicon allows us to make it smaller, cheaper, and much more widespread."

Silicon is the ubiquitous element used in numerous electronic devices, including the microprocessor inside our personal computers. It is the second most abundant element in the earth's crust (after oxygen), and components made of silicon are cheap to make and are widely manufactured. "In large volumes, it will only cost a few dollars to manufacture each of these radar chips," he says.

"The key is that we can integrate the whole system into one chip that can contain the entire high-frequency analog and high-speed signal processing at a low cost," says Hajimiri. "It's less powerful than the conventional radar used for aviation, but, since we've put it on a single, inexpensive chip, we can have a large number of them, so they can be ubiquitous."

Hajimiri's radar chip, which contains both a transmitter and a receiver (more accurately, it is a phased-array transceiver), works much like a conventional array of antennas. But unlike conventional radar, which involves the mechanical movement of hardware, the chip steers its beam electrically, pointing the signal in a given direction in space without any mechanical movement.

For communications systems, this ability to steer a beam will provide a clear signal and will clear up the airwaves. Cell phones, for example, radiate their signal omnidirectionally. That's what contributes to interference and clutter in the airwaves. "But with this technology you can focus the beams in the desired direction instead of radiating power all over the place and creating additional interference," says Hajimiri. "At the same time you're maintaining a much higher speed and quality of service."
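
A minimal sketch of the beam-steering principle described above, for an illustrative uniform linear array rather than Hajimiri's actual 24-GHz design: steering is achieved purely by giving each antenna element a progressively larger phase offset, with no moving parts.

```python
import math

C = 3.0e8  # speed of light, m/s

def element_phases(n_elements, spacing_m, freq_hz, steer_deg):
    """Return the progressive phase offsets (radians) that make a uniform
    linear array add its signals coherently toward steer_deg off boresight."""
    wavelength = C / freq_hz
    step = 2 * math.pi * spacing_m * math.sin(math.radians(steer_deg)) / wavelength
    return [n * step for n in range(n_elements)]

# Illustrative: an 8-element array at 24 GHz with half-wavelength spacing,
# steered 30 degrees off boresight.
wavelength = C / 24e9
print([round(p, 2) for p in element_phases(8, wavelength / 2, 24e9, 30.0)])
```

Changing the steering angle only changes the phase offsets, which is why such an array can redirect its beam electronically many times per second.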

Hajimiri's research interest is in designing integrated circuits for both wired and wireless high-speed communications systems. (An integrated circuit combines many electronic components and functions on a single chip.) Most silicon chips have a single circuit or signal path that a signal will follow; Hajimiri's innovation lies in multiple, parallel circuits on a chip that operate in harmony, thus dramatically increasing speed and overcoming the speed limitations that are inherent in silicon.

Hajimiri says there's already a lot of buzz about his chip, and he hasn't even presented a peer-reviewed paper yet. He'll do so next week at the International Solid-State Circuits Conference in San Francisco.

Note to editors: Color pictures of the tiny chip, juxtaposed against a penny, are available.

Media Contact: Mark Wheeler (626) 395-8733 wheel@caltech.edu

Visit the Caltech Media Relations website at http://pr.caltech.edu/media

Writer: 
MW

New Tool for Reading a Molecule's Blueprints

Just as astronomers image very large objects at great distances to understand what makes the universe tick, biologists and chemists need to image very small molecules to understand what makes living systems tick.

Now this quest will be enhanced by a $14,206,289 gift from the Gordon and Betty Moore Foundation to the California Institute of Technology, which will allow scientists at Caltech and Stanford University to collaborate on the building of a molecular observatory for structural molecular biology.

The observatory, to be built at Stanford, is a kind of ultrapowerful X-ray machine that will enable scientists from both institutions and around the world to "read" the blueprints of so-called macromolecules down at the level of atoms. Macromolecules, large molecules that include proteins and nucleic acids (DNA and RNA), carry out the fundamental cellular processes responsible for biological life. By understanding their makeup, scientists can glean how they interact with each other and their surroundings, and subsequently determine how they function. This knowledge, while of inherent importance to the study of biology, could also have significant practical applications, including the design of new drugs.

The foundation of this discovery process, says Doug Rees, a Caltech professor of chemistry, an investigator for the Howard Hughes Medical Institute, and one of the principal investigators of the project, is that "if you want to know how something works, you first need to know what it looks like.

"That's why we're excited about the molecular observatory," he says, "because it will allow us to push the boundary of structural biology to define the atomic-scale blueprints of macromolecules that are responsible for these critical cellular functions. This will include the technically demanding analyses of challenging biochemical targets, such as membrane proteins and large macromolecular assemblies, that can only be achieved using such a high-intensity, state of the art observatory."

The primary experimental approach for structural molecular biology is the use of X-ray beams, which can reveal the three-dimensional structure of a molecule. This is done by blasting a beam of X-rays through a crystallized sample of the molecule and then analyzing the pattern of the scattered beam. According to Keith Hodgson, a Stanford professor and director of the facility where the new observatory will be built, "synchrotrons are powerful tools for such work, because they generate extremely intense, focused X-ray radiation many millions of times brighter than available from a normal X-ray tube." Synchrotron radiation consists of the visible and invisible forms of light produced by electrons circulating in a storage ring at nearly the speed of light. Part of the spectrum of synchrotron radiation lies in the X-ray region; the radiation is used to investigate various forms of matter at the molecular and atomic scales, using approaches pioneered in part by Linus Pauling during his time as a faculty member at Caltech in the 1950s and 1960s.
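
The scattering analysis described above rests on Bragg's law (a standard relation in crystallography, not something specific to this project): X-rays reflecting off parallel planes of atoms spaced a distance d apart reinforce one another only at angles theta satisfying

```latex
n\lambda = 2d\sin\theta, \qquad n = 1, 2, 3, \ldots
```

so measuring the angles at which a crystal scatters X-rays of known wavelength reveals the atomic spacings, from which the molecule's three-dimensional structure can be reconstructed.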

The new observatory, in technical terms called a beam line, will make use of the extremely bright X-rays produced by a newly installed advanced electron accelerator at the Stanford Synchrotron Radiation Laboratory (SSRL), located at the Stanford Linear Accelerator Center (SLAC). The exceptional quality and brightness of the X-ray light from this new accelerator is perfectly suited to the study of complicated biological systems. The Foundation's gift will be used by Caltech and the SSRL to design and construct a dedicated beam line at SSRL for structural molecular biology research. The X-ray source itself will be based upon a specialized device (called an in-vacuum undulator) that will produce the X-rays used to illuminate the crystalline samples. Specially designed instruments will allow fully automated sample manipulation via a robotic system and integrated software controls. Internet-based tools will allow researchers at Caltech or at remote locations to control the experiments and analyze data in real time. An on-campus center, to be built at Caltech, will give faculty and students ready access to the new beam line.

Knowing the molecular-scale blueprint of macromolecules will ultimately help answer such fundamental questions as "How are the chemical processes underlying life achieved and regulated in cells?" "How does a motor or pump work that is a millionth of a centimeter in size?" "How is information transmitted in living systems?"

"The construction of a high-intensity, state-of-the-art beam line at Stanford, along with an on-campus center here at Caltech to assist in these applications, will complement developments in cryo-electron microscopy that are underway on campus, also made possible through the support of the Gordon and Betty Moore Foundation," notes Caltech provost Steven Koonin.

The SSRL at Stanford is a national user facility operated by the U.S. Department of Energy's Office of Science. "I would like to thank the Gordon and Betty Moore Foundation for this generous gift [to Caltech]," said Dr. Raymond L. Orbach, director of the Office of Science, which oversees the SLAC and the SSRL. "This grant will advance the frontiers of biological science in very important and exciting ways. It also launches a dynamic collaboration between two great universities, Caltech and Stanford, at a Department of Energy research facility, thereby enhancing the investment of the federal government."

The Gordon and Betty Moore Foundation was established in November 2000, by Intel co-founder Gordon Moore and his wife Betty. The Foundation funds outcome-based projects that will measurably improve the quality of life by creating positive outcomes for future generations. Grantmaking is concentrated in initiatives that support the Foundation's principal areas of concern: environmental conservation, science, higher education, and the San Francisco Bay Area.

Writer: 
MW

Internet voting will require gradual, rational planning and experimentation, experts write

PASADENA, Calif.--Will Internet voting be a benefit to 21st-century democracy, or could it lead to additional election debacles like the one that occurred in 2000?

According to two experts on voting technology, the use of the Internet for voting can move forward in an orderly and effective way, but there should be experimentation and intelligent planning to ensure that it does so. Michael Alvarez, of the California Institute of Technology, and Thad E. Hall of the Century Foundation write in their new book that two upcoming experiments with Internet voting will provide unique data on how effective Internet voting can be in improving the election process.

On February 7, 2004, the Michigan Democratic Party will allow voters the option of voting over the Internet when casting their ballots in the party caucus. Then, for the presidential election on November 2, voters covered by the Uniformed and Overseas Citizens Absentee Voting Act who are registered in participating states will be able to vote over the Internet, thanks to the Federal Voting Assistance Program's Secure Electronic Registration and Voting Experiment (SERVE).

In their book Point, Click, and Vote: The Future of Internet Voting (Washington, D.C.: Brookings Institution Press, 2004), Alvarez and Hall outline a step-by-step approach to moving forward with Internet voting. Their approach focuses primarily on the need for experimentation. Hall notes, "The transition to the widespread use of Internet voting cannot, and should not, occur overnight. There must be a deliberate strategy--involving experimentation and research--that moves along a rational path to Internet voting."

Alvarez and Hall base their conclusions on four key points:

* There should be a series of well-planned, controlled experiments testing the feasibility of Internet voting, targeting either special populations of voters--such as military personnel, individuals living abroad, or people with disabilities--or special types of elections, such as low-turnout local elections.

* Internet security issues must be studied more effectively so that voters can have confidence in the integrity of online voting.

* Legal and regulatory changes must be studied to see what is needed to make Internet voting a reality, especially in the United States. Election law in America varies at the state, county, and local levels, and it is likely that laws in many states will have to be changed to make Internet voting possible.

* The digital divide must be narrowed, so that all voters will have a more equal opportunity to vote over the Internet. Competitive pricing and market forces will help to lower barriers to becoming a part of the online community.

As Alvarez notes, "There were Internet voting trials conducted in 2000, but no meaningful data were collected, making it impossible to know whether they were a success. The 2004 Internet voting trials provide an opportunity to collect the data necessary to understand how Internet voting impacts the electoral process."

Alvarez is a professor of political science at Caltech and is co-director of the Caltech/MIT Voting Technology Project. He was a lead author of Voting: What Is, What Could Be, which was published by the project after the 2000 elections. He has published several books on voting behavior and written numerous articles on the topic as well. In 2001, he testified before Congress about election reform and has appeared as an expert witness in election-related litigation. Alvarez has a Ph.D. in political science from Duke University.

Hall is a program officer with the Century Foundation. He served on the professional staff of the National Commission on Federal Election Reform, where he wrote an analysis of the administration of the 2001 Los Angeles mayoral election, "LA Story: The 2001 Mayoral Election," that was published by the Century Foundation. He has written about voting and election administration for both academic and popular audiences and has testified before Congress on the topic. His forthcoming book examining the policy process in Congress, Authorizing Policy, will be published later this year by the Ohio State University Press. He has a Ph.D. in political science and public policy from the University of Georgia.

Writer: 
Robert Tindol

Astronomers measure distance to star celebrated in ancient literature and legend

PASADENA--The cluster of stars known as the Pleiades is one of the most recognizable objects in the night sky, and for millennia it has been celebrated in literature and legend. Now a group of astronomers has obtained a highly accurate distance to one of the stars of the Pleiades, known since antiquity as Atlas. The new results will be useful in the longstanding effort to improve the cosmic distance scale, as well as in research on the stellar life cycle.

In the January 22 issue of the journal Nature, astronomers from the California Institute of Technology and the Jet Propulsion Laboratory report the best-ever distance to the double star Atlas. The star, its "wife" Pleione, and their daughters, the "seven sisters," are the principal stars of the Pleiades visible to the unaided eye, although there are actually thousands of stars in the cluster. Atlas, according to the team's decade of careful interferometric measurements, is somewhere between 434 and 446 light-years from Earth.

The range of distance to the Pleiades cluster may seem somewhat imprecise, but in fact is accurate by astronomical standards. The traditional method of measuring distance is by noting the precise position of a star and then measuring its slight change in position when Earth itself has moved to the other side of the sun. This approach can also be used to find distance on Earth. If you carefully record the position of a tree an unknown distance away, move a specific distance to your side, and measure how far the tree has apparently "moved," it's possible to calculate the actual distance to the tree by using trigonometry.
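
In equation form (a standard relation, not given in the article): if you step sideways by a baseline b and the object's apparent direction shifts by a small angle p, then

```latex
d = \frac{b}{\tan p} \approx \frac{b}{p}\ \ (\text{for small } p),
\qquad
d[\mathrm{parsecs}] = \frac{1}{p[\mathrm{arcseconds}]}\ \ (\text{for a baseline of one astronomical unit}).
```

For stars the baseline is Earth's orbit, and at the Pleiades' distance of roughly 440 light-years (about 135 parsecs) the parallax angle is only about 0.007 arcseconds, which is why such measurements are so delicate.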

However, this procedure gives only a rough estimate of the distance to even the nearest stars, owing to the gigantic distances involved and the subtlety of the changes in stellar position that must be measured. The team's new measurement also settles a controversy that arose when the European satellite Hipparcos provided a distance to the Pleiades so much smaller than previously assumed that the finding contradicted theoretical models of the life cycles of stars.

This contradiction comes down to the physical relationship between a star's luminosity, its apparent brightness, and its distance. A 100-watt light bulb one mile away looks exactly as bright as a 25-watt light bulb half a mile away. So to figure out the wattage of a distant light bulb, we have to know how far away it is. Similarly, to figure out the "wattage" (luminosity) of observed stars, we have to measure how far away they are. Theoretical models of the internal structure and nuclear reactions of stars of known mass also predict their luminosities, so theory and measurement can be compared.
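
The light-bulb comparison is the inverse-square law for apparent brightness, F = L / (4 pi d^2); with the numbers above,

```latex
\frac{100\ \mathrm{W}}{4\pi(1\ \mathrm{mi})^{2}} = \frac{25\ \mathrm{W}}{4\pi(0.5\ \mathrm{mi})^{2}},
```

so an independent distance measurement is needed to convert a star's apparent brightness into its true luminosity before it can be compared with the theoretical models.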

However, the Hipparcos data provided a distance lower than that assumed from the theoretical models, thereby suggesting either that the Hipparcos distance measurements themselves were off, or else that there was something wrong with the models of the life cycles of stars. The new results show that the Hipparcos data was in error, and that the models of stellar evolution are indeed sound.

The new results come from careful observation of the orbit of Atlas and its companion--a binary relationship that wasn't conclusively demonstrated until 1974 and certainly was unknown to ancient watchers of the sky. Using data from the Mt. Wilson stellar interferometer (located next to the historic Mt. Wilson Observatory in the San Gabriel range) and the Palomar Testbed Interferometer at Caltech's Palomar Observatory in San Diego County, the team determined a precise orbit of the binary. Interferometry is an advanced technique that allows, among other things, for the "splitting" of two bodies that are so far away that they normally appear as a single blur, even in the biggest telescopes.

Knowing the orbital period and combining it with orbital mechanics allowed the team to infer the physical distance between the two bodies, and with this information, to calculate the distance from the binary to Earth.

"For many months I had a hard time believing our distance estimate was 10 percent larger than that published by the Hipparcos team," said the lead author, Xiao Pei Pan of JPL. "Finally, after intensive rechecking, I became confident of our result."
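
The logic of that distance inference can be summarized in simplified form (this is an outline of the standard method, not the team's actual joint fit): Kepler's third law gives the physical size of the binary orbit from its period and the stellar masses constrained by the orbital solution, and comparing that size with the measured angular extent of the orbit yields the distance,

```latex
a^{3} = \frac{G\,(M_{1}+M_{2})}{4\pi^{2}}\,P^{2},
\qquad
D \approx \frac{a}{\theta},
```

where a is the orbit's semimajor axis, P its period, and theta its angular size in radians as measured by the interferometers.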

Coauthor Shrinivas Kulkarni, MacArthur Professor of Astronomy and Planetary Science at Caltech, said, "Our distance estimate shows that all is well in the heavens. Stellar models used by astronomers are vindicated by our value."

"Interferometry is a young technique in astronomy and our result paves the way for wonderful returns from the Keck Interferometer and the anticipated Space Interferometry Mission that is expected to be launched in 2009," said coauthor Michael Shao of JPL. Shao is also the principal scientist for the Keck Interferometer and the Space Interferometry Mission.

The Palomar Testbed Interferometer was designed and built by a team of researchers from JPL led by Shao and JPL engineer Mark Colavita. Funded by NASA, the interferometer is located at the Palomar Observatory near the historic 200-inch Hale Telescope. The device served as an engineering testbed for the interferometer that now links the 10-meter Keck Telescopes atop Mauna Kea in Hawaii.

Writer: 
Robert Tindol

Caltech geophysicists gain new insights on Earth's core–mantle boundary

Earth's core–mantle boundary is a place none of us will ever go, but researchers using a special high-velocity cannon have produced results showing there may be molten rock at this interface, about 1,800 miles below the surface. Further, this molten rock may have rested peacefully at the core–mantle boundary for eons.

In a presentation at the fall meeting of the American Geophysical Union (AGU) today, California Institute of Technology geophysics professor Tom Ahrens reports new measurements of the density and temperature of magnesium silicate--the stuff found in Earth's interior--when it is subjected to the conditions that exist at the planet's core–mantle boundary.

The Caltech team did their work in the institute's shock wave laboratory, where an 80-foot light-gas gun is specially prepared to fire one-ounce tantalum-faced plastic bullets at mineral samples at speeds up to 220 thousand feet per second--about a hundred times faster than a bullet fired from a conventional rifle. The 30-ton apparatus uses compressed hydrogen as a propellant, and the resulting impact replicates the 1.35 million atmospheres of pressure and the 8,500 degrees Fahrenheit temperature that exist at the core–mantle boundary.

The measurements were conducted using natural, transparent, semiprecious gem crystals of enstatite from Sri Lanka, as well as synthetic glass of the same composition. Upon compression, these materials transform to a 30-percent-denser structure called perovskite, which also dominates Earth's lower mantle at depths from 415 miles down to the core–mantle boundary.

According to Ahrens, the results "have significant implications for understanding the core–mantle boundary region in the Earth's interior, the interface between rocky mantle and metallic core." The report represents the work of Ahrens and assistant professor of geology and geochemistry Paul Asimow, along with graduate students Joseph Akins and Shengnian Luo.

The researchers demonstrated by two independent experimental methods that the major mineral of Earth's lower mantle, magnesium silicate in the perovskite structure, melts at the pressure of the core–mantle boundary to produce a liquid whose density is greater than or equal to the mineral itself. This implies that a layer of partially molten mantle would be gravitationally stable over geologic times at the boundary, where seismologists have discovered anomalous features best explained by the presence of partial melt.

Two types of experiments were conducted: pressure-density experiments and shock temperature measurements. In the pressure-density experiments, the velocity of the projectile prior to impact and the velocity of the shock wave passing through the target after impact are measured using high-speed optical and x-ray photography. These measurements allow calculation of the pressure and density of the shocked target material. In shock temperature measurements, thermal emission from the shocked sample at visible and near-infrared wavelengths is monitored with a six-channel pyrometer, and the brightness and spectral shape are converted to temperature.
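
As a minimal sketch of how those two measured velocities become a pressure and a density (this uses the standard Rankine-Hugoniot jump conditions with illustrative numbers, not the team's actual data):

```python
def hugoniot_state(rho0, shock_velocity, particle_velocity):
    """Pressure (Pa) and density (kg/m^3) behind a shock front, from the
    Rankine-Hugoniot conservation relations for an initially unstressed sample.

    rho0              : initial sample density, kg/m^3
    shock_velocity    : measured shock-wave speed in the sample, m/s
    particle_velocity : material speed behind the shock, m/s (inferred from
                        the measured projectile velocity by impedance matching)
    """
    pressure = rho0 * shock_velocity * particle_velocity
    density = rho0 * shock_velocity / (shock_velocity - particle_velocity)
    return pressure, density

# Illustrative values only: roughly enstatite-like starting density and
# kilometer-per-second-scale velocities.
p, rho = hugoniot_state(3200.0, 11000.0, 4000.0)
print(f"pressure ~ {p / 1e9:.0f} GPa, density ~ {rho:.0f} kg/m^3")
```

With these illustrative inputs the shocked pressure comes out near 140 gigapascals, roughly 1.4 million atmospheres, the same regime as the core–mantle boundary conditions discussed above.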

In both types of experiments, the shock wave takes about one ten-millionth of a second to pass through the dime-sized sample, and the velocity and optical emission measurements must resolve this extremely short duration event.

The pressure-density experiments yielded a surprising result. When the glass starting material is subjected to increasingly strong shocks, densities are at first consistent with the perovskite structure, and then a transition is made to a melt phase at a pressure of 1.1 million atmospheres. As expected for most materials under ordinary conditions, the melt phase is less dense than the solid. Shock compression of the crystal starting material, however, follows a lower temperature path, and the transition from perovskite shock states to molten shock states does not occur until a pressure of 1.7 million atmospheres is reached. At this pressure, the liquid appears to be 3 to 4 percent denser than the mineral. Like water and ice at ordinary pressure and 32 °F, under these high-pressure conditions the perovskite solid would float and the liquid would sink.

Just as the negative volume change on the melting of water ice is associated with a negative slope of the melting curve in pressure-temperature space (which is why ice-skating works--the pressure of the skate blade transforms ice to water at a temperature below the ordinary freezing point), this result implies that the melting curve of perovskite should display a maximum temperature somewhere between 1.1 and 1.7 million atmospheres, and a negative slope at 1.7 million atmospheres. This implication of the pressure-density results was tested using shock temperature measurements. In a separate series of experiments on the same starting materials, analysis of the emitted light constrained the melting temperature at 1.1 million atmospheres to about 9,900 °F. However, at the higher pressure of 1.7 million atmospheres, the melting point is 8,500 °F. This confirms that somewhere above 1.1 million atmospheres, the melting temperature begins to decrease with increasing pressure and the melting curve has a negative slope.
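
The argument in the preceding paragraph is an application of the Clausius-Clapeyron relation (a standard thermodynamic identity, not a new result of this work), which gives the slope of a melting curve as

```latex
\frac{dT_{m}}{dP} = \frac{\Delta V_{\mathrm{melt}}}{\Delta S_{\mathrm{melt}}}.
```

Melting always increases entropy, so the denominator is positive; once the melt becomes denser than the solid, the volume change is negative and the melting temperature must fall with increasing pressure, which is the negative slope inferred between 1.1 and 1.7 million atmospheres.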

Taking the results of both the pressure-density and shock temperature experiments together confirms that the molten material may be neutrally or slightly negatively buoyant at the pressure of the base of the mantle, which is 1.35 million atmospheres. Molten perovskite would, however, still be much less dense than the molten iron alloy of the core. If the mantle were to melt near the core–mantle boundary, the liquid silicate could be gravitationally stable in place or could drain downward and pond immediately above the core–mantle boundary.

The work was motivated by the 1995 discovery of ultralow velocity zones at the base of Earth's mantle by Donald Helmberger, who is the Smits Family Professor of Geophysics and Planetary Science at Caltech, and Edward Garnero, who was then a Caltech graduate student and is now a professor at Arizona State University. These ultralow velocity zones (notably underneath the mid-Pacific region) appear to be 1-to-30-mile-thick layers of very low-seismic-velocity rock just above the interface between Earth's rocky mantle and the liquid core, at a depth of 1,800 miles.

Helmberger and Garnero showed that, in this zone, seismic shear waves suffer a 30 percent decrease in velocity, whereas compressional wave speeds decrease by only 10 percent. This behavior is widely attributed to the presence of some molten material. Initially, many researchers assumed that this partially molten zone might represent atypical mantle compositions, such as a concentration of iron-bearing silicates or oxides with a lower melting point than ordinary mantle--about 7,200 °F at this pressure.

The new results, however, indicate that the melting temperature of normal mantle composition is low enough to explain melting in the ultralow velocity zones, and that this melt could coexist with residual magnesium silicate perovskite solids. Thus the new Caltech results indicate that no special composition is required to induce an ultralow velocity zone just above the core–mantle boundary or to allow it to remain there without draining away. The patchiness of the ultralow velocity zones suggests that Earth's lowermost mantle temperatures can be just hotter than, or just cooler than, the temperature that is required to initiate melting of normal mantle at a depth of 1,800 miles.

Writer: 
Robert Tindol
