The Brain Can Make Errors in Reassembling the Color and Motion of Objects

PASADENA, Calif.—You're driving along in your car and catch a glimpse of a green SUV out of the corner of your eye. A few seconds later, you glance over, and to your surprise discover that the SUV is actually brown.

You may assume this is just your memory playing tricks on you, but new research from psychophysicists at the California Institute of Technology and the Helmholtz Institute in the Netherlands suggests that initial perceptions themselves can contain misassigned colors. This can happen in certain cases where the brain uses what it sees in the center of vision and then rearranges the colors in peripheral vision to match.

In an article appearing in this week's journal Nature, Caltech graduate student Daw-An Wu, Caltech professor of biology Shinsuke Shimojo, and Ryota Kanai of the Helmholtz Institute report that the color of an object can be misassigned even as observers are intently watching an ongoing event because of the way the brain combines the perceptions of motion and color. Because different parts of the brain are responsible for dealing with motion and color perception, mistakes in "binding" can occur, where the motion from one object is combined with the color of another object.

This is demonstrated when observers gaze steadily at a computer screen on which red and green dots are in upward and downward motion. In the center area of the screen, all the red dots are moving upward while all the green dots are moving downward.

Unknown to the observers, however, the researchers are able to control the motion of the red and green dots at the periphery of the screen. In other words, the red and green dots are moving in a certain direction in the center area of the screen, but their motion is partially or even wholly reversed on each side.
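
A minimal sketch of this kind of display can be written in a few lines of Python; the dot counts, speeds, and the extent of the peripheral reversal below are illustrative assumptions, not the values used in the Nature study:

    # Sketch of a misbinding-style stimulus: red dots move up and green dots move
    # down in the center, while motion in the periphery is (partly) reversed.
    # All parameter values here are assumed for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)

    WIDTH, HEIGHT = 30.0, 20.0      # display size in degrees of visual angle (assumed)
    CENTER_HALFWIDTH = 5.0          # central region is |x| < 5 degrees (assumed)
    N_DOTS = 600
    SPEED = 4.0                     # degrees per second (assumed)
    REVERSAL_FRACTION = 1.0         # 1.0 = peripheral motion fully reversed

    x = rng.uniform(-WIDTH / 2, WIDTH / 2, N_DOTS)
    y = rng.uniform(-HEIGHT / 2, HEIGHT / 2, N_DOTS)
    color = rng.choice(["red", "green"], N_DOTS)

    # In the center: red moves up (+SPEED), green moves down (-SPEED).
    vy = np.where(color == "red", SPEED, -SPEED)

    # In the periphery, reverse the motion-color pairing for a chosen fraction
    # of dots, so red moves down and green moves up there.
    peripheral = np.abs(x) > CENTER_HALFWIDTH
    flip = peripheral & (rng.uniform(size=N_DOTS) < REVERSAL_FRACTION)
    vy[flip] *= -1

    print(f"{peripheral.sum()} peripheral dots, {flip.sum()} with reversed motion")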

The observers show a significant tendency to mistake the motion of the red and green dots at the periphery. Even when the motion is completely reversed on the sides, observers report seeing the same motion all across the screen.

According to Wu, the lead author of the paper, the design of the experiment exploits the fact that different parts of the brain are responsible for processing different visual features, such as motion and color. Further, the experiment shows that the brain can be tricked into binding the information back together incorrectly.

"This illusion confirms the existence of the binding problem the brain faces in integrating basic visual features of objects, " says Wu. "Here, the information is reintegrated incorrectly because the information in the center, where our vision is strongest, vetoes contradicting (but correct) information in the periphery."

The title of the article is "Steady-State Misbinding of Color and Motion."



Robert Tindol

Physicists Successful in Trapping Ultracold Neutrons at Los Alamos National Laboratory

PASADENA, Calif.—Free neutrons are usually pretty speedy customers, buzzing along at a significant fraction of the speed of light. But physicists have created a new process to slow neutrons down to about 15 miles per hour—the pace of a world-class mile runner—which could lead to breakthroughs in understanding the physical universe at its most fundamental level.

According to Brad Filippone, a physics professor at the California Institute of Technology, he and a group of colleagues from Caltech and several other institutions recently succeeded in collecting record-breaking numbers of ultracold neutrons at the Los Alamos Neutron Science Center. The new technique resulted in about 140 neutrons per cubic centimeter, and the number could be five times higher with additional tweaking of the apparatus.

"Our principal interest is in making precision measurements of fundamental neutron properties," says Filippone, explaining that a neutron has a half-life of only 15 minutes. In other words, if a thousand neutrons are trapped, five hundred will have broken down after 15 minutes into a proton, electron, and antineutrino.

Neutrons normally exist in nature in a much more stable state within the nuclei of atoms, joining the positively charged protons to make up most of the atom's mass. Neutrons become quite unstable if they are stripped from the nucleus, but the very fact that they decay so quickly can make them useful for various experiments.

The traditional way physicists obtained slow, usable free neutrons was to let the neutrons emerging from a nuclear reactor bounce around in a material to shed energy. This procedure worked fine for slowing neutrons down to a few feet per second, but that's still pretty fast. The new technique at Los Alamos National Laboratory involves a second stage of slowdown that is impractical near a nuclear reactor but works well at a particle accelerator, where the event producing the neutrons is abrupt rather than ongoing. The process begins with smashing protons from the accelerator into a solid material such as tungsten, which knocks neutrons out of the target nuclei.

The neutrons are then slowed down as they bounce around in a nearby plastic material, and then some of them are slowed much further if they happen to enter a birthday-cake-sized block of solid deuterium (or "heavy hydrogen") that has been cooled down to a temperature a few degrees above absolute zero.

When the neutrons enter the crystal latticework of the deuterium block, they can lose virtually all their energy and emerge from the block at speeds so slow they can no longer zip right through the walls of the apparatus. The trapped ultracold neutrons bounce along the nickel walls of the apparatus and eventually emerge, where they can be collected for use in a separate experiment.

According to Filippone, the extremely slow speeds of the neutrons are important in studying their decays at a minute level of detail. The fundamental theory of particle physics known as the Standard Model predicts a specific pattern in the neutron's decay, but if the ultracold neutron experiments were to reveal slightly different behavior, then physicists would have evidence of a new type of physics, such as supersymmetry.

Future experiments could also exploit an inherent quantum limit that prevents the ultracold neutrons from bouncing any lower than about 15 microns above a flat surface--about a fifth the width of a human hair. With a cleverly designed experiment, Filippone says, this limit could lead to better knowledge of gravitational interactions at very small distances.

The next step for the experimenters is to return to Los Alamos in October. Then, they will use the ultracold neutrons to study the neutrons themselves. The research was supported by about $1 million in funding from Caltech and the National Science Foundation.
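
The roughly 15-micron bounce limit mentioned above follows from the quantum mechanics of a particle bouncing on a mirror under gravity. A rough order-of-magnitude check (a sketch, not the Los Alamos group's own calculation):

    # Rough check of the ~15-micron quantum bounce limit for a neutron resting
    # on a flat surface under gravity (a sketch, not the authors' analysis).
    hbar = 1.0546e-34   # J*s
    m = 1.675e-27       # neutron mass, kg
    g = 9.81            # m/s^2

    # Characteristic length scale of the gravitational quantum well
    l0 = (hbar**2 / (2 * m**2 * g)) ** (1 / 3)

    # The lowest bound state's classical turning point sits at |a1| * l0,
    # where a1 ~ -2.338 is the first zero of the Airy function.
    h1 = 2.338 * l0
    print(f"characteristic length: {l0 * 1e6:.1f} microns")
    print(f"lowest-state bounce height: {h1 * 1e6:.1f} microns")  # ~14 microns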


Researchers demonstrate existence of earthquake supershear phenomenon

PASADENA, Calif.--As if folks living in earthquake country didn't already have enough to worry about, scientists have now identified another rupture phenomenon that can occur during certain types of large earthquakes. The only question now is whether the phenomenon is good, bad, or neutral in terms of human impact.

Reporting in the March 19 issue of the journal Science, California Institute of Technology geophysics graduate student Kaiwen Xia, aeronautics and mechanical engineering professor Ares Rosakis, and geophysics professor Hiroo Kanamori have demonstrated for the first time that a very fast, spontaneously generated rupture known as "supershear" can take place on large strike-slip faults like the San Andreas. They base their claims on a laboratory experiment designed to simulate a fault rupture.

While calculations dating back to the 1970s have predicted that such supershear rupture phenomena may occur in earthquakes, seismologists have only recently come to accept that supershear actually occurs. The Caltech experiment is the first time that spontaneous supershear rupture has been conclusively identified in a controlled laboratory environment, demonstrating that supershear fault rupture is a very real possibility rather than a mere theoretical construct.

In the lab, the researchers forced two plates of a special polymer material together under pressure and then initiated an "earthquake" by inserting a tiny wire into the interface, which is turned into an expanding plasma by the sudden discharge of an electrical pulse. By means of high-speed photography and laser light, the researchers photographed the rupture and the stress waves as they propagated through the material.

The data shows that, under the right conditions, the rupture propagates much faster than the shear speed in the plates, producing a shock-wave pattern, something like the Mach cone of a jet fighter breaking the sound barrier.
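
The shock-wave geometry follows the same relation as a supersonic aircraft's Mach cone: the cone's half-angle satisfies sin(theta) = (shear speed)/(rupture speed). A quick illustration, using a typical crustal shear speed and a hypothetical rupture speed (neither value is taken from the Science paper):

    # Shear Mach cone half-angle for a supershear rupture: sin(theta) = c_s / v_r,
    # the same geometry as a supersonic aircraft's shock cone.
    # The rupture speed below is illustrative, not a value from the study.
    import math

    c_s = 3.5            # shear-wave speed, km/s (typical crustal value, assumed)
    v_r = 1.6 * c_s      # hypothetical supershear rupture speed

    theta = math.degrees(math.asin(c_s / v_r))
    print(f"Mach cone half-angle: {theta:.1f} degrees")  # ~38.7 degrees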

The split-second photography also shows that such ruptures may travel at about twice the rate that a rupture normally propagates along an earthquake fault. However, the ruptures do not reach supershear speeds until they have propagated a certain distance from the point where they originated. Based on the experiments, the researchers developed a theoretical model to predict how far a rupture must travel before making the transition to supershear.

In the case of a strike-slip fault like the San Andreas, the lab results indicate that the rupture needs to rip along for about 100 kilometers and the magnitude must be about 7.5 or so before the rupture becomes supershear. Large earthquakes along the San Andreas tend to be at least this large if not larger, typically involving rupture lengths of about 300 to 400 kilometers.

"Judging from the experimental result, it would not be surprising if supershear rupture propagation occurs for large earthquakes on the San Andreas fault," said Kanamori.

Similar high-speed ruptures propagating along bimaterial interfaces in engineering composite materials have been experimentally observed in the past (by Rosakis and his group, reporting in an August 1999 issue of Science). These ruptures took place under impact loading; only in the current experiment have they been initiated in an earthquake-like set-up.

According to Rosakis, an expert in crack propagation, the new results show promise for using engineering techniques to better understand the physics of earthquakes and their human impact.

According to Kanamori, the human impact of the finding is still debatable. The most damaging effect of a strike-slip earthquake is believed to be a pulse-like motion normal to the fault, produced by the combined effect of the rupture and the shear wave. The supershear rupture suppresses this pulse, which is good, but the persistent shock wave (Mach wave) emitted by the supershear rupture enhances the fault-parallel component of motion (the ground motion that runs in the same direction that the plates slip) and could amplify the destructive power of ground motion, which is bad.

The outstanding question about supershear at this point is which of these two effects dominates. "This is still being debated," says Kanamori. "We're not committed to one view or the other." Only further laboratory-level experimentation can answer this question conclusively.

Several seismologists believe that supershear was exhibited in some large earthquakes, including those that occurred in Tibet in 2001 and in Alaska in 2002. Both earthquakes were located in remote regions and had little, if any, human impact, but analysis of the evidence shows that the fault rupture propagated much faster than would normally be expected, thus implying supershear.

Robert Tindol

Most distant object in solar system discovered; could be part of never-before-seen Oort cloud

PASADENA, Calif.--A planetoid more than eight billion miles from Earth has been discovered by researchers led by a scientist at the California Institute of Technology. The new planetoid is more than three times the distance of Pluto, making it by far the most distant body known to orbit the sun.

The planetoid is well beyond the recently discovered Kuiper belt and is likely the first detection of the long-hypothesized Oort cloud. With a size approximately three-quarters that of Pluto, it is very likely the largest object found in the solar system since the discovery of Pluto in 1930.

At this extreme distance from the sun, very little sunlight reaches the planetoid and the temperature never rises above a frigid 400 degrees below zero Fahrenheit, making it the coldest known location in the solar system. According to Mike Brown, Caltech associate professor of planetary astronomy and leader of the research team, "the sun appears so small from that distance that you could completely block it out with the head of a pin."

As cold as it is now, the planetoid is usually even colder. It approaches the sun this closely only briefly during the 10,500 years it takes to revolve around the sun. At its most distant, it is 84 billion miles from the sun (900 times Earth's distance from the sun), and the temperature plummets to just 20 degrees above absolute zero.
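
These figures hang together under Kepler's third law, which relates a body's orbital period to its average distance from the sun. A rough consistency check using the distances quoted in this release:

    # Consistency check of the quoted ~10,500-year orbit using Kepler's third law,
    # P[years] = a[AU] ** 1.5, with distances taken from the figures above.
    AU_PER_BILLION_MILES = 1e9 / 93e6   # ~10.75 AU per billion miles

    perihelion_au = 8 * AU_PER_BILLION_MILES    # "more than eight billion miles"
    aphelion_au = 900.0                         # "900 times Earth's distance"

    a = (perihelion_au + aphelion_au) / 2       # semi-major axis
    period_years = a ** 1.5
    print(f"semi-major axis ~{a:.0f} AU, period ~{period_years:,.0f} years")
    # ~11,000 years, in line with the roughly 10,500-year figure quoted above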

The discoverers--Brown and his colleagues Chad Trujillo of the Gemini Observatory and David Rabinowitz of Yale University--have proposed that the frigid planetoid be named "Sedna," after the Inuit goddess who created the sea creatures of the Arctic. Sedna is thought to live in an icy cave at the bottom of the ocean--an appropriate spot for the namesake of the coldest body known in the solar system.

The researchers found the planetoid on the night of November 14, 2003, using the 48-inch Samuel Oschin Telescope at Caltech's Palomar Observatory east of San Diego. Within days, the new planetoid was being observed on telescopes in Chile, Spain, Arizona, and Hawaii; and soon after, NASA's new Spitzer Space Telescope was trained on the distant object.

The Spitzer images indicate that the planetoid is no more than 1,700 kilometers in diameter, making it smaller than Pluto. But Brown, using a combination of all of the data, estimates that the size is likely about halfway between that of Pluto and that of Quaoar, the planetoid discovered by the same team in 2002 that was previously the largest known body beyond Pluto.

The extremely elliptical orbit of Sedna is unlike anything previously seen by astronomers, but it resembles in key ways the orbits of objects in a cloud surrounding the sun predicted 54 years ago by Dutch astronomer Jan Oort to explain the existence of certain comets. This hypothetical "Oort cloud" extends halfway to the nearest star and is the repository of small icy bodies that occasionally get pulled in toward the sun and become the comets seen from Earth.

However, Sedna is much closer than expected for the Oort cloud. The Oort cloud has been predicted to begin at a distance 10 times greater even than that of Sedna. Brown believes that this "inner Oort cloud" where Sedna resides was formed by the gravitational pull of a rogue star that came close to the sun early in the history of the solar system. Brown explains that "the star would have been close enough to be brighter than the full moon and it would have been visible in the daytime sky for 20,000 years." Worse, it would have dislodged comets further out in the Oort cloud, leading to an intense comet shower, which would have wiped out any life on Earth that existed at the time.

There is still more to be learned about this newest known member of the solar system. Rabinowitz says that he has indirect evidence that there may be a moon following the planetoid on its distant travels--a possibility that is best checked with the Hubble Space Telescope--and he notes that Sedna is redder than anything known in the solar system with the exception of Mars, but no one can say why. Trujillo admits, "We still don't understand what is on the surface of this body. It is nothing like what we would have predicted or what we can currently explain."

But the astronomers are not yet worried. They can continue their studies as Sedna gets closer and brighter for the next 72 years before it begins its 10,500-year trip out to the far reaches of the solar system and back again. Brown notes, "The last time Sedna was this close to the sun, Earth was just coming out of the last ice age; the next time it comes back, the world might again be a completely different place."

Robert Tindol

Researchers discover fundamental scaling rule that differentiates primate and carnivore brains

PASADENA, Calif.--Everybody from the Tarzan fan to the evolutionary biologist knows that our human brain is more like a chimpanzee's than a dog's. But is our brain also more like a tiny lemur's than a lion's?

In one previously unsuspected way, the answer is yes, according to neuroscientists at the California Institute of Technology. In the current issue of the Proceedings of the National Academy of Sciences (PNAS), graduate student Eliot Bush and his professor, John Allman, report their discovery of a basic difference between the brains of all primates, from lemurs to humans, and all the flesh-eating carnivores, such as lions and tigers and bears.

The difference lies in how the proportion of the cortex devoted to the frontal cortex changes as species get larger. The frontal cortex is the portion of the brain just behind the forehead that has long been associated with reasoning and other "executive" functions. In carnivores, the frontal cortex simply grows in direct proportion to the rest of the cortex--in other words, a lion whose cortex is twice the size of another carnivore's also has a frontal cortex twice the size.

By contrast, primates like humans and apes tend to have a frontal cortex that gets disproportionately larger as the overall cortex increases in size. This phenomenon is known as "hyperscaling," according to Bush, the lead author of the journal article.

What this says about the human relationship to the tiny lemurs of Madagascar is that the two species likely share a developmental or structural quirk, along with all the other primates, that is absent in all the carnivores, Bush explains. "The fact that humans have a large frontal cortex doesn't necessarily mean that they are special; relatively large frontal lobes have developed independently in aye-ayes among the lemurs and spider monkeys among the New World monkeys."

Bush and Allman reached their conclusions by taking the substantial histological data from the comparative brain collection at the University of Wisconsin at Madison. The collection, accumulated over many years by neuroscientist Wally Welker, comprises painstaking data taken from well over 100 species.

Bush and Allman's innovation was taking the University of Wisconsin data and running it through special software that allowed for volume estimations of the various structures of the brain in each species. Their results compared 43 mammals (including 25 primates and 15 carnivores), which allowed them to make very accurate estimations of the hyperscaling (or the lack thereof) in the frontal cortex.

The results show that in primates the ratio of frontal cortex to the rest of the cortex is about three times higher in a large primate than in a small one. Carnivores don't have this kind of systematic variation.
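
Hyperscaling of this kind is typically quantified by fitting a line to the logarithm of frontal-cortex volume against the logarithm of the volume of the rest of the cortex and asking whether the slope exceeds one. A sketch with synthetic numbers (not the Wisconsin data) illustrates the idea:

    # How hyperscaling is typically quantified: regress log(frontal cortex volume)
    # on log(rest-of-cortex volume). A slope above 1 means the frontal fraction
    # grows with brain size. The numbers below are synthetic, for illustration only.
    import numpy as np

    rest = np.array([2.0, 8.0, 40.0, 200.0, 600.0])      # cm^3, synthetic
    frontal_primate = 0.15 * rest ** 1.18                # hyperscaling (slope > 1)
    frontal_carnivore = 0.20 * rest ** 1.00              # proportional scaling

    for name, frontal in [("primate-like", frontal_primate),
                          ("carnivore-like", frontal_carnivore)]:
        slope, _ = np.polyfit(np.log(rest), np.log(frontal), 1)
        print(f"{name}: log-log slope = {slope:.2f}")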

The hyperscaling mechanism is genetic, and was presumably present when the primates first evolved. "Furthermore, it is probably peculiar to primates," says Allman, who is Hixon Professor of Neurobiology at Caltech.

The next step will be to look at the developmental differences between the two orders of mammals by looking at gene expression differences. Much of this data is already available through the intense efforts in recent years to acquire the complete genomes of various species. The human genome, for example, is already complete, and the chimp genome is nearly so.

"We're interested in looking for genes involved in frontal cortex development. Changes in these may help explain how primates came to be different from other mammals," Bush says.

At present, the researchers have no idea what the difference is at the molecular level, but with further study they should be able to make this determination, Allman says. "It's doable."

The article is titled "The scaling of frontal cortex in primates and carnivores." For a copy of the article, contact Jill Locantore, PNAS communications specialist, at 202-334-1310, or e-mail her at

The PNAS Web site is at

For more information on Bush and Allman's research, go to the Web site


Robert Tindol

Planetary scientists find planetoid in Kuiper Belt; could be biggest yet discovered

PASADENA, Calif.—Planetary scientists at the California Institute of Technology and Yale University on Tuesday night discovered a new planetoid in the outer fringes of the solar system.

The planetoid, currently known only as 2004 DW, could be even larger than Quaoar--the current record holder in the area known as the Kuiper Belt--and is some 4.4 billion miles from Earth.

According to the discoverers, Caltech associate professor of planetary astronomy Mike Brown and his colleagues Chad Trujillo (now at the Gemini North observatory in Hawaii) and David Rabinowitz of Yale University, the planetoid was found as part of the same search program that discovered Quaoar in late 2002. The astronomers use the 48-inch Samuel Oschin Telescope at Palomar Observatory and the recently installed QUEST CCD camera, built by a consortium including Yale and the University of Indiana, to systematically study different regions of the sky each night.

Unlike Quaoar, the new planetoid hasn't yet been pinpointed on old photographic plates or other images. Because its orbit is therefore not well understood yet, it cannot be given an official name.

"So far we only have a one-day orbit," said Brown, explaining that the data covers only a tiny fraction of the orbit the object follows in its more than 300-year trip around the sun. "From that we know only how far away it is and how its orbit is tilted relative to the planets."

The tilt that Brown has measured is an astonishingly large 20 degrees, larger even than that of Pluto, which has an orbital inclination of 17 degrees and is an anomaly among the otherwise planar planets.

The size of 2004 DW is not yet certain; Brown estimates a size of about 1,400 kilometers, based on a comparison of the planetoid's luminosity with that of Quaoar. Because the distance of the object can already be calculated, its luminosity should be a good indicator of its size relative to Quaoar, provided the two objects have the same albedo, or reflectivity.

Quaoar is known to have an albedo of about 10 percent, which is slightly higher than the reflectivity of our own moon. Thus, if the new object is similar, the 1,400-kilometer estimate should hold. If its albedo is lower, then it could actually be somewhat larger; or if higher, smaller.
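
The scaling behind that statement is simple: at a fixed distance and apparent brightness, reflected flux is proportional to the albedo times the square of the diameter, so the inferred diameter varies as one over the square root of the albedo. A quick illustration anchored to the 1,400-kilometer, 10-percent figures above:

    # For a body at a fixed distance and apparent brightness, reflected flux
    # scales as albedo * diameter**2, so the inferred diameter goes as
    # 1 / sqrt(albedo). Anchored to the 1,400 km / 10% albedo figures above.
    import math

    ref_diameter_km = 1400.0
    ref_albedo = 0.10

    for albedo in (0.05, 0.10, 0.20):
        diameter = ref_diameter_km * math.sqrt(ref_albedo / albedo)
        print(f"albedo {albedo:.2f} -> diameter ~{diameter:.0f} km")
    # lower albedo -> larger inferred size; higher albedo -> smaller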

According to Brown, scientists know little about the albedos of objects this large this far away, so the true size is quite uncertain. Researchers could best make size measurements with the Hubble Space Telescope or the newer Spitzer Space Telescope. The continued discovery of massive planetoids on the outer fringe of the solar system is further evidence that objects even farther and even larger are lurking out there. "It's now only a matter of time before something is going to be discovered out there that will change our entire view of the outer solar system," Brown says.

The team is working hard to uncover new information about the planetoid, which they will release as it becomes available, Brown adds. Other telescopes will also be used to better characterize the planetoid's features.

Further information is at the following Web site:

Robert Tindol

Researchers Using Hubble and Keck Telescopes Find Farthest Known Galaxy in the Universe

PASADENA, California--The farthest known object in the universe may have been discovered by a team of astrophysicists using the Keck and Hubble telescopes. The object, a galaxy behind the Abell 2218 cluster, may be so far from Earth that its light would have left when the universe was just 750 million years old.

The discovery demonstrates again that the technique known as gravitational lensing is a powerful tool for better understanding the origin of the universe. Via further applications of this remarkable technique, astrophysicists may be able to better understand the mystery of how the so-called "Dark Ages" came to an end.

According to California Institute of Technology astronomer Jean-Paul Kneib, who is the lead author reporting the discovery in a forthcoming article in the Astrophysical Journal, the galaxy is most likely the first detected close to a redshift of 7.0, meaning that it is rushing away from Earth at an extremely high speed due to the expansion of the universe. The distance is so great that the galaxy's ultraviolet light has been stretched to the point of being observed at infrared wavelengths.

The team first detected the new galaxy in a long exposure of the Abell 2218 cluster taken with the Hubble Space Telescope's Advanced Camera for Surveys. Analysis of a sequence of Hubble images indicates a redshift of at least 6.6, but additional work with the Keck Observatory's 10-meter telescopes suggests that the astronomers have found an object whose redshift is close to 7.0.

Redshift is a measure of the factor by which the wavelength of light is stretched by the expansion of the universe. The greater the shift, the more distant the object and the earlier it is being seen in cosmic history.
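
For example, the observed wavelength equals the rest wavelength times (1 + z), so at a redshift near 7 the hydrogen Lyman-alpha line, emitted in the far ultraviolet, arrives in the near-infrared:

    # Observed wavelength = rest wavelength * (1 + z). At z ~ 7, the Lyman-alpha
    # line (rest wavelength 121.6 nm, far ultraviolet) lands in the near-infrared.
    rest_nm = 121.6
    for z in (0.0, 6.6, 7.0):
        print(f"z = {z}: observed at {rest_nm * (1 + z):.0f} nm")
    # z = 7.0: ~973 nm, which is why the galaxy is detected at infrared wavelengths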

"As we were searching for distant galaxies magnified by Abell 2218, we detected a pair of strikingly similar images whose arrangement and color indicated a very distant object," said Kneib. "The existence of two images of the same object indicated that the phenomenon of gravitational lensing was at work."

The key to the new discovery is the effect the Abell 2218 cluster's gigantic mass has on light passing by it. As a consequence of Einstein's theory of relativity, light is bent and can be focused in a predictable way due to the warpage of space-time near massive objects. In this case the phenomenon actually magnifies and produces multiple images of the same source. The new source in Abell 2218 is magnified by a factor of 25.
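
In the magnitude units astronomers use, that factor-of-25 boost in flux works out to roughly 3.5 magnitudes:

    # A lensing magnification of 25 in flux, expressed in astronomical magnitudes:
    # delta_m = 2.5 * log10(magnification).
    import math
    print(f"{2.5 * math.log10(25):.1f} magnitudes brighter")  # ~3.5 magnitudes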

The role of gravitational lensing as a useful phenomenon in cosmology was first pointed out in 1937 by the Caltech astronomer Fritz Zwicky, who even suggested it could be used to discover distant galaxies that would otherwise be too faint to be seen.

"The galaxy we have discovered is extremely faint, and verifying its distance has been an extraordinarily challenging adventure," Kneib added. "Without the magnification of 25 afforded by the foreground cluster, this early object could simply not have been identified or studied in any detail with presently available telescopes. Indeed, even with aid of the cosmic lens, our study has only been possible by pushing our current observatories to the limits of their capabilities."

Using the unique combination of the high resolution of Hubble and the magnification of the cosmic lens, the researchers estimate that the galaxy is small--perhaps measuring only 2,000 light-years across—but forming stars at an extremely high rate.

An intriguing property of the new galaxy is the apparent lack of the typically bright hydrogen emission seen in many distant objects. Also, its intense ultraviolet signal is much stronger than that seen in later star-forming galaxies, suggesting that the galaxy may be composed primarily of massive stars.

"The unusual properties of this distant source are very tantalizing because, if verified by further study, they could represent those expected for young stellar systems that ended the dark ages," said Richard Ellis, Steele Family Professor of Astronomy, and a coauthor of the article.

The term "Dark Ages" was coined by the British astronomer Sir Martin Rees to signify the period in cosmic history when hydrogen atoms first formed but stars had not yet had the opportunity to condense and ignite. Nobody is quite clear how long this phase lasted, and the detailed study of the cosmic sources that brought this period to an end is a major goal of modern cosmology.

The team plans to continue the search for additional extremely distant galaxies by looking through other cosmic lenses in the sky.

"Estimating the abundance and characteristic properties of sources at early times is particularly important in understanding how the Dark Ages came to an end," said Mike Santos, a former Caltech graduate student involved in the discovery and now a postdoctoral researcher at the Institute of Astronomy in Cambridge, England. "We are eager to learn more by finding further examples, although it will no doubt be challenging."

The Caltech team reporting on the discovery consists of Kneib, Ellis, Santos, and Johan Richard. Kneib and Richard are also affiliated with the Observatoire Midi-Pyrenees of Toulouse, France. Santos is also at the Institute of Astronomy, in Cambridge.

The research was funded in part by NASA.

The W. M. Keck Observatory is managed by the California Association for Research in Astronomy, a scientific partnership between the California Institute of Technology, the University of California, and NASA. For more information, visit the observatory online at


Zombie Behaviors Are Part of Everyday Life, According to Neurobiologists

PASADENA, Ca.--When you're close to that woman you love this Valentine's Day, her fragrance may cause you to say to yourself, "Hmmm, Chanel No. 5," especially if you're the suave, sophisticated kind. Or if you're more of a missing link, you may even say to yourself, "Me want woman." In either case, you're exhibiting a zombie behavior, according to the two scientists who pioneered the scientific study of consciousness.

Longtime collaborators Christof Koch and Francis Crick (of DNA helix fame) think that "zombie agents"--that is, routine behaviors that we perform constantly without even thinking--are so much a central facet of human consciousness that they deserve serious scientific attention. In a new book titled The Quest for Consciousness: A Neurobiological Approach, Koch writes that interest in the subject of zombies has nothing to do with fiction, much less the supernatural. Crick, who for the last 13 years has collaborated with Koch on the study of consciousness, wrote the foreword of the book.

The existence of zombie agents highlights the fact that much of what goes on in our heads escapes awareness. Only a subset of brain activity gives rise to conscious sensations, to conscious feelings. "What is the difference between neuronal activity associated with consciousness and activity that bypasses the conscious mind?" asks Koch, a professor at the California Institute of Technology and head of the Computation and Neural Systems program.

Zombie agents include everything from keeping the body balanced, to unconsciously estimating the steepness of a hill we are about to climb, to driving a car, riding a bike, and performing other routine yet complex actions. We humans couldn't function without zombie agents, whose key advantage is that reaction times are kept to a minimum. For example, if a pencil is rolling off the table, we are quite able to grab it in midair, and we do so by executing an extremely complicated set of mental operations. And zombie agents might also be involved, by way of smell, in how we choose our sexual partners.

"Zombie agents control your eyes, hands, feet, and posture, and rapidly transduce sensory input into stereotypical motor output," writes Koch. "They might even trigger aggressive or sexual behavior when getting a whiff of the right stuff.

"All, however, bypass consciousness," Koch adds. "This is the zombie in you."

Zombie actions are but one of a number of topics that Koch and Crick have investigated since they started working together on the question of the brain basis of consciousness. Much of the book concerns perceptual experiments in normal people, patients, monkeys, and mice that address the neuronal underpinnings of thoughts and actions.

As Crick points out in his foreword, consciousness is the major unsolved problem in biology. The Quest for Consciousness describes Koch and Crick's framework for coming to grips with the ancient mind-body problem. At the heart of their framework is discovering and characterizing the neuronal correlates of consciousness, the subtle, flickering patterns of brain activity that underlie each and every conscious experience.

The Quest for Consciousness: A Neurobiological Approach will be available in bookstores on February 27. For more information, see For review copies, contact Ben Roberts at Roberts & Company Publishers at (303) 221-3325, or send an e-mail to

Robert Tindol

Caltech Engineers Design a Revolutionary Radar Chip

PASADENA, Calif. -- Imagine driving down a twisty mountain road on a dark foggy night. Visibility is near-zero, yet you still can see clearly. Not through your windshield, but via an image on a screen in front of you.

Such a built-in radar system in our cars has long been in the domain of science fiction, as well as wishful thinking on the part of commuters. But such gadgets could become available in the very near future, thanks to the High Speed Integrated Circuits group at the California Institute of Technology.

The group is directed by Ali Hajimiri, an associate professor of electrical engineering. Hajimiri and his team have used revolutionary design techniques to build the world's first radar on a chip--specifically, they have implemented a novel antenna array system on a single, silicon chip.

Hajimiri notes, however, that calling it a "radar on a chip" is a bit misleading because it's not just radar. Because the team has essentially redesigned the computer chip from the ground up, the technology is revolutionary enough to be used for a wide range of applications.

The chip can, for example, serve as a wireless, high-frequency communications link, providing a low-cost replacement for the optical fibers that are currently used for ultrafast communications. Hajimiri's chip runs at 24 GHz (24 billion cycles in one second), an extremely high speed, which makes it possible to transfer data wirelessly at speeds available only to the backbone of the Internet (the main network of connections that carry most of the traffic on the Internet).
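
One reason such a high frequency matters for an on-chip antenna array is the short wavelength involved; at 24 GHz the free-space wavelength is only about 12.5 millimeters, small enough for antenna structures at chip scale. A quick check:

    # Free-space wavelength at 24 GHz: lambda = c / f, short enough that
    # antenna structures become feasible at chip scale.
    c = 3.0e8          # speed of light, m/s
    f = 24e9           # 24 GHz
    wavelength_mm = c / f * 1e3
    print(f"wavelength ~{wavelength_mm:.1f} mm")  # ~12.5 mm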

Other possible uses:

* In cars, an array of these chips--one each in the front, the back, and each side--could provide a smart cruise control, one that wouldn't just keep the pedal to the metal, but would brake for a slowing vehicle ahead of you, avoid a car that's about to cut you off, or dodge an obstacle that suddenly appears in your path.

While there are other radar systems in development for cars, they consist of a large number of modules that use more exotic and expensive technologies than silicon. Hajimiri's chip could prove superior because of its fully integrated nature. That allows it to be manufactured at a substantially lower price, and makes the chip more robust in response to design variations and changes in the environment, such as heat and cold.

* The chip could serve as the brains inside a robot capable of vacuuming your house. While such appliances now exist, a vacuum using Hajimiri's chip as its brain would clean without constantly bumping into everything, have the sense to stay out of your way, and never suck up the family cat.

* A chip the size of a thumbnail could be placed on the roof of your house, replacing the bulky satellite dish or the cable connections for your DSL. Your picture could be sharper, and your downloads lightning fast.

* A collection of these chips could form a network of sensors that would allow the military to monitor a sensitive area, eliminating the need for constant human patrolling and monitoring.

In short, says Hajimiri, the technology will be useful for numerous applications, limited only by an entrepreneur's imagination.

Perhaps the best thing of all is that these chips are cheap to manufacture, thanks to the use of silicon as the base material. "Traditional radar costs a couple of million dollars," says Hajimiri. "It's big and bulky, and has thousands of components. This integration in silicon allows us to make it smaller, cheaper, and much more widespread."

Silicon is the ubiquitous element used in numerous electronic devices, including the microprocessor inside our personal computers. It is the second most abundant element in the earth's crust (after oxygen), and components made of silicon are cheap to make and are widely manufactured. "In large volumes, it will only cost a few dollars to manufacture each of these radar chips," he says.

"The key is that we can integrate the whole system into one chip that can contain the entire high-frequency analog and high-speed signal processing at a low cost," says Hajimiri. "It's less powerful than the conventional radar used for aviation, but, since we've put it on a single, inexpensive chip, we can have a large number of them, so they can be ubiquitous."

Hajimiri's radar chip, with both a transmitter and a receiver (more accurately, a phased-array transceiver), works much like a conventional array of antennas. But unlike conventional radar, which involves the mechanical movement of hardware, this chip steers its beam electronically, pointing the signal in a given direction in space without any mechanical movement.

For communications systems, this ability to steer a beam will provide a clear signal and will clear up the airwaves. Cell phones, for example, radiate their signal omnidirectionally. That's what contributes to interference and clutter in the airwaves. "But with this technology you can focus the beams in the desired direction instead of radiating power all over the place and creating additional interference," says Hajimiri. "At the same time you're maintaining a much higher speed and quality of service."
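
The principle behind this electronic steering is that each antenna element is driven with a progressively larger phase shift, so the signals add up in the chosen direction. A brief sketch of that arithmetic (the element count and half-wavelength spacing are assumptions for illustration, not details of Hajimiri's chip):

    # Sketch of phased-array beam steering: feed each element a progressive
    # phase shift so the signals add constructively in the desired direction.
    # Element count and half-wavelength spacing are assumed for illustration.
    import math

    n_elements = 8
    spacing_wavelengths = 0.5        # element spacing in units of wavelength
    steer_angle_deg = 30.0           # desired beam direction from broadside

    phase_step = 2 * math.pi * spacing_wavelengths * math.sin(math.radians(steer_angle_deg))
    phases = [math.degrees(n * phase_step) % 360 for n in range(n_elements)]
    print("per-element phases (deg):", [f"{p:.0f}" for p in phases])
    # 0, 90, 180, 270, 0, ... : each element lags its neighbor by 90 degrees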

Hajimiri's research interest is in designing integrated circuits for both wired and wireless high-speed communications systems. (An integrated circuit is a computer chip that serves multiple functions.) Most silicon chips have a single circuit or signal path that a signal will follow; Hajimiri's innovation lies in multiple, parallel circuits on a chip that operate in harmony, thus dramatically increasing speed and overcoming the speed limitations that are inherent with silicon.

Hajimiri says there's already a lot of buzz about his chip, and he hasn't even presented a peer-reviewed paper yet. He'll do so next week at the International Solid State Circuit Conference in San Francisco.

Note to editors: Color pictures of the tiny chip, juxtaposed against a penny, are available.

Media Contact: Mark Wheeler (626) 395-8733

Visit the Caltech Media Relations website at


New Tool for Reading a Molecule's Blueprints

Just as astronomers image very large objects at great distances to understand what makes the universe tick, biologists and chemists need to image very small molecules to understand what makes living systems tick.

Now this quest will be enhanced by a $14,206,289 gift from the Gordon and Betty Moore Foundation to the California Institute of Technology, which will allow scientists at Caltech and Stanford University to collaborate on the building of a molecular observatory for structural molecular biology.

The observatory, to be built at Stanford, is a kind of ultrapowerful X-ray machine that will enable scientists from both institutions and around the world to "read" the blueprints of so-called macromolecules down at the level of atoms. Macromolecules, large molecules that include proteins and nucleic acids (DNA and RNA), carry out the fundamental cellular processes responsible for biological life. By understanding their makeup, scientists can glean how they interact with each other and their surroundings, and subsequently determine how they function. This knowledge, while of inherent importance to the study of biology, could also have significant practical applications, including the design of new drugs.

The foundation of this discovery process, says Doug Rees, a Caltech Professor of Chemistry and an investigator for the Howard Hughes Medical Institute, and one of the principal investigators of the project, is that "if you want to know how something works, you first need to know what it looks like.

"That's why we're excited about the molecular observatory," he says, "because it will allow us to push the boundary of structural biology to define the atomic-scale blueprints of macromolecules that are responsible for these critical cellular functions. This will include the technically demanding analyses of challenging biochemical targets, such as membrane proteins and large macromolecular assemblies, that can only be achieved using such a high-intensity, state of the art observatory."

The primary experimental approach for structural molecular biology is the use of X-ray beams, which can illuminate the three-dimensional structure of a molecule. This is done by sending a beam of X-rays through a crystallized sample of the molecule and then analyzing the pattern of the scattered beam. According to Keith Hodgson, a Stanford professor and director of the facility where the new observatory will be built, "synchrotrons are powerful tools for such work, because they generate extremely intense, focused X-ray radiation many millions of times brighter than available from a normal X-ray tube." Synchrotron radiation consists of the visible and invisible forms of light produced by electrons circulating in a storage ring at nearly the speed of light. Part of the spectrum of synchrotron radiation lies in the X-ray region; the radiation is used to investigate various forms of matter at the molecular and atomic scales, using approaches in part pioneered by Linus Pauling during his time as a faculty member at Caltech in the fifties and sixties.
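
The diffraction analysis described above rests on the textbook Bragg relation, n times lambda equals 2 d sin(theta), which links the X-ray wavelength, the spacing between planes of atoms in the crystal, and the angles at which scattered beams emerge. A small illustration with assumed, typical numbers (not a description of the SSRL beam line itself):

    # Textbook Bragg relation behind X-ray crystallography (illustrative only,
    # not a description of the SSRL instrument): n * lambda = 2 * d * sin(theta).
    import math

    wavelength_nm = 0.1     # ~1 angstrom X-rays (assumed, typical for crystallography)
    d_spacing_nm = 0.3      # assumed spacing between crystal planes

    theta = math.degrees(math.asin(wavelength_nm / (2 * d_spacing_nm)))
    print(f"first-order diffraction angle ~{theta:.1f} degrees")  # ~9.6 degrees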

The new observatory, in technical terms called a beam line, will make use of the extremely bright X-rays produced by a newly installed advanced electron accelerator located at Stanford's Synchrotron Radiation Laboratory (SSRL) on the Stanford Linear Accelerator Center (SLAC) site. The exceptional quality and brightness of the X-ray light from this new accelerator is perfectly suited to the study of complicated biological systems. The Foundation gift will be used by Caltech and the SSRL to design and construct a dedicated beam line at SSRL for structural molecular biology research. The X-ray source itself will be based upon a specialized device (called an in-vacuum undulator) that will produce the X-rays used to illuminate the crystalline samples. Specially designed instruments will allow fully automated sample manipulation via a robotic system and integrated software controls. Internet-based tools will allow researchers at Caltech or remote locations to control the experiments and analyze data in real time. An on-campus center to be built at Caltech will facilitate access by faculty and students to the new beam line.

Knowing the molecular-scale blueprint of macromolecules will ultimately help answer such fundamental questions as "How are the chemical processes underlying life achieved and regulated in cells?" "How does a motor or pump work that is a millionth of a centimeter in size?" "How is information transmitted in living systems?"

"The construction of a high-intensity, state-of-the-art beam line at Stanford, along with an on-campus center here at Caltech to assist in these applications, will complement developments in cryo-electron microscopy that are underway on campus, also made possible through the support of the Gordon and Betty Moore Foundation," notes Caltech provost Steven Koonin.

The SSRL at Stanford is a national user facility operated by the U.S. Department of Energy's Office of Science. "I would like to thank the Gordon and Betty Moore Foundation for this generous gift [to Caltech]," said Dr. Raymond L. Orbach, director of the Office of Science, which oversees the SLAC and the SSRL. "This grant will advance the frontiers of biological science in very important and exciting ways. It also launches a dynamic collaboration between two great universities, Caltech and Stanford, at a Department of Energy research facility, thereby enhancing the investment of the federal government."

The Gordon and Betty Moore Foundation was established in November 2000, by Intel co-founder Gordon Moore and his wife Betty. The Foundation funds outcome-based projects that will measurably improve the quality of life by creating positive outcomes for future generations. Grantmaking is concentrated in initiatives that support the Foundation's principal areas of concern: environmental conservation, science, higher education, and the San Francisco Bay Area.
