Single-Cell Recognition: A Halle Berry Brain Cell

Embargoed for release at 10 a.m., PDT, June 22, 2005

PASADENA, Calif. - World travelers can instantly identify the architectural sails of the Sydney Opera House, while movie aficionados can immediately I.D. Oscar-winning actress Halle Berry beneath her Catwoman costume or even in an artist's caricature. But how does the human brain instantly translate varied and abstract visual images into a single and consistently recognizable concept?

Now a research team of neuroscientists from the California Institute of Technology and UCLA has found that a single neuron can recognize people, landmarks, and objects--even letter strings of names ("H-A-L-L-E-B-E-R-R-Y"). The findings, reported in the current issue of the journal Nature, suggest that a consistent, sparse, and explicit code may play a role in transforming complex visual representations into long-term and more abstract memories.

"This new understanding of individual neurons as 'thinking cells' is an important step toward cracking the brain's cognition code," says co-senior investigator Itzhak Fried, a professor of neurosurgery at the David Geffen School of Medicine at UCLA, and a professor of psychiatry and biobehavioral sciences at the Semel Institute for Neuroscience and Human Behavior, also at UCLA. "As our understanding grows, we one day may be able to build cognitive prostheses to replace functions lost due to brain injury or disease, perhaps even for memory."

"Our findings fly in the face of conventional thinking about how brain cells function," adds Christof Koch, the Lois and Victor Troendle Professor of Cognitive and Behavioral Biology and professor of computation and neural systems at Caltech, and the other co-senior investigator. "Conventional wisdom views individual brain cells as simple switches or relays. In fact, we are finding that neurons are able to function more like a sophisticated computer."

The study is an example of the power of neurobiological research using data drawn directly from inside a living human brain. Most neurobiological research involves animals, postmortem tissue, or functional brain imaging in magnetic scanners. In contrast, these researchers drew data directly from the brains of eight consenting clinical patients with epilepsy at the UCLA Medical Center, who had been implanted with intracranial electrodes to identify the seizure origin for potential surgical treatment.

The team recorded responses from the medial temporal lobe, which plays a major role in human memory and is one of the first regions affected in patients with Alzheimer's disease. Responses by individual neurons appeared on a computer screen as spikes on a graph.

In the initial recording session, subjects viewed a large number of images of famous people, landmark buildings, animals, objects, and other images chosen after an interview. To keep the subjects focused, researchers asked them to push a computer key to indicate whether the image was a person. After determining which images prompted a significant response in at least one neuron, additional sessions tested response to three to eight variations of each of those images.
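The screening step described above (finding images that prompt a significant response in at least one neuron) can be sketched as a simple threshold test on spike counts. This is an illustrative sketch, not the paper's actual statistical criterion; the function name, the five-standard-deviation threshold, and all data below are assumptions.

```python
import statistics

def responsive_images(spike_counts, baseline_counts, n_sd=5.0):
    """Flag images whose median spike count exceeds baseline mean + n_sd * SD.

    spike_counts: dict mapping image name -> spike counts per presentation
    baseline_counts: spike counts from inter-trial baseline windows
    """
    mu = statistics.mean(baseline_counts)
    sd = statistics.stdev(baseline_counts)
    threshold = mu + n_sd * sd
    return [img for img, counts in spike_counts.items()
            if statistics.median(counts) > threshold]

# Hypothetical data: one neuron, spike counts per 1-second presentation window
counts = {
    "jennifer_aniston": [12, 15, 11, 14],
    "eiffel_tower":     [1, 0, 2, 1],
    "basketball":       [0, 1, 0, 0],
}
baseline = [0, 1, 2, 1, 0, 1, 2, 0, 1, 1]
print(responsive_images(counts, baseline))  # -> ['jennifer_aniston']
```

Only images that reliably drive the neuron well above its baseline firing survive the screen, which is what lets later sessions focus on variations of just those images.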

Responses varied with the person and stimulus. For example, a single neuron in the left posterior hippocampus of one subject responded to 30 out of 87 images. It fired in response to all pictures of actress Jennifer Aniston, but not at all, or only very weakly, to other famous and non-famous faces, landmarks, animals, or objects. The neuron also (and wisely, it turns out) did not respond to pictures of Jennifer Aniston together with actor Brad Pitt.

In another patient, pictures of Halle Berry activated a neuron in the right anterior hippocampus, as did a caricature of the actress, images of her in the lead role of the film Catwoman, and a letter sequence spelling her name. In a third subject, a neuron in the left anterior hippocampus responded to pictures of the landmark Sydney Opera House and Baha'í Temple, and also to the letter string "Sydney Opera," but not to other letter strings, such as "Eiffel Tower."

In addition to Koch and Fried, the research team included Rodrigo Quian-Quiroga of Caltech and UCLA, Leila Reddy of Caltech, and Gabriel Kreiman of the Massachusetts Institute of Technology.

The research was funded by grants from the National Institute of Neurological Disorders and Stroke, National Institute of Mental Health, the National Science Foundation, the Defense Advanced Research Projects Agency, the Office of Naval Research, the W. M. Keck Foundation Fund for Discovery in Basic Medical Research, a Whiteman fellowship, the Gordon Moore Foundation, the Sloan Foundation, and the Swartz Foundation for Computational Neuroscience.

MEDIA CONTACTS: Mark Wheeler, Caltech (626) 395-8733

Dan Page, UCLA (310) 794-2265


New Propane-Burning Fuel Cell Could Energize a Future Generation of Small Electrical Devices

PASADENA, Calif.--Engineers have created a propane-burning fuel cell that is nearly as small as a watch battery yet has many times its power density. Led by Sossina Haile of the California Institute of Technology, the team reports in the June 9 issue of the journal Nature that two of the cells have sufficient power to drive an MP3 player. If commercialized, such a fuel cell would have the advantage of driving the MP3 player far longer than the best lithium batteries available.

According to Haile, who is an associate professor of materials science and of chemical engineering at Caltech, the new technology was made possible by a couple of key breakthroughs in fuel-cell technology. Chief among these was a novel method of getting the fuel cell to generate enough internal heat to keep itself hot, a requirement for producing power.

"Fuel cells have been done on larger scales with hydrocarbon fuels, but small fuel cells are challenging because it's hard to keep them at the high temperatures required to get the hydrocarbon fuels to react," Haile says. "In a small device, the surface-to-volume ratio is large, and because heat that is generated in the volume is lost through the surface, you have to use a lot of insulation to keep the cell hot. Adding insulation takes away the size advantage."
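Haile's surface-to-volume point is easy to quantify: for a cube of side L, surface area grows as L squared while volume grows as L cubed, so the ratio scales as 6/L and climbs steeply as the device shrinks. A minimal sketch; the cube geometry and the sizes are illustrative assumptions, not the actual cell's dimensions:

```python
# Surface-to-volume ratio of a cube of side L: S/V = 6*L**2 / L**3 = 6/L.
# Shrinking a device tenfold raises its surface-to-volume ratio tenfold, so a
# miniature fuel cell loses proportionally far more heat through its surface
# than a large one -- hence the need to generate heat internally.
def surface_to_volume(side_cm):
    surface = 6 * side_cm ** 2   # cm^2
    volume = side_cm ** 3        # cm^3
    return surface / volume      # per cm

for side in (10.0, 1.0, 0.1):    # large stack, small cell, watch-battery scale
    print(f"side = {side:5.1f} cm  ->  S/V = {surface_to_volume(side):6.1f} per cm")
```

At watch-battery scale the ratio is a hundred times that of a tabletop stack, which is why burning a bit of the fuel to self-heat beats piling on insulation.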

The new technology tackles this problem by burning just a bit of the fuel to generate heat to maintain the fuel cell temperature. The device could probably use a variety of hydrocarbon fuels, but propane is just about perfect because it is easily compressible into a liquid and because it instantly becomes a vapor when it is released. That's exactly what makes it ideal for your backyard barbecue grill.

"Actually, there are three advances that make the technology possible," Haile says. "The first is to make the fuel cells operate with high power outputs at lower temperatures than conventional hydrocarbon-burning fuel cells. The second is to use a single-chamber fuel cell that has only one inlet for premixed oxygen and fuel and a single outlet for exhaust, which makes for a very simple and compact fuel cell system. These advances were achieved here at Caltech."

"The third involves catalysts developed at Northwestern University that cause sufficient heat release to sustain the temperature of the fuel cell." In addition, a linear counter-flow heat exchanger makes sure that the hot gases exiting from the fuel cell transfer their heat to the incoming cold inlet gases.

Although the technology is still experimental, Haile says that future collaborations with design experts should tremendously improve the fuel efficiency. In particular, she and her colleagues are working with David Goodwin, a professor of mechanical engineering and applied physics at Caltech, on design improvements. One such improvement will be to incorporate compact "Swiss roll" heat exchangers, produced by collaborator Paul Ronney at USC.

As for applications, Haile says that the sky is literally the limit. Potential applications could include the tiny flying robots in which the defense funding agency DARPA has shown so much interest in recent years. For everyday uses, the fuel cells could also provide longer-lasting sources of power for laptop computers, television cameras, and pretty much any other device in which batteries are too heavy or too short-lived.

In addition to Haile, the other authors are Zongping Shao, a postdoctoral scholar in Haile's lab; Jeongmin Ahn and Paul D. Ronney, both of USC; and Zhongliang Zhan and Scott A. Barnett, both of Northwestern.

Robert Tindol

Andromeda Galaxy Three Times Bigger in Diameter Than Previously Thought

MINNEAPOLIS--The lovely Andromeda galaxy appeared as a warm fuzzy blob to the ancients. To modern astronomers millennia later, it appeared as an excellent opportunity to better understand the universe. In the latter regard, our nearest galactic neighbor is a gift that keeps on giving.

Scott Chapman, from the California Institute of Technology, and Rodrigo Ibata, from the Observatoire Astronomique de Strasbourg in France, have led a team of astronomers in a project to map out the detailed motions of stars in the outskirts of the Andromeda galaxy. Their recent observations with the Keck telescopes show that the tenuous sprinkle of stars extending outward from the galaxy is actually part of the main disk itself. This means that the spiral disk of stars in Andromeda is three times larger in diameter than previously estimated.

At the annual summer meeting of the American Astronomical Society today, Chapman will outline the evidence that there is a vast, extended stellar disk that makes the galaxy more than 220,000 light-years in diameter. Previously, astronomers looking at the visible evidence thought Andromeda was about 70,000 to 80,000 light-years across. Andromeda itself is about 2 million light-years from Earth.

The new dimensional measure is based on the motions of about 3,000 stars at some distance from the disk that were once thought to be merely a "halo" of stars in the region and not part of the disk itself. By taking very careful measurements of their "radial velocities," the researchers were able to determine precisely how each star was moving in relation to the galaxy.

The results showed that the outlying stars are sitting in the plane of the Andromeda disk itself and, moreover, are moving at a velocity that shows them to be in orbit around the center of the galaxy. In essence, this means that the disk of stars is vastly larger than previously known.
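For stars in roughly circular orbit, the rotation speed and orbital radius imply the mass enclosed within the orbit via M = v^2 r / G. The back-of-envelope sketch below uses illustrative numbers (a typical spiral-galaxy rotation speed, and half the quoted 220,000-light-year diameter as the radius), not values from the study:

```python
# Back-of-envelope: stars in circular orbit at speed v and radius r imply an
# enclosed mass M = v**2 * r / G. All numbers are illustrative assumptions.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
LY = 9.461e15            # light-year, m

v = 200e3                # m/s, a typical spiral-galaxy rotation speed (assumed)
r = 110_000 * LY         # m, half of the 220,000-light-year diameter

mass_kg = v**2 * r / G
print(f"enclosed mass ~ {mass_kg / M_SUN:.1e} solar masses")
```

The point is that orbital motion is a mass probe: once the outlying stars were shown to rotate with the disk, their speeds and radii constrain how much matter, dark matter included, lies within their orbits.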

Further, the researchers have determined that the nature of the "inhomogeneous rotating disk" (in other words, the clumpy and blobby outer fringes of the disk) shows that Andromeda must be the result of satellite galaxies long ago slamming together. If that were not the case, the stars would be more evenly spaced.

Ibata says, "This giant disk discovery will be very hard to reconcile with computer simulations of forming galaxies. You just don't get giant rotating disks from the accretion of small galaxy fragments."

The current results, which are the subject of two papers already available and a third yet to be published, are made possible by technological advances in astrophysics. In this case, the Keck/DEIMOS multi-object spectrograph affixed to the Keck II Telescope possesses the mirror size and light-gathering capacity to image stars that are very faint, as well as the spectrographic sensitivity to obtain highly accurate radial velocities.

A spectrograph is necessary for the work because, within reasonable human time spans, the motion of stars in a faraway galaxy can be detected only by inferring whether each star is moving toward us or away from us. This is possible because a star's light contains discrete spectral lines, at frequencies set by the elements that make up the star.

If the star is moving toward us, then the light tends to cram together, so to speak, making the light higher in frequency and "bluer." If the star is moving away from us, the light has more breathing room and becomes lower in frequency and "redder."
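The blueshift-and-redshift picture above translates into a velocity through the non-relativistic Doppler relation v ≈ c Δλ / λ_rest. A minimal sketch; the spectral line and wavelengths below are illustrative assumptions, not data from the study:

```python
C = 299_792.458  # speed of light, km/s

def radial_velocity(lambda_observed, lambda_rest):
    """Non-relativistic Doppler shift in wavelength:
    positive = receding (redder), negative = approaching (bluer)."""
    return C * (lambda_observed - lambda_rest) / lambda_rest

# Illustrative example (not data from the study): a line with rest
# wavelength 8542.09 Angstroms observed slightly blueshifted.
v = radial_velocity(8535.0, 8542.09)
print(f"{v:.0f} km/s")   # negative sign: the star is moving toward us
```

Repeating this measurement for thousands of stars, and comparing the two sides of the galaxy, is what reveals the orderly rotation described next.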

If stars on one side of Andromeda appear to be coming toward us, while stars on the opposite side appear to be going away from us, then the stars can be assumed to orbit the central object.

The extended stellar disk has gone undetected in the past because stars that appear in the region of the disk could not be known to be a part of the disk until their motions were calculated. In addition, the inhomogeneous "fuzz" that makes up the extended disk does not look like a disk, but rather appears to be a fragmented, messy halo built up from many previous galaxies' crashing into Andromeda, and it was assumed that stars in this region would be going every which way.

"Finding all these stars in an orderly rotation was the last explanation anyone would think of," says Chapman.

On the flip side, finding that the bulk of the complex structure in Andromeda's outer region is rotating with the disk is a blessing for studying the true underlying stellar halo of the galaxy. Using this new information, the researchers have been able to carefully measure the random motions of stars in the stellar halo, probing its mass and the form of the elusive dark matter that surrounds it.

Although the main work was done at the Keck Observatory, the original images that posed the possibility of an extended disk were taken with the Isaac Newton Telescope's Wide-Field Camera. The telescope, located in the Canary Islands, is intended for surveys, and in the case of this study, served well as a companion instrument.

Chapman says that further work will be needed to determine whether the extended disk is merely a quirk of the Andromeda galaxy, or is perhaps typical of other galaxies.

The main paper with which today's AAS news conference is concerned will be published this year in The Astrophysical Journal with the title "On the Accretion Origin of a Vast Extended Stellar Disk Around the Andromeda Galaxy." In addition to Chapman and Ibata, the other authors are Annette Ferguson, University of Edinburgh; Geraint Lewis, University of Sydney; Mike Irwin, Cambridge University; and Nial Tanvir, University of Hertfordshire.



Robert Tindol

New Study Suggests That State Proposal Would Drive Electricity Prices Higher

PASADENA, Calif.-A new study on the organization of the wholesale electricity market, conducted by California Institute of Technology and Purdue University economists, suggests that a plan being considered by the California Energy Commission (CEC) to require electric utility companies to make public their procurement strategies would result in higher costs for utility customers.

The study uses new laboratory experimental techniques that were developed by Caltech Professor Charles Plott to study the intricate ways in which the basic laws of supply and demand work in different forms of market organization. Use of the techniques has spread rapidly around the world. One of the first to use laboratory experimental methods in economics was Caltech alumnus Vernon Smith, who was awarded the Nobel Prize for his work.

The proposal considered by the CEC requires that the utility company make available its needs for electricity, as dictated by its customers, to the companies from which the utilities must buy their electricity. While the scientific operations of the law of supply and demand are intricate and present many challenges to science, the principles operating in this case are rather transparent. Common sense tells us that there are situations in life where letting our competitors in on our game plan is a sure way of decreasing our advantage.

For example, if you're a quarterback, the best way to make sure the fans see your team score a bunch of exciting touchdowns is certainly not to invite opposing team members into your huddle. Just as you know you should withhold information from the other football team, you also know that you should hide your cards from your poker competitors, and that you should avoid telling the used-car salesman how much money you can spend on a car.

Surprisingly, this common-sense approach to withholding information in competitive situations is still open to debate in the world of resource regulation. The proposal currently being considered by the CEC would require an openness that may seem like a good idea, but the Caltech and Purdue research shows that it flies in the face of science.

"A presumption exists in regulatory circles that openness of economic processes promotes efficiency in protecting the public interest," says Charles Plott, who is the Harkness Professor of Economics and Political Science at Caltech. "That may be true in political and bureaucratic processes, but it's not true in markets."

Plott and Timothy Cason of Purdue are the authors of a new study in the journal Economic Inquiry showing that the forced disclosure of information is bad for consumers in utilities markets, and that scientific experiments back them up. Their work addresses, in part, the CEC's announcement that it "does not believe that the California ratepayers will be harmed by a more transparent system."

Plott says that this is a long-standing and fallacious assumption that contradicts the basic laws of supply and demand. Nonetheless, it seems to persist because of a confusion between the desire for information that is characteristic of regulators and the efficient workings of a market.

"At face value, openness sounds good," explains Plott. "The argument is that the public needs to know as much as possible, and that by knowing more information the public is better able to monitor a company's behavior.

"But this is just not true, and common sense eventually tells you so," Plott says. "If you think about it, forcing a utility to reveal information about its plans about procurement of power from the wholesale markets doesn't make any more sense than forcing one player to play cards face-up in a poker game."

But the science argues against such disclosure, too, Plott says. Laboratory results from Plott's Caltech experimental economics lab show that forcing the utilities to reveal confidential information regarding their energy demands to suppliers will lead to higher prices for the consumer.

In short, if the rules are changed, California consumers can expect to pay higher average prices for electricity, Plott says. The exact size of the increase will be dictated by events that are unknown now, but it is easy to imagine situations in which prices rise on the order of 7 to 8 percent, and just as easy to imagine circumstances in which the impact of the disclosures is two or three times larger. Under no circumstances would the disclosure lower prices.

In the experiments described in the Economic Inquiry paper, the researchers set up the procedure so that the volunteer test subjects would be financially motivated. The experiments were rather complicated and involved, but the objective was to test the influence of the wholesale power supplier's possession of pertinent information on the eventual price.

The experimental work showed that the manipulation of information strongly controlled the movement of the pricing equilibrium, which always shifted in favor of the informed side. In everyday English, this means that a lower price results when buyers in a competitive market are not forced to "tip their hands."

Or to put it another way, the current system that requires each competitor to guess what the other is doing will result in their trying to beat each other out for the lowest price. The party that benefits from the competition is the consumer.

On the other hand, disclosure of information results in a lack of competition. The likely result is a higher price for the consumer, which could be especially burdensome in the future if the supply and demand for power is as unpredictable as it has been in the last couple of years. In fact, the paper concludes, the disclosure of information could work even greater hardships on the public if demand is unpredictable.

The title of the paper is "Forced Information Disclosure and the Fallacy of Transparency in Markets." The paper will appear in an upcoming issue.




Robert Tindol

Caltech Neuroscientists Unlock Secrets of How the Brain Is Wired for Sex

PASADENA--There are two brain structures that a mouse just can't do without when it comes to hooking up with the mate of its dreams--and trying to stay off the lunch menu of the neighborhood cat. These are the amygdala, which is involved in the initial response to cues that signal love or war, and the hypothalamus, which coordinates the innate reproductive or defensive behaviors triggered by these cues.

Now, neuroscientists have traced out the wiring between the amygdala and hypothalamus, and think they may have identified the genes involved in laying down the wiring itself. The researchers have also made inroads in understanding how the circuitry works to make behavioral decisions, such as when a mouse is confronted simultaneously with an opportunity to reproduce and an imminent threat.

Reporting in the May 19 issue of the journal Neuron, David Anderson, Caltech's Roger W. Sperry Professor of Biology and a Howard Hughes Medical Institute investigator, his graduate student Gloria Choi, and their colleagues describe their discovery that the neural pathway between the amygdala and hypothalamus thought to govern reproductive behaviors is marked by a gene with the rather unromantic name of Lhx6.

For a confirmation that their work was on track, the researchers checked to see what the suspected neurons were doing when the mice were sexually aroused. In male mice, the smell of female mouse urine containing pheromones was already known to be a sexual stimulus, evoking such behaviors as ultrasonic vocalization, a sort of "courtship song." Therefore, the detection of neural activity in the pathway when the mouse smelled the pheromones was the giveaway.

The idea that Lhx6 actually specifies the wiring of the pathway is still based on inference, because when the researchers knocked out the gene, the mutation caused mouse embryos to die of other causes too early to detect an effect on brain wiring. But the Lhx6 gene encodes a transcription factor in a family of genes whose members are known to control the pathfinding of axons, which are tiny wires that jut out from neurons and send messages to other neurons.

The pathway between the amygdala and hypothalamus that is involved in danger avoidance appears to be marked by other genes in the same family, called Lhx9 and Lhx5. However, the function of the circuits marked by these factors is not as clear, because a test involving smells to confirm the pathways was more ambiguous than the one involving sexual attraction. The smell of a cat did not clearly light up Lhx9- or Lhx5-positive cells. Nevertheless, the fact that those cells are found in brain regions implicated in defensive behaviors suggests they might be involved in other forms of behaviors, such as aggression between male mice.

The researchers also succeeded in locating the part of the mouse brain where a circuit-overriding mechanism operates when a mouse both encounters a potential mate and perceives danger. This wiring is a place in the hypothalamus where the pathways involved in reproduction and danger avoidance converge. The details of the way the axons are laid down show that a mouse is clearly hard-wired to get out of harm's way, even when a mating opportunity simultaneously presents itself.

"We also have a behavioral confirmation, because it is known that male mice 'sing' in an ultrasonic frequency when they're sexually attracted," Anderson explains. "But when they're exposed to danger signals like predator odors, they freeze or hide.

"When we exposed the mice to both cat odor and female urine simultaneously, the male mice stopped their singing, as we predicted from the wiring diagram," he says. "So the asymmetry in the cross-talk suggests that the system is prioritized for survival first, mating second."

The inevitable question is whether this applies to humans as well. Anderson's answer is that similarities are likely, and that the same genes may even be involved.

"The brains of mice and humans have both of these structures, and we, like mice, are likely to have some hard-wired circuits for reproductive behavior and for defense," he says. "So it's not unreasonable to assume that some of the genes involved in these behaviors in mice are also involved in humans."

However, humans can also make conscious decisions and override the hard-wired circuitry. For example, two teenagers locked in an amorous embrace in a theater can ignore a horrid monster on the screen and continue with the activity at hand. In real-life circumstances, they would be more inclined to postpone the groping until they were out of danger.

"We obviously have the conscious ability to interrupt the circuit-overriding mechanism, to see if the threat is really important," Anderson says.

Gloria Choi, a doctoral student in biology, did most of the lab work involved in the study. The other collaborators are Hongwei Dong and Larry Swanson, a professor at USC who in the past has comprehensively mapped the neural wiring of the rat brain, and Andrew Murphy, David Valenzuela, and George Yancopoulos at Regeneron Pharmaceuticals, in Tarrytown, New York, who generated the genetically modified mice using a new high-throughput system that they developed, called Velocigene.



Robert Tindol

Research on Sumatran Earthquakes Uncovers New Mysteries about Workings of Earth

PASADENA, Calif.--The Sumatra-Andaman earthquake of December 26 was an unmitigated human disaster. But three new papers by an international group of experts show that the huge data return could help scientists better understand extremely large earthquakes and the disastrous tsunamis that can be associated with them.

Appearing in a themed issue of this week's journal Science, the three papers are all co-authored by California Institute of Technology seismologists. The papers describe in unprecedented detail the rupture process of the magnitude-9 earthquake, the nature of the faulting, and the global oscillations that resulted when the earthquake "delivered a hammer blow to our planet." The work also presents evidence of an odd sequence of ground motions in the Andaman Islands that will motivate geophysicists to further investigate the physical processes involved in earthquakes.

"For the first time it is possible to do a thorough seismological study of a magnitude-9 earthquake," says Hiroo Kanamori, who is the Smits Professor of Geophysics at Caltech and a co-author of all three papers. "Since the occurrence of similar great earthquakes in the 1960s, seismology has made good progress in instrumentation, theory, and computational methods, all of which allowed us to embark on a thorough study of this event."

"The analyses show that the Global Seismic Network, which was specifically designed to record such large earthquakes, performed exactly according to design standards," adds Jeroen Tromp, who is McMillan Professor of Geophysics and director of the Caltech Seismology Lab. "The network enables a broadband analysis of the rupture process, which means that there is considerable information over a broad range of wave frequencies, allowing us to study the earthquake in great detail."

In fact, Kanamori points out, the data have already motivated tsunami experts to investigate how tsunamis are generated by seismic deformation. In the past, seismic deformation was treated as instantaneous uplift of the sea floor, but because of the extremely long rupture length (1200 km), slow deformation, and the large horizontal displacements as well as vertical deformation, the Sumatra-Andaman earthquake forced tsunami experts to rethink their traditional approach. Experts and public officials are now incorporating these details into modeling so that they can more effectively mitigate the human disaster of future tsunamis.

Another oddity contained in the data is the rate at which the ground moved in the Andaman Islands. Following the rapid seismic rupture, significant slip even larger than the co-seismic slip (in other words, the slip that occurred during the actual earthquake) continued beneath the islands over the next few days.

"We have never seen this kind of behavior," says Kanamori. "If slip can happen over a few days following the rapid co-seismic slip, then important hitherto unknown deformational processes in the Earth's crust must have been involved; this will be the subject of future investigations."

As for the "ringing" of Earth for literally weeks after the initial shock, the scientists say that the information will provide new insights into the planet's interior composition, mineralogy, and dynamics. In addition, the long-period free oscillations of such a large earthquake provide information on the earthquake itself.

The first of the papers is "The Great Sumatra-Andaman Earthquake of 26 December 2004." In addition to Kanamori, the other authors are Thorne Lay (the lead author) and Steven Ward of UC Santa Cruz; Charles Ammon of Penn State; Meredith Nettles and Göran Ekström of Harvard; Richard Aster and Susan Bilek of the New Mexico Institute of Mining and Technology; Susan Beck of the University of Arizona; Michael Brudzinski of the University of Wisconsin and Miami University; Rhett Butler of the IRIS Consortium; Heather DeShon of the University of Wisconsin; Kenji Satake of the Geological Survey of Japan; and Stuart Sipkin of the US Geological Survey's National Earthquake Information Center.

The second paper is "Rupture Process of the 2004 Sumatra-Andaman Earthquake." The Caltech co-authors are Ji Chen, Sidao Ni, Vala Hjorleifsdottir, Hiroo Kanamori, and Donald Helmberger, the Smits Family Professor of Geological and Planetary Sciences.

The other authors are Charles Ammon (the lead author) of Penn State; David Robinson and Shamita Das of the University of Oxford; Thorne Lay of UC Santa Cruz; Hong-Kie Thio and Gene Ichinose of URS Corporation; Jascha Polet of the Institute for Crustal Studies; and David Wald of the National Earthquake Information Center.

The third paper is "Earth's Free Oscillations Excited by the 26 December 2004 Sumatra-Andaman Earthquake," of which Jeffrey Park of Yale University is lead author. The Caltech coauthors are Teh-Ru Alex Song, Jeroen Tromp, and Hiroo Kanamori. The other authors are Emile Okal and Seth Stein of Northwestern University; Genevieve Roult and Eric Clevede of the Institut de Physique du Globe de Paris; Gabi Laske, Peter Davis, and Jon Berger of the Scripps Institution of Oceanography; Carla Braitenberg of the University of Trieste; Michel Van Camp of the Royal Observatory of Belgium; Xiang'e Lei, Heping Sun, and Houze Xu of the Chinese Academy of Sciences' Institute of Geodesy and Geophysics; and Severine Rosat of the National Astronomical Observatory of Japan.

The second paper contains web references to three animations that help to illustrate various aspects of this great earthquake:

Global movie of the vertical velocity wave field. The computation includes periods of 20 seconds and longer and shows a total duration of 3 hours. The largest amplitudes seen in this movie are the Rayleigh waves traveling around the globe. Global seismic stations are shown as yellow triangles.

Animation of the vertical velocity wave field in the source region. The computation includes periods of 12 seconds and longer with a total duration of about 13 minutes. As the rupture front propagates northward the wave-field gets compressed and amplified in the north and drawn out to the south. The radiation from patches of large slip shows up as circles that are offset from each other due to the rupture propagation (a Doppler-like effect).

Evolution of uplift and subsidence above the megathrust with time. The duration of the rupture is 550 seconds. This movie shows the history of the uplift at each point around the fault and, as a result, the dynamic part of the motion is visible (as wiggling contour lines). The simulation includes periods of 12 seconds and longer. The final frame of the movie shows the static field.

All animations were produced by Seismo Lab graduate student Vala Hjorleifsdottir with the assistance of Santiago Lombeyda at Caltech's Center for Advanced Computing Research. The simulations were performed on 150 nodes of Caltech's Division of Geological & Planetary Sciences' Dell cluster.

Robert Tindol

Seismic experiments provide new clues to earthquake wave directionality and growth speed

PASADENA, Calif.--In recent years, seismologists thought they were getting a handle on how an earthquake tends to rupture in a preferred direction along big strike-slip faults like the San Andreas. This is important because the direction of rupture has a profound influence on the distribution of ground shaking. But a new study could undermine their confidence a bit.

Reporting in the April 29 issue of the journal Science, researchers from the California Institute of Technology and Harvard University discuss new controlled laboratory experiments using dissimilar polymer plates to mimic Earth's crust. The results show that the direction of rupture, which controls the pattern of destruction, is less predictable than recently thought.

The findings explain puzzling observations from last year's Parkfield earthquake, in which the rupture propagated northwestward. A southeastward rupture had been predicted on the basis of the two previous earthquakes in the area and on numerical simulations. Also, during recent large earthquakes in Turkey, some ruptures occurred in the direction opposite to past events and are thought to have involved unusually high speeds along that direction.

The phenomenon has to do with the basic ways rupture fronts (which generate seismic waves) propagate along a boundary between two materials with different wave speeds--an area of research that is yielding interesting and important results in the engineering laboratory.

The reason this is important is that geophysicists, knowing the wave speeds of the materials in different tectonic plates and the stresses acting on them, could someday have an improved ability to predict which areas along a major fault might be more powerfully hit. In effect, a better fundamental knowledge of the workings of Earth's plates could lead to a better ability to prepare for major earthquakes.

In the experiment, Caltech's von Kármán Professor of Aeronautics and Mechanical Engineering Ares Rosakis (the director of the Graduate Aeronautical Laboratories); his cross-campus colleague, Smits Professor of Geophysics Hiroo Kanamori; Professor James Rice of Harvard University; and Caltech graduate student Kaiwen Xia prepared polymer plates to mimic the effects of major strike-slip faults. These are faults in which two plates are rammed against each other by forces coming in at an angle, and which then spontaneously snap (or slide) to move sideways.

Because such a breaking of lab materials is similar on a smaller scale to the slipping of tectonic plates, the measurement of the waves in the polymer materials provides a good indication of what happens in earthquakes.

The team fixed the plates so that force was applied to them at an acute angle relative to the "fault" between them. The researchers then set off a small plasma explosion with a wire running to the center of the two polymer plates (the "hypocenter"), which caused the two plates to quickly slide apart, just as two tectonic plates would slide apart during an earthquake.

The clear polymer plates were made of two different materials especially selected so that their stress fringes could be photographed. In other words, the waves and rupture fronts that propagate through the system due to this "laboratory earthquake event" showed up as clearly visible waves on the photographic plates.

What's more, if the rupture fronts are super-shear, i.e., faster than the shear speed in the plates, they produce a shock-wave pattern that looks something like the Mach cone of a jet fighter breaking the sound barrier.
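The Mach-cone analogy can be made quantitative with the standard supersonic-flow relation sin(θ) = c_s / v_r for the cone's half-angle, where c_s is the shear-wave speed in the plates and v_r the rupture speed. A minimal sketch, with wave speeds that are illustrative assumptions rather than measured values for the polymer plates:

```python
import math

def mach_half_angle(rupture_speed, shear_speed):
    """Half-angle (degrees) of the shear Mach cone; None if sub-shear."""
    if rupture_speed <= shear_speed:
        return None          # sub-shear rupture: no Mach cone forms
    return math.degrees(math.asin(shear_speed / rupture_speed))

c_s = 1200.0                 # assumed shear-wave speed, m/s (illustrative)
print(mach_half_angle(1000.0, c_s))            # sub-shear -> None
print(round(mach_half_angle(2000.0, c_s), 1))  # super-shear cone: 36.9 deg
```

The faster the rupture runs relative to the shear speed, the narrower the cone, just as for a jet well past the sound barrier.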

"Previously, it was generally thought that, if there is a velocity contrast, the rupture preferentially goes toward the direction of the slip in the low-velocity medium," explains Kanamori. In other words, if the lower-velocity medium is the plate shifting to the west, then the preferred direction of rupture would typically be to the west.

"What we see, when the force is small and the angle is small, is that we simultaneously generate ruptures to the west and to the east, and that the rupture fronts in both sides go with sub-shear speed," Rosakis explains. "But as the pressure increases substantially, the westward direction stays the same, but the other, eastward direction, becomes super-shear. This super-shear rupture speed is very close to the p-wave speed of the slower of the two materials."

To complicate matters even further, the results show that, when the experiment is done at forces below those required for super-shear, the directionality of the rupture is unpredictable. Both waves are at sub-shear speed, but waves in either direction can be devastating.

This, in effect, explains why the Parkfield earthquake last year ruptured in the direction opposite to that of past events. The experiment also strongly suggests that, if the earthquake had been sufficiently large, the super-shear waves would have traveled northwest, even though the preferred direction was southeast.

But the question remains whether super-shear is necessarily a bad thing, Kanamori says. "It's scientifically an interesting result, but I can't say what the exact implications are. It's at least important to be aware of these things.

"But it could also mean that earthquake ruptures are less predictable than ever," he adds.

Contact: Robert Tindol (626) 395-3631


Scientists Use fMRI to Catch Test Subjects in the Act of Trusting One Another

PASADENA, Calif.--Who do you trust? The question may seem distinctly human--and limited only to "quality" humans, at that--but it turns out that trust is handled by the human brain in pretty much the same way that obtaining a food reward is handled by the brain of an insect. In other words, it's all a lot more primitive than we think.

But there's more. The research also suggests that we can actually trust each other a fair amount of the time without getting betrayed, and can do so just because of the biological creatures we are.

In a new milestone for neuroscience, experimenters at the California Institute of Technology and the Baylor College of Medicine have for the first time simultaneously scanned interacting brains, using a new brain-imaging technique called "hyperscanning" to probe how trust builds as subjects learn about one another. The technique allowed the team to see how interacting brains influence each other as subjects played an economic game and built trusting relationships. The research has implications for further understanding the evolution of the brain and social behavior, and could also lead to new insights into maladies such as autism and schizophrenia, in which a person's interaction with others is severely compromised.

Reporting in Friday's issue of the journal Science, the Caltech and Baylor researchers describe the results they obtained by hooking up volunteers to functional magnetic resonance imaging (fMRI) machines in Pasadena and Houston, respectively. One volunteer in one locale would interact with another volunteer he or she did not know, and the two would play an economic game in which trustworthiness had to be balanced with the profit motive. At the time the volunteers were playing the game, their brain activity was continually monitored to see what was going on with their neurons.

According to Steve Quartz, associate professor of philosophy and director of the Social Cognitive Neuroscience Laboratory at Caltech, who led the Caltech effort and does much of his work on the social interactions of decision making using fMRI, the results show that trust involves a region of the brain known as the head of the caudate nucleus. As with all fMRI images of the brain, the idea was to pick up evidence of a rush of blood to a specific part of the brain, which is taken to indicate that the brain region is at that moment engaged in mental activity.

The important finding, however, was not just that the caudate nucleus is involved, but that trust tended to shift backward in time as the game progressed. In other words, the expectation of a reward was intimately involved in an individual's assessment of the other individual's trustworthiness, and the recipient tended to become more trusting before the reward arrived--provided, of course, that there was no backstabbing.

Colin Camerer, the Axline Professor of Business Economics at Caltech and the other Caltech faculty author of the paper, adds that the study is also a breakthrough in showing that game theory continues to reward researchers who study human behavior.

"The theory about games such as the one we used in this study is developed around mathematics," Camerer says. "But a mathematical model of self-interest can be overly simplified. These results show that game theory can draw together the social and biological sciences for new and deeper understandings of human behavior. A better mathematical model will result."

The game is a multiround version of an economic exchange, in which one player (the "investor") is given $20 and told that he can either hold on to the money, or give some or all of it to the person on the other end of the game 1,500 miles away. The game is anonymous, and it is further assumed that the players will never meet each other, in order to keep other artifacts of social interaction from coming into play.

The person on the receiving end of the transaction (the "trustee") immediately has any gift that he receives tripled. The trustee can then give some or all of it back to the investor.

In ideal circumstances, the investor gives the entire $20 to the trustee, who then has his money tripled to $60 and then gives $30 back to the investor so that both have profited. That's assuming that greed hasn't made the trustee keep all the money for himself, of course, or that stinginess or lack of trust has persuaded the investor to keep the original investment all to himself. And this is the reason that trust is involved, and furthermore, the reason that there is brain activity during the course of the game for the experimenters to image.
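The payoff arithmetic of the exchange can be sketched in a few lines. The $20 endowment and the tripling rule come from the article; the `play_round` helper and the "return half" example are illustrative, not the study's code or its subjects' actual behavior:

```python
def play_round(endowment, invested, returned):
    """Compute final payoffs (investor, trustee) for one round of the game."""
    assert 0 <= invested <= endowment
    tripled = invested * 3          # the trustee's gift is tripled
    assert 0 <= returned <= tripled
    investor = endowment - invested + returned
    trustee = tripled - returned
    return investor, trustee

# The "ideal" case from the article: invest all $20, return half of $60.
print(play_round(20, 20, 30))   # -> (30, 30): both players profit
# Total betrayal: the trustee keeps everything.
print(play_round(20, 20, 0))    # -> (0, 60)
```

The gap between those two outcomes is exactly what makes trust, and the brain activity that tracks it, worth imaging.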

The findings are that trust is delayed in the early rounds of the game (there are 10 in all), and that the players begin determining the costs and benefits of the interchange and soon begin anticipating the rewards before they are even bestowed. Before the game is finished, one player is showing brain activity in the head of the caudate nucleus that demonstrates he has an "intention to trust." Once the players know each other by reputation, they begin showing their intentions to trust about 14 seconds earlier than in the early rounds of the game.

The results are interesting on several levels, say Camerer and Quartz. For one, they ground economic behavior in neuroscience.

"Neoclassical economics starts with the assumption that rational self-interest is the motivator of all our economic behavior," says Quartz. "The further assumption is that you can only get trust if you penalize people for non-cooperation, but these results show that you can build trust through social interaction, and question the traditional model of economic man."

"The results show that you can trust people for a fair amount of time, which contradicts the assumptions of classical economics," Camerer adds.

This is good news for us humans who must do business with each other, Quartz explains, because trustworthiness lowers transaction costs. In other words, if we can trust people, then transactions are cheaper and simpler: there are fewer laws to encumber us, fewer lawyers to pay to ensure that all the documents pertaining to the deal are written in an airtight manner, and so on.

"It's the same as if you could have a business deal on a handshake," Quartz says. "You don't have to pay a bunch of lawyers to write up what you do at every step. Thus, trust is of great interest from the level of our everyday interactions all the way up to the economic prosperity of a country where trust is thought of in terms of social capital."

The research findings are also interesting in their similarity to classical conditioning experiments, in which a certain behavioral response is elicited through a reward. Just as a person is rewarded for trusting a trustworthy person--and begins trusting the person even earlier if the reward can honestly be expected--so, too, does a lab animal begin anticipating a food reward for pecking a mirror, tripping a switch, slobbering when a buzzer sounds, or running quickly through a maze.

"This is another striking demonstration of the brain re-using ancient centers for new purposes. That trust rides on top of the basic reward centers of the brain is something we had never anticipated and demonstrates how surprising brain imaging can be," Quartz notes.

And finally, the research could have implications for better understanding the neurology of individuals with severely compromised abilities to interact with other people, such as those afflicted with autism, borderline personality disorders, and schizophrenia. "The inability to predict others is a key facet of many mental disorders. These new results could help us better understand these conditions, and may ultimately guide new treatments," suggests Quartz.

The other authors of the article are Brooks King-Casas, Damon Tomlin and P. Read Montague (the lead author), all of the Baylor College of Medicine, and Cedric Anen of Caltech. The title of the paper is "Getting to Know You: Reputation and Trust in a Two-Person Economic Exchange."

Robert Tindol

Caltech Physics Team Invents Device for Weighing Individual Molecules

PASADENA, Calif.--Physicists at the California Institute of Technology have created the first nanodevices capable of weighing individual biological molecules. This technology may lead to new forms of molecular identification that are cheaper and faster than existing methods, as well as revolutionary new instruments for proteomics.

According to Michael Roukes, professor of physics, applied physics, and bioengineering at Caltech and the founding director of Caltech's Kavli Nanoscience Institute, the technology his group has announced this week shows the immense potential of nanotechnology for creating transformational new instrumentation for the medical and life sciences. The new devices are at the nanoscale, he explains, since their principal component is significantly less than a millionth of a meter in width.

The Caltech devices are "nanoelectromechanical resonators"--essentially tiny tuning forks about a micron in length and a hundred or so nanometers wide that have a very specific frequency at which they vibrate when excited. Just as a bronze bell rings at a certain frequency based on its size, shape, and composition, these tiny tuning forks ring at their own fundamental frequency of mechanical vibration, although at such a high pitch that the "notes" are nearly as high in frequency as microwaves.

The researchers set up electronic circuitry to continually excite and monitor the frequency of the vibrating bar. Intermittently, a shutter is opened to expose the nanodevice to an atomic or molecular beam, in this case a very fine "spray" of xenon atoms or nitrogen molecules. Because the nanodevice is cooled, the molecules condense on the bar and add their mass to it, thereby lowering its frequency. In other words, the mechanical vibrations of the now slightly-more-massive nanodevice become slightly lower in frequency--just as thicker, heavier strings on an instrument sound notes that are lower than lighter ones.

Because frequency can be measured so precisely in physics labs, the researchers are then able to evaluate extremely subtle changes in mass of the nanodevice, and therefore, the weight of the added atoms or molecules.

Roukes says that their current generation of devices is sensitive to added mass at the level of a few zeptograms, which is a few billionths of a trillionth of a gram. In their experiments this represents about thirty xenon atoms--roughly the typical mass of an individual protein molecule.
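The inference from frequency shift to added mass follows the textbook resonator relation f = (1/2π)√(k/m): a small added mass Δm lowers the frequency by roughly Δf ≈ (f0/2)(Δm/m_eff). The device parameters in this sketch are illustrative assumptions, not the published values for the Caltech resonators:

```python
F0 = 500e6      # resonator frequency in Hz (assumed, for illustration)
M_EFF = 1e-14   # effective resonator mass in grams (assumed, ~10 femtograms)

def added_mass(delta_f):
    """Infer added mass (grams) from a measured drop in frequency (Hz),
    using the small-shift relation delta_m = 2 * m_eff * delta_f / f0."""
    return 2 * M_EFF * delta_f / F0

# With these assumed parameters, a 175 Hz drop corresponds to about
# 7 zeptograms of adsorbed material--on the order of the ~30 xenon
# atoms quoted in the article.
dm = added_mass(175.0)
print(f"{dm * 1e21:.1f} zeptograms")  # -> 7.0 zeptograms
```

The key practical point is the leverage: because frequency can be measured to a tiny fraction of a hertz, even a zeptogram-scale mass change produces a cleanly resolvable shift.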

"We hope to transform this chip-based technology into systems that are useful for picking out and identifying specific molecules, one-by-one--for example certain types of proteins secreted in the very early stages of cancer," Roukes says.

"The fundamental problem with identifying these proteins is that one must sort through millions of molecules to make the measurement. You need to be able to pick out the 'needle' from the 'haystack,' and that's hard to do, among other reasons because 95 percent of the proteins in the blood have nothing to do with cancer."

The new method might ultimately permit the creation of microchips, each possessing arrays of miniature mass spectrometers, which are devices for identifying molecules based on their weight. Today, high-throughput proteomics searches are often done at facilities possessing arrays of conventional mass spectrometers that fill an entire laboratory and can cost upwards of a million dollars each, Roukes adds. By contrast, future nanodevice-based systems should cost a small fraction of what today's technology does, and an entire massively parallel nanodevice system will probably ultimately fit on a desktop.

Roukes says his group has technology in hand to push mass-sensing technology to even more sensitive levels, probably to the point that individual hydrogen atoms can be weighed. Such an exquisitely accurate method of determining atomic-scale masses would be quite useful in areas such as quantum optics, in which individual atoms are manipulated.

The next step for Roukes' team at Caltech is to engineer the interfaces so that individual biological molecules can be weighed. For this, the team will likely collaborate with various proteomics labs for side-by-side comparisons of already known information on the mass of biological molecules with results obtained with the new method.

Roukes announced the technology in Los Angeles on Wednesday, March 24, at a news conference during the annual American Physical Society convention. Further results will be published in the near future.

The Caltech team behind the zeptogram result included Dr. Ya-Tang Yang, former graduate student in applied physics, now at Applied Materials; Dr. Carlo Callegari, former postdoctoral associate, now a professor at the University of Graz, Austria; Xiaoli Feng, current graduate student in electrical engineering; and Dr. Kamil Ekinci, former postdoctoral associate, now a professor at Boston University.

Robert Tindol

Scientists Discover What You Are Thinking

PASADENA, Calif. - By decoding signals coming from neurons, scientists at the California Institute of Technology have confirmed that an area of the brain known as the ventrolateral prefrontal cortex (vPF) is involved in the planning stages of movement, that instantaneous flicker of time when we contemplate moving a hand or other limb. The work has implications for the development of a neural prosthesis, a brain-machine interface that will give paralyzed people the ability to move and communicate simply by thinking.

By piggybacking on therapeutic work being conducted on epileptic patients, Daniel Rizzuto, a postdoctoral scholar in the lab of Richard Andersen, the Boswell Professor of Neuroscience, was able to predict the location of a target the patient was looking at, and also where the patient was going to move his hand. The work currently appears in the online version of Nature Neuroscience.

Most research in this field involves tapping into the areas of the brain that directly control motor actions, hoping that this will give patients the rudimentary ability to move a cursor, say, or a robotic arm with just their thoughts. Andersen, though, is taking a different tack. Instead of the primary motor areas, he taps into the planning stages of the brain, the posterior parietal and premotor areas.

Rizzuto looked at another area of the brain to see if planning could take place there as well. Until this work, the idea that spatial processing or movement planning took place in the ventrolateral prefrontal cortex has been a highly contested one. "Just the fact that these spatial signals are there is important," he says. "Based upon previous work in monkeys, people were saying this was not the case." Rizzuto's work is the first to show these spatial signals exist in humans.

Rizzuto took advantage of clinical work being performed by Adam Mamelak, a neurosurgeon at Huntington Memorial Hospital in Pasadena. Mamelak was treating three patients who suffered from severe epilepsy, trying to identify the brain areas where the seizures occurred and then surgically removing that area of the brain. Mamelak implanted electrodes into the vPF as part of this process.

"So for a couple of weeks these patients are lying there, bored, waiting for a seizure," says Rizzuto, "and I was able to get their permission to do my study, taking advantage of the electrodes that were already there." The patients watched a computer screen for a flashing target, remembered the target location through a short delay, then reached to that location. "Obviously a very basic task," he says.

"We were looking for the brain regions that may be contributing to planned movements. And what I was able to show is that a part of the brain called the ventrolateral prefrontal cortex is indeed involved in planning these movements." Just by analyzing the brain activity from the implanted electrodes using software algorithms that he wrote, Rizzuto was able to tell with very high accuracy where the target was located while it was on the screen, and also what direction the patient was going to reach to when the target wasn't even there.

Unlike most labs doing this type of research, Andersen's lab is looking at the planning areas of the brain rather than the primary motor area of the brain, because they believe the planning areas are less susceptible to damage. "In the case of a spinal cord injury," says Rizzuto, "communication to and from the primary motor cortex is cut off." But the brain still performs the computations associated with planning to move. "So if we can tap into the planning computations and decode where a person is thinking of moving," he says, then it just becomes an engineering problem--the person can be hooked up to a computer where he can move a cursor by thinking, or can even be attached to a robotic arm.

Andersen notes, "Dan's results are remarkable in showing that the human ventral prefrontal cortex, an area previously implicated in processing information about objects, also processes the intentions of subjects to make movements. This research adds ventral prefrontal cortex to the list of candidate brain areas for extracting signals for neural prosthetics applications."

In Andersen's lab, Rizzuto's goal is to take the technology they've perfected in animal studies to human clinical trials. "I've already met with our first paralyzed patient, and graduate student Hilary Glidden and I are now doing noninvasive studies to see how the brain reorganizes after paralysis," he says. If it does reorganize, he notes, all the technology that has been developed in non-paralyzed humans may not work. "This is why we think our approach may be better, because we already know that the primary motor area shows pathological reorganization and degeneration after paralysis. We think our area of the brain is going to reorganize less, if at all. After this we hope to implant paralyzed patients with electrodes so that they may better communicate with others and control their environment."
