Scientists Use fMRI to Catch Test Subjects in the Act of Trusting One Another

PASADENA, Calif.--Who do you trust? The question may seem distinctly human--and limited only to "quality" humans, at that--but it turns out that trust is handled by the human brain in pretty much the same way that obtaining a food reward is handled by the brain of an insect. In other words, it's all a lot more primitive than we think.

But there's more. The research also suggests that we can actually trust each other a fair amount of the time without getting betrayed, and that we can do so simply because of the biological creatures we are.

In a new milestone for neuroscience, experimenters at the California Institute of Technology and the Baylor College of Medicine have for the first time simultaneously scanned interacting brains, using a new technique called "hyperscanning" to probe how trust builds as subjects learn about one another. The technique allowed the team to see how interacting brains influence each other as subjects played an economic game and built trusting relationships. The research has implications for further understanding the evolution of the brain and social behavior, and could also lead to new insights into maladies such as autism and schizophrenia, in which a person's interaction with others is severely compromised.

Reporting in Friday's issue of the journal Science, the Caltech and Baylor researchers describe the results they obtained by hooking up volunteers to functional magnetic resonance imaging (fMRI) machines in Pasadena and Houston, respectively. A volunteer in one locale would interact with a volunteer in the other whom he or she did not know, and the two would play an economic game in which trustworthiness had to be balanced against the profit motive. While the volunteers played the game, their brain activity was continuously monitored to see what was going on with their neurons.

According to Steve Quartz, associate professor of philosophy and director of the Social Cognitive Neuroscience Laboratory at Caltech, who led the Caltech effort and employs fMRI in much of his work on the social interactions of decision making, the results show that trust involves a region of the brain known as the head of the caudate nucleus. As with all fMRI images of the brain, the idea was to pick up evidence of a rush of blood to a specific part of the brain, which is taken to indicate that the brain region is at that moment involved in mental activity.

The important finding, however, was not just that the caudate nucleus is involved, but that the trust signal tended to shift backward in time as the game progressed. In other words, the expectation of a reward was intimately involved in an individual's assessment of the other player's trustworthiness, and the recipient tended to become more trusting before the reward arrived--provided, of course, that there was no backstabbing.

Colin Camerer, the Axline Professor of Business Economics at Caltech and the other Caltech faculty author of the paper, adds that the study is also a breakthrough in showing that game theory continues to reward researchers who study human behavior.

"The theory about games such as the one we used in this study is developed around mathematics," Camerer says. "But a mathematical model of self-interest can be overly simplified. These results show that game theory can draw together the social and biological sciences for new and deeper understandings of human behavior. A better mathematical model will result."

The game is a multiround version of an economic exchange, in which one player (the "investor") is given $20 and told that he can either hold on to the money, or give some or all of it to the person on the other end of the game 1,500 miles away. The game is anonymous, and it is further assumed that the players will never meet each other, in order to keep other artifacts of social interaction from coming into play.

The person on the receiving end of the transaction (the "trustee") immediately has any gift that he receives tripled. The trustee can then give some or all of it back to the investor.

In ideal circumstances, the investor gives the entire $20 to the trustee, whose money is then tripled to $60; the trustee then gives $30 back to the investor so that both have profited. That assumes, of course, that greed hasn't led the trustee to keep all the money for himself, and that stinginess or lack of trust hasn't persuaded the investor to keep the original investment all to himself. This is why trust is involved, and why there is brain activity during the course of the game for the experimenters to image.
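
The payoff structure of the exchange can be summarized in a few lines of Python. The sketch below is illustrative only: the function name, the fixed "return half" behavior, and the betrayal case are assumptions chosen for the example, not parameters reported in the study.

    # One round of the trust game described above.  Names and the fixed return
    # fraction are illustrative, not taken from the study itself.

    def trust_round(endowment=20, invested=20, return_fraction=0.5, multiplier=3):
        """Compute payoffs for one investor/trustee exchange."""
        kept_by_investor = endowment - invested       # money the investor holds back
        pot = invested * multiplier                   # the gift is tripled for the trustee
        returned = pot * return_fraction              # trustee sends some share back
        investor_payoff = kept_by_investor + returned
        trustee_payoff = pot - returned
        return investor_payoff, trustee_payoff

    # The "ideal" case from the article: full investment, half returned.
    print(trust_round())                          # -> (30.0, 30.0)
    # A betrayal: the trustee keeps everything.
    print(trust_round(return_fraction=0.0))       # -> (0.0, 60.0)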

The findings show that trust develops slowly in the early rounds of the game (there are 10 in all), as the players weigh the costs and benefits of the exchange, and that they soon begin anticipating the rewards before they are even bestowed. Before the game is finished, one player is showing brain activity in the head of the caudate nucleus that demonstrates he has an "intention to trust." Once the players know each other by reputation, they begin showing their intentions to trust about 14 seconds earlier than in the early rounds of the game.

The results are interesting on several levels, say Camerer and Quartz. For one, they reveal the neuroscience underlying economic behavior.

"Neoclassical economics starts with the assumption that rational self-interest is the motivator of all our economic behavior," says Quartz. "The further assumption is that you can only get trust if you penalize people for non-cooperation, but these results show that you can build trust through social interaction, and question the traditional model of economic man."

"The results show that you can trust people for a fair amount of time, which contradicts the assumptions of classical economics," Camerer adds.

This is good news for us humans who must do business with each other, Quartz explains, because trustworthiness decreases transaction costs. In other words, if we can trust people, then transactions become cheaper and simpler: there are fewer laws to encumber us, fewer lawyers to pay to ensure that all the documents pertaining to the deal are written in an airtight manner, and so on.

"It's the same as if you could have a business deal on a handshake," Quartz says. "You don't have to pay a bunch of lawyers to write up what you do at every step. Thus, trust is of great interest from the level of our everyday interactions all the way up to the economic prosperity of a country where trust is thought of in terms of social capital."

The research findings are also interesting in their similarity to classical conditioning experiments, in which a certain behavioral response is elicited through a reward. Just as a person is rewarded for trusting a trustworthy person--and begins trusting the person even earlier if the reward can honestly be expected--so, too, does a lab animal begin anticipating a food reward for pecking a mirror, tripping a switch, slobbering when a buzzer sounds, or running quickly through a maze.

"This is another striking demonstration of the brain re-using ancient centers for new purposes. That trust rides on top of the basic reward centers of the brain is something we had never anticipated and demonstrates how surprising brain imaging can be," Quartz notes.

And finally, the research could have implications for better understanding the neurology of individuals with severely compromised abilities to interact with other people, such as those afflicted with autism, borderline personality disorders, and schizophrenia. "The inability to predict others is a key facet of many mental disorders. These new results could help us better understand these conditions, and may ultimately guide new treatments," suggests Quartz.

The other authors of the article are Brooks King-Casas, Damon Tomlin and P. Read Montague (the lead author), all of the Baylor College of Medicine, and Cedric Anen of Caltech. The title of the paper is "Getting to Know You: Reputation and Trust in a Two-Person Economic Exchange."

Writer: Robert Tindol

Caltech Physics Team Invents Device for Weighing Individual Molecules

PASADENA, Calif.--Physicists at the California Institute of Technology have created the first nanodevices capable of weighing individual biological molecules. This technology may lead to new forms of molecular identification that are cheaper and faster than existing methods, as well as revolutionary new instruments for proteomics.

According to Michael Roukes, professor of physics, applied physics, and bioengineering at Caltech and the founding director of Caltech's Kavli Nanoscience Institute, the technology his group has announced this week shows the immense potential of nanotechnology for creating transformational new instrumentation for the medical and life sciences. The new devices are at the nanoscale, he explains, since their principal component is significantly less than a millionth of a meter in width.

The Caltech devices are "nanoelectromechanical resonators"--essentially tiny tuning forks about a micron in length and a hundred or so nanometers wide that have a very specific frequency at which they vibrate when excited. Just as a bronze bell rings at a certain frequency based on its size, shape, and composition, these tiny tuning forks ring at their own fundamental frequency of mechanical vibration, although at such a high pitch that the "notes" are nearly as high in frequency as microwaves.

The researchers set up electronic circuitry to continually excite and monitor the frequency of the vibrating bar. Intermittently, a shutter is opened to expose the nanodevice to an atomic or molecular beam, in this case a very fine "spray" of xenon atoms or nitrogen molecules. Because the nanodevice is cooled, the molecules condense on the bar and add their mass to it, thereby lowering its frequency. In other words, the mechanical vibrations of the now slightly-more-massive nanodevice become slightly lower in frequency--just as thicker, heavier strings on an instrument sound notes that are lower than lighter ones.

Because frequency can be measured so precisely in physics labs, the researchers are then able to evaluate extremely subtle changes in mass of the nanodevice, and therefore, the weight of the added atoms or molecules.
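
The relationship between the frequency drop and the added mass can be made concrete with a short calculation. The sketch below uses the textbook resonator relation, in which frequency goes as the square root of stiffness over mass, so a small added mass lowers the frequency by roughly half its fractional share of the beam's mass; the beam mass, frequency, and frequency drop are assumptions chosen for illustration, not the actual parameters of the Caltech devices.

    # A minimal sketch of inferring added mass from a frequency drop, assuming the
    # textbook relation f = (1/(2*pi)) * sqrt(k/m), which gives df/f ~ -dm/(2m)
    # for a small added mass dm.  All device numbers below are illustrative
    # guesses, not the values for the actual Caltech resonators.

    def added_mass_kg(m_eff_kg, f0_hz, f_shifted_hz):
        """Estimate the condensed mass from the observed downward frequency shift."""
        return -2.0 * m_eff_kg * (f_shifted_hz - f0_hz) / f0_hz

    m_eff = 3.0e-17          # assumed effective beam mass: ~30 femtograms
    f0 = 5.0e8               # assumed resonance frequency: 500 MHz
    f_shifted = f0 - 60.0    # assumed 60 Hz drop after atoms condense on the beam

    dm = added_mass_kg(m_eff, f0, f_shifted)
    print(f"added mass ~ {dm * 1e24:.1f} zeptograms")   # ~7 zg, roughly thirty xenon atoms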

Roukes says that their current generation of devices is sensitive to added mass at the level of a few zeptograms, which is a few billionths of a trillionth of a gram. In their experiments this represents about thirty xenon atoms--roughly the typical mass of an individual protein molecule.

"We hope to transform this chip-based technology into systems that are useful for picking out and identifying specific molecules, one-by-one--for example certain types of proteins secreted in the very early stages of cancer," Roukes says.

"The fundamental problem with identifying these proteins is that one must sort through millions of molecules to make the measurement. You need to be able to pick out the 'needle' from the 'haystack,' and that's hard to do, among other reasons because 95 percent of the proteins in the blood have nothing to do with cancer."

The new method might ultimately permit the creation of microchips, each possessing arrays of miniature mass spectrometers, which are devices for identifying molecules based on their weight. Today, high-throughput proteomics searches are often done at facilities possessing arrays of conventional mass spectrometers that fill an entire laboratory and can cost upwards of a million dollars each, Roukes adds. By contrast, future nanodevice-based systems should cost a small fraction of that, and an entire massively parallel nanodevice system will probably ultimately fit on a desktop.

Roukes says his group has the techniques in hand to push mass sensing to even more sensitive levels, probably to the point that individual hydrogen atoms can be weighed. Such an exquisitely accurate method of determining atomic-scale masses would be quite useful in areas such as quantum optics, in which individual atoms are manipulated.

The next step for Roukes' team at Caltech is to engineer the interfaces so that individual biological molecules can be weighed. For this, the team will likely collaborate with various proteomics labs to compare results obtained with the new method against the already known masses of biological molecules.

Roukes announced the technology in Los Angeles on Wednesday, March 24, at a news conference during the annual American Physical Society convention. Further results will be published in the near future.

The Caltech team behind the zeptogram result included Dr. Ya-Tang Yang, former graduate student in applied physics, now at Applied Materials; Dr. Carlo Callegari, former postdoctoral associate, now a professor at the University of Graz, Austria; Xiaoli Feng, current graduate student in electrical engineering; and Dr. Kamil Ekinci, former postdoctoral associate, now a professor at Boston University.

Writer: Robert Tindol

Scientists Discover What You Are Thinking

PASADENA, Calif. - By decoding signals coming from neurons, scientists at the California Institute of Technology have confirmed that an area of the brain known as the ventrolateral prefrontal cortex (vPF) is involved in the planning stages of movement, that instantaneous flicker of time when we contemplate moving a hand or other limb. The work has implications for the development of a neural prosthesis, a brain-machine interface that will give paralyzed people the ability to move and communicate simply by thinking.

By piggybacking on therapeutic work being conducted on epileptic patients, Daniel Rizzuto, a postdoctoral scholar in the lab of Richard Andersen, the Boswell Professor of Neuroscience, was able to predict the location of a target a patient was looking at, as well as where the patient was going to move his hand. The work currently appears in the online version of Nature Neuroscience.

Most research in this field involves tapping into the areas of the brain that directly control motor actions, hoping that this will give patients the rudimentary ability to move a cursor, say, or a robotic arm with just their thoughts. Andersen, though, is taking a different tack. Instead of the primary motor areas, he taps into the planning stages of the brain, the posterior parietal and premotor areas.

Rizzuto looked at another area of the brain to see if planning could take place there as well. Until this work, the idea that spatial processing or movement planning takes place in the ventrolateral prefrontal cortex had been a highly contested one. "Just the fact that these spatial signals are there is important," he says. "Based upon previous work in monkeys, people were saying this was not the case." Rizzuto's work is the first to show these spatial signals exist in humans.

Rizzuto took advantage of clinical work being performed by Adam Mamelak, a neurosurgeon at Huntington Memorial Hospital in Pasadena. Mamelak was treating three patients who suffered from severe epilepsy, trying to identify the brain areas where the seizures occurred and then surgically removing that area of the brain. Mamelak implanted electrodes into the vPF as part of this process.

"So for a couple of weeks these patients are lying there, bored, waiting for a seizure," says Rizzuto, "and I was able to get their permission to do my study, taking advantage of the electrodes that were already there." The patients watched a computer screen for a flashing target, remembered the target location through a short delay, then reached to that location. "Obviously a very basic task," he says.

"We were looking for the brain regions that may be contributing to planned movements. And what I was able to show is that a part of the brain called the ventrolateral prefrontal cortex is indeed involved in planning these movements." Just by analyzing the brain activity from the implanted electrodes using software algorithms that he wrote, Rizzuto was able to tell with very high accuracy where the target was located while it was on the screen, and also what direction the patient was going to reach to when the target wasn't even there.

Unlike most labs doing this type of research, Andersen's lab is looking at the planning areas of the brain rather than the primary motor area of the brain, because they believe the planning areas are less susceptible to damage. "In the case of a spinal cord injury," says Rizzuto, "communication to and from the primary motor cortex is cut off." But the brain still performs the computations associated with planning to move. "So if we can tap into the planning computations and decode where a person is thinking of moving," he says, then it just becomes an engineering problem--the person can be hooked up to a computer where he can move a cursor by thinking, or can even be attached to a robotic arm.

Andersen notes, "Dan's results are remarkable in showing that the human ventral prefrontal cortex, an area previously implicated in processing information about objects, also processes the intentions of subjects to make movements. This research adds ventral prefrontal cortex to the list of candidate brain areas for extracting signals for neural prosthetics applications."

In Andersen's lab, Rizzuto's goal is to take the technology they've perfected in animal studies to human clinical trials. "I've already met with our first paralyzed patient, and graduate student Hilary Glidden and I are now doing noninvasive studies to see how the brain reorganizes after paralysis," he says. If it does reorganize, he notes, all the technology that has been developed in non-paralyzed humans may not work. "This is why we think our approach may be better, because we already know that the primary motor area shows pathological reorganization and degeneration after paralysis. We think our area of the brain is going to reorganize less, if at all. After this we hope to implant paralyzed patients with electrodes so that they may better communicate with others and control their environment."

Writer: MW

New study provides insights into the brain's remembrance of emotional events

PASADENA, Calif.--Those of us who are old enough to remember the Kennedy assassination are usually able to remember the initial announcement almost as if it's a movie running in our heads. That's because there is a well-known tendency for people to have enhanced memory of a highly emotional event, and further, a memory that focuses especially on the "gist" of the event.

In other words, people who remember the words "President Kennedy is dead" will remember the news extraordinarily well. But at the same time, they will likely have no more recollection of extraneous details such as what they were wearing or what they were doing an hour before hearing the news than they would for any other day in 1963. Neurobiologists have known both these phenomena to be true for some time, and a new study now explains how the brain achieves this effect.

In the new study, researchers from the California Institute of Technology and the University of Iowa College of Medicine show how the recollections of gist and details of emotional events are related to specific parts of the brain. In an article appearing in this month's Nature Neuroscience, the authors report that patients with damage to an area of the brain known as the amygdala are unable to remember the gist of an emotional stimulus, even though there is nothing otherwise faulty in their memory. The study shows that the amygdala somehow focuses the brain's processing resources on the gist of an emotional event.

"During a highly emotional event, like the Kennedy assassination, 9/11, or the Challenger accident, you remember the gist much better than you would remember the gist of some other neutral event," says Ralph Adolphs, a professor of psychology and neuroscience at Caltech and lead author of the study. "But people with damage to the amygdala have a failure to put this special tag on the gist of emotional memories. In other words, they remember the gist of an emotional event no better than the gist of a neutral event."

To test their hypothesis, Adolphs and his colleagues at the University of Iowa College of Medicine showed a group of normal control subjects and a group of test subjects known to have amygdala damage a series of pictures accompanied by fabricated stories. One type of series involved fairly mundane episodes in which, for example, a family was depicted driving somewhere and returning home uneventfully. But in the other series, the story would relate a tragic event, such as the family having been involved in a fatal auto accident on the way home, accompanied by gruesome pictures of amputated limbs.

As expected, the normal control subjects had enhanced recall of the emotional stories and pictures, and far more vague recall of the mundane stories. The test subjects with amygdala damage, however, possessed no better recall of the gist of the emotional story than of the mundane stories. On the other hand, both the control group and the group with amygdala damage showed about equal ability to remember details from stories with no emotional content.

The findings suggest that the amygdala is responsible for our ability to have strong recollections of emotional events, Adolphs says. Further study could point to how the amygdala is involved in impaired real-life emotional memories seen in patients with post-traumatic stress disorder and Alzheimer's disease, he adds.

The other authors of the article are Daniel Tranel and Tony W. Buchanan, both of the University of Iowa College of Medicine's Department of Neurology.

Writer: Robert Tindol

Negative Impacts of Dam Construction on Human Populations Can Be Reduced, Author Says

PASADENA, Calif.--Despite the adverse impacts of large dam construction on ecosystems and human settlements, more and more dams are likely to be built in the 21st century wherever there is a need to store water for irrigated agriculture, urban water supplies, and power generation. But world societies and governments would do well to evaluate the consequences of dam construction as an integral part of the planning process, a leading authority writes in a new book.

The book, The Future of Large Dams, is the latest work by California Institute of Technology anthropologist Thayer Scudder, who is arguably the world's foremost expert on the impact of dam construction on human societies living along major world rivers. Published by Earthscan, the book argues that early analysis by affected stakeholders of the impact of a dam's proposed construction is a worthwhile undertaking--and not only worthwhile, but quite possible to accomplish with established research techniques.

According to Scudder, large dams are a "flawed yet still necessary development option." Flaws include both the shortcomings of the dam itself as well as ecological and social impacts. In terms of the former, Scudder says that dams, on the average, can be expected to get clogged with sediment at a rate of about 0.5 to 1 percent per year. And in terms of the latter, changing habitat caused by the flooding of land behind and below dams is certain to change the habits of nearby humans and animals alike--if not devastate both.
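
To put that sedimentation figure in perspective, a rough calculation is sketched below; it assumes the quoted rate means a constant fraction of the reservoir's original capacity lost each year, which is a simplification of how real reservoirs fill in.

    # Back-of-the-envelope reading of the 0.5 to 1 percent per year sedimentation
    # figure, assuming a constant fraction of original capacity is lost each year.

    def years_to_lose(fraction_lost, annual_rate):
        """Years until the given fraction of original capacity fills with sediment."""
        return fraction_lost / annual_rate

    for rate in (0.005, 0.01):   # 0.5% and 1% per year
        print(f"{rate:.1%} per year -> half the capacity gone in ~{years_to_lose(0.5, rate):.0f} years")
    # 0.5% per year -> ~100 years; 1% per year -> ~50 years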

"Although dams have their problems, they're unfortunately still necessary because of the growing needs of humans for water storage," says Scudder. "That's the dilemma."

Given that governments throughout the world--the United States included--will continue to dam rivers, Scudder says it's important to take into consideration that hundreds of millions of people have been adversely affected by dams in the last century. Somewhere between 40 and 80 million people have been forcibly relocated by the flooding of the land on which they lived to create the reservoirs above the dams. Furthermore, even larger numbers of people have had their lives and livelihoods disrupted by the change in river flow below dams.

"Lots of people in many places in the world are dependent on the natural flow of rivers, and the consequences can be the sort of things you might not normally even take into account," he says. "For example, a settlement that depends on an annual flooding of agricultural land when the river rises can be wiped out if the regulated flow of the dam causes the annual flooding to cease."

Scudder, in fact, wrote his doctoral dissertation many years ago on such an instance, in which the construction of a dam obliterated the most productive component of an upstream farming system.

"But the book argues that, despite these adverse impacts, there are state-of-the-art ways of addressing them," he says. "For example, if local populations downstream have been depending on an annual inundation of an agricultural flood plain, then the authorities in charge and other stakeholders should consider a controlled release of water that recreates the flooding conditions. Experiments have been done with coordinating hydropower generation and flood recession irrigation needs with the release of 'environmental flows'--that is, releases of water to protect habitats and communities. This approach has been tried in several African countries, and research has shown in other cases that managed floods would be a 'win-win' option."

In general, the way to make dams work for humans everywhere, Scudder suggests, is to address the social and environmental impacts both downstream and upstream of any dam project before the structure is even built, and to evaluate the situations in river basins where dams have already been constructed.

Finally, the political and institutional context of dam construction should be addressed, Scudder says. Too often, a dam project is undertaken at a specific locale because of its political expedience, and this is not the best way to minimize the negative human and ecological impact. Restructuring the governmental departments that oversee dams can also help minimize negative environmental, agricultural, and other impacts.

"We should all be able to benefit from the dams that are to be built in the future rather than suffer from them," he concludes.

Review copies of the book are available from Earthscan Sales and Marketing Administrator Michael Fell by e-mailing him at Michael.Fell@earthscan.co.uk or calling +44 (0)20 7121 3154.

 

Writer: Robert Tindol

Neuroscientists discover that humans evaluate emotions by looking at the eyes

PASADENA, Calif.--If your mother ever told you to watch out for strangers with shifty eyes, you can start taking her advice to heart. Neuroscientists exploring a region of the brain associated with the recognition of emotional expressions have concluded that it is the eye region that we scan when our brains process information about other people's emotions.

Reporting in the January 6 issue of the journal Nature, California Institute of Technology neuroscientist Ralph Adolphs and colleagues at the University of Iowa, University of Montreal, and University of Glasgow describe new results they have obtained with a patient suffering from a rare genetic malady that has destroyed her brain's amygdala. The amygdala, found on each side of the brain in the medial temporal lobe, is known to process information about facial emotions. The patient, who has been studied by the researchers at the University of Iowa for a decade, shows an intriguing inability to recognize fear and other emotions from facial expressions.

"The fact that the amygdala is involved in fear recognition has been borne out by a large number of studies," explains Adolphs. "But until now the mechanisms through which amygdala damage compromises fear recognition have not been identified."

Although Adolphs and his colleagues have known for years that the woman is unable to recognize fear from facial expressions in others, they didn't know until recently that her problem was an inability to focus on the eye region of others when judging their emotions. They discovered this by carefully recording the way her eyes focused on pictures of faces.

In normal test subjects, a person's eyes dart from area to area of a face in a quick, largely unconscious program of evaluating facial expressions to recognize emotions. The woman, by contrast, tended to stare straight ahead at the photographs, displaying no tendency to regard the eyes at all. As a result, she was nonjudgmental in her interpersonal dealings, often trusting even those individuals who didn't deserve the benefit of the doubt.

However, the good news is that the woman could be trained to look at the eyes in the photographs, even though she had no natural inclination to do so. When she deliberately looked at the eyes upon being instructed to do so, she had a normal ability to recognize fear in the faces.

According to Adolphs, the study is a step forward in better understanding the human brain's perceptual mechanisms, and it may also provide a practical key to therapies that could help certain patients with defective emotional perception lead more normal lives.

In terms of the former, Adolphs says that the amygdala's role in fear recognition will probably be better understood with additional research such as that now going on in Caltech's new magnetic resonance imaging lab. "It would be naïve to ascribe these findings to one single brain structure," he says. "Many parts of the brain work together, so a more accurate picture will probably relate cognitive abilities to a network of brain structures.

"Therefore, the things the amygdala do together with other parts of the brain are going to be a complex matter that will take a long time to figure out."

However, the very fact that the woman could be trained to evaluate fear in other people's faces is encouraging news for individuals with autism and other maladies that cause problems in their recognizing other people's emotions, Adolphs says.

"Maybe people with autism could be helped if they were trained how to look at the world and how to look at people's faces to improve their social functioning," he says.

Adolphs is a professor of psychology and neuroscience at Caltech, and holds a joint appointment at the University of Iowa College of Medicine. The other authors of the paper are Frederic Gosselin, Tony Buchanan, Daniel Tranel, Philippe Schyns, and Antonio Damasio.

Writer: Robert Tindol

The Science behind the Aceh Earthquake

PASADENA, Calif. - Kerry Sieh, the Robert P. Sharp Professor of Geology at the California Institute of Technology and a member of Caltech's Tectonics Observatory, has conducted extensive research on both the Sumatran fault and the Sumatran subduction zone. Below, Sieh provides scientific background and context for the December 26, 2004 earthquake that struck Aceh, Indonesia.

The earthquake that struck northern Sumatra on December 26, 2004, was the world's largest earthquake since the great (magnitude 9.2) Alaskan earthquake of 1964. The great displacements of the sea floor associated with the earthquake produced exceptionally large tsunami waves that spread death and destruction throughout the Bay of Bengal, from Northern Sumatra to Thailand, Sri Lanka, and India.

The earthquake originated along the boundary between the Indian/Australian and Eurasian tectonic plates, which arcs 5,500 kilometers (3,400 miles) from Myanmar past Sumatra and Java toward Australia (see Figure 1). Near Sumatra, the Indian/Australian plate is moving north-northeast at about 60 millimeters (2.4 in.) per year with respect to Southeast Asia. The plates meet 5 kilometers (3 miles) beneath the sea at the Sumatran Trench, on the floor of the Indian Ocean (Figure 2). The trench runs roughly parallel to the western coast of Sumatra, about 200 kilometers (125 miles) offshore. At the trench, the Indian/Australian plate is being subducted; that is, it is diving into the earth's interior and being overridden by Southeast Asia. The contact between the two plates is an earthquake fault, sometimes called a "megathrust" (Figure 3). The two plates do not glide smoothly past each other along the megathrust but move in "stick-slip" fashion. This means that the megathrust remains locked for centuries, and then slips suddenly a few meters, generating a large earthquake.

History reveals that the subduction megathrust does not rupture all at once along the entire 5,500-kilometer plate boundary. The U.S. Geological Survey reports that the rupture began just north of Simeulue Island (Figure 4). From the analysis of seismograms, Caltech seismologist Chen Ji has found that from this origin point, the major rupture propagated northward about 400 kilometers (249 miles) along the megathrust at about two kilometers per second. By contrast, the extent of major aftershocks suggests that the rupture extended about a thousand kilometers (620 miles) northward to the vicinity of the Andaman Islands. During the rupture, the plate on which Sumatra and the Andaman Islands sit lurched many meters westward over the Indian plate.

The section of the subduction megathrust that runs from Myanmar southward across the Andaman Sea, then southeastward off the west coast of Sumatra, has produced many large and destructive earthquakes in the past two centuries (Figure 5). In 1833, rupture of a long segment offshore central Sumatra produced an earthquake of about magnitude 8.7 and attendant large tsunamis. In 1861, a section just north of the equator produced a magnitude 8.5 earthquake and large tsunamis. Other destructive historical earthquakes and tsunamis have been smaller. A segment to the north of the Nicobar Islands ruptured in 1881, generating an earthquake with an estimated magnitude of 7.9. A short segment farther to the south, under the Batu Islands, ruptured in 1935 (magnitude 7.7). A segment under Enggano Island ruptured in 2000 (magnitude 7.8), and a magnitude 7.4 precursor to the recent earthquake occurred in late 2002, under Simeulue Island.

This recent earthquake was generated by the seismic rupture of only the northernmost portion of the Sumatran section of the megathrust. The fact that most of the rest of the section has generated few great earthquakes in more than a hundred years is therefore worrisome. Paleoseismic research has shown that seismic ruptures like the one in 1833, for example, recur about every two centuries. Thus, other parts of this section of the fault should be considered dangerous over the next few decades.

During rupture of a subduction megathrust, the portion of Southeast Asia that overlies the megathrust jumps westward (toward the trench) by several meters, and upward by 1-3 meters (3-10 feet). This raises the overlying ocean, so that there is briefly a "hill" of water about 1-3 meters high overlying the rupture. The flow of water downward from this hill triggers a series of broad ocean waves that are capable of traversing the entire Bay of Bengal. When the waves reach shallow water they slow down and increase greatly in height--up to 10 meters (32 feet) or so in the case of the December 26 earthquake--and thus are capable of inundating low-lying coastal areas.
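
The slowing and steepening of the waves in shallow water follows from the standard long-wavelength relation in which wave speed depends only on water depth. The short sketch below works through illustrative depths; the specific values are assumptions, not measurements from the Bay of Bengal.

    import math

    # Long-wavelength (tsunami) waves travel at c = sqrt(g * depth).  The depths
    # below are generic illustrative values, not Bay of Bengal measurements.

    g = 9.81  # gravitational acceleration, m/s^2

    def wave_speed(depth_m):
        """Shallow-water wave speed for the given depth."""
        return math.sqrt(g * depth_m)

    for depth in (4000, 100, 10):   # open ocean, continental shelf, near shore
        c = wave_speed(depth)
        print(f"depth {depth:>4} m: {c:6.1f} m/s (~{c * 3.6:4.0f} km/h)")
    # As the wave slows near shore, its energy is squeezed into a shorter, taller
    # wave, which is why it can rise to several meters at the coast.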

Although the tsunami waves subside in a short period of time, some coastal areas east of the megathrust sink by a meter or so, leading to permanent swamping of previously dry, habitable ground. Islands above the megathrust rise 1 to 3 meters, so that shallow coral reefs emerge from the sea. Such long-term changes resulting from the December 26 earthquake will be mapped in the next few months by Indonesian geologists and their colleagues.

Writer: MW

More Stormy Weather on Titan

PASADENA, Calif.--Titan, it turns out, may be a very stormy place. In 2001, a group of astronomers led by Henry Roe, now a postdoctoral scholar at the California Institute of Technology, discovered methane clouds near the south pole of Saturn's largest moon, resolving a debate about whether such clouds exist amid the haze of its atmosphere.

Now Roe and his colleagues have found similar atmospheric disturbances at Titan's temperate mid-latitudes, about halfway between the equator and the poles. In a bit of ironic timing, the team made its discovery using two ground-based observatories, the Gemini North and Keck 2 telescopes on Mauna Kea, in Hawaii, in the months before the Cassini spacecraft arrived at Saturn and Titan. The work will appear in the January 1, 2005, issue of the Astrophysical Journal.

"We were fortunate to catch these new mid-latitude clouds when they first appeared in late 2003 and early 2004," says Roe, who is a National Science Foundation Astronomy and Astrophysics Postdoctoral Scholar at Caltech. Much of the credit goes to the resolution and sensitivity of the two ground-based telescopes and their use of adaptive optics, in which a flexible mirror rapidly compensates for the distortions caused by turbulence in the Earth's atmosphere. These distortions are what cause the well-known twinkling of the stars. Using adaptive optics, details as small as 300 kilometers across can be distinguished despite the enormous distance of Titan (1.3 billion kilometers). That's equivalent to reading an automobile license plate from 100 kilometers away.

Still to be determined, though, is the cause of the clouds. According to Chad Trujillo, a former Caltech postdoctoral scholar and now a scientist at the Gemini Observatory, Titan's weather patterns can be stable for many months, with only occasional bursts of unusual activity like these recently discovered atmospheric features.

Like Earth's, Titan's atmosphere is mostly nitrogen. Unlike Earth, Titan is inhospitable to life due to the lack of atmospheric oxygen and to its extremely cold surface temperatures (-297 degrees Fahrenheit). Along with nitrogen, Titan's atmosphere also contains a significant amount of methane, which may be the cause of the mid-latitude clouds.

Conditions on Earth allow water to exist in liquid, solid, or vapor states, depending on localized temperatures and pressures. The phase changes of water between these states are an important factor in the formation of weather in our atmosphere. But on Titan, methane rules. The moon's atmosphere is so cold that any water is frozen solid, but methane can move between liquid, solid, and gaseous states. This leads to a methane meteorological cycle on Titan that is similar to the water-based weather cycle on Earth.

While the previously discovered south polar clouds are thought to be a result of solar surface heating, the new mid-latitude clouds cannot be formed by the same mechanism. One possible explanation for the new clouds is a seasonal shift in the global winds. More likely, says Roe, surface activity might be disturbing the atmosphere at the mid-latitude location. Geysers of methane slush may be brewing up from below, or a warm spot on Titan's surface may be heating the atmosphere. Cryovolcanism--volcanic activity that spews an icy mix of chemicals--is another mechanism that could cause disturbances. Hints about what is happening on this frigid world could be obtained as the Huygens probe, which will be released from Cassini on Christmas day, drops through Titan's atmosphere in January 2005.

If the clouds are being caused by these geological conditions, says Roe, they should stay at the observed 40-degree latitude and repeatedly occur above the same surface feature or features. If, on the other hand, a seasonal shift in the winds is forming the clouds, then their locations should move northward as Titan's season progresses into southern summer. "Continued observations with the Gemini and Keck telescopes will easily distinguish between these two scenarios," says Roe.

The Gemini observatory is operated by the Association of Universities for Research in Astronomy under a cooperative agreement with the National Science Foundation, involving the National Optical Astronomy Observatory, AURA, and the NSF as the U.S. partner. The W.M. Keck Observatory is operated by the California Association for Research in Astronomy, a scientific partnership between the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration.

Writer: JP

Physicists at Caltech, UT Austin Report Bose-Einstein Condensation of Cold Excitons

PASADENA, Calif.--Bose-Einstein condensates are enigmatic states of matter in which huge numbers of particles occupy the same quantum state and, for all intents and purposes, lose their individual identity. Predicted long ago by Albert Einstein and Satyendranath Bose, these bizarre condensates have recently become one of the hottest topics in physics research worldwide.

Now, physicists at the California Institute of Technology and the University of Texas at Austin have created a sustained Bose-Einstein condensate of excitons, unusual particles that inhabit solid semiconductor materials. By contrast, most recent work on the phenomenon has focused on supercooled dilute gases, in which the freely circulating atoms of the gas are cooled to a temperature at which they all fall into the lowest-energy quantum state. The new Caltech-UT Austin results are being published this week in the journal Nature.

According to Jim Eisenstein, who is the Roshek Professor of Physics at Caltech and co-lead author of the paper, exciton condensation was first predicted over 40 years ago but has remained undiscovered until now because the excitons usually decay in about a billionth of a second. In this new work, the researchers created stable excitons, which consist of an electron in one layer of a sandwich-like semiconductor structure bound to a positively charged "hole" in an adjacent layer. A hole is the vacancy created when an electron is removed from a material.

Bound together, the electron and hole form a "boson," a type of particle that does not mind crowding together with other similar bosons into the same quantum state. The other type of particle in the universe, the "fermion," includes individual protons, electrons, and neutrons. Only one fermion is allowed to occupy a given quantum state.

The picture is complex, but it is somewhat easier to visualize if one imagines two layers of material, one containing some electrons, the other completely empty. Begin by transferring half of the electrons from the full layer to the empty one. The resulting situation is equivalent to a layer of electrons in parallel with a layer of holes. And because the electron has a negative charge, taking an electron away leaves behind a hole that carries a positive charge.

The difficult thing about the procedure is that the layers have to be positioned just right and a large magnetic field has to be applied just right in order to avoid swamping the subtle binding of the electron and hole by other forces in the system. The magnetic field is also essential for stabilizing the excitons and preventing their decay.

Eisenstein says that the simplest experiment consists of sending electrical currents through the two layers in opposite directions. The "smoking gun" for exciton condensation is the absence of the ubiquitous sideways force experienced by charged particles moving in magnetic fields. Excitons, which have no net charge, should not feel such a force.

One mystery that remains is the tendency of the excitons to dump a small amount of energy when they move. "We find that, as we go toward lower temperatures, energy dissipation does become smaller and smaller," Eisenstein says. "But we expected no energy dissipation at all.

"Therefore, this is not really an ideal superfluid--so far it is at best a bad one."

The other author of the paper is Allan MacDonald, who holds the Sid W. Richardson Foundation Regents Chair in physics at UT Austin and is a specialist in theoretical condensed matter physics.

Writer: Robert Tindol

Caltech computer scientists embed computation in a DNA crystal to create microscopic patterns

PASADENA, Calif.--In a demonstration that holds promise for future advances in nanotechnology, California Institute of Technology computer scientists have succeeded in building a DNA crystal that computes as it grows. As the computation proceeds, it creates a triangular fractal pattern in the DNA crystal.

This is the first time that a computation has been embedded in the growth of any crystal, and the first time that computation has been used to create a complex microscopic pattern. And, the researchers say, it is one step toward the nanoscientists' dream of mastering construction techniques at the molecular level.

Reporting in the December issue of the journal Public Library of Science (PLoS) Biology, Caltech assistant professor Erik Winfree and his colleagues show that DNA "tiles" can be programmed to assemble themselves into a crystal bearing a pattern of progressively smaller "triangles within triangles," known as a Sierpinski triangle. This fractal pattern is more complex than patterns found in natural crystals because it never repeats. Natural crystals, by contrast, all bear repeating patterns like those commonly found in the tiling of a bathroom floor. And, because each DNA tile is a tiny knot of DNA with just 150 base pairs (an entire human genome has some 3 billion), the resulting Sierpinski triangles are microscopic. The Winfree team reports growing micron-size DNA crystals (about a hundredth the width of a human hair) that contain numerous Sierpinski triangles.

A key feature of the Caltech team's approach is that the DNA tiles assemble into a crystal spontaneously. Comprising a knot of four DNA strands, each DNA tile has four loose ends known as "sticky ends." These sticky ends are what bind one DNA tile to another. A sticky end with a particular DNA sequence can be thought of as a special type of glue, one that only binds to a sticky end with a complementary DNA sequence, a special "anti-glue." For their experiments, the authors just mixed the DNA tiles into salt water and let the sticky ends do the work, self-assembling the tiles into a Sierpinski triangle. In nanotechnology this "hands-off" approach to manufacturing is a desirable property, and a common theme.

The novel aspect of the research is the translation of an algorithm--the basic method underlying a computer program--into the process of crystal growth. A well-known algorithm for drawing a Sierpinski triangle starts with a sequence of 0s and 1s. It redraws the sequence over and over again, filling up successive rows on a piece of paper, each time performing binary addition, without carries, on adjacent digits.

The result is a Sierpinski triangle built out of 0s and 1s. To embed this algorithm in crystal growth, the scientists represented written rows of binary "0s" and "1s" as rows of DNA tiles in the crystal--some tiles stood for 0, and others for 1. To emulate addition, the sticky ends were designed to ensure that whenever a free tile stuck to tiles already in the crystal, it represented the sum of the tiles it was sticking to.
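
For readers who want to see the rule itself, a short Python sketch is given below; the crystal, of course, carries out the equivalent computation through sticky-end chemistry rather than software, and the row count and display symbols here are arbitrary choices for illustration.

    # The Sierpinski rule described above: each new row is the carry-free binary
    # sum (XOR) of adjacent digits in the previous row.  Printing the rows with a
    # "*" for each 1 reveals the triangles-within-triangles pattern.

    def next_row(row):
        """XOR each pair of neighbours, padding with zeros so the pattern can widen."""
        padded = [0] + row + [0]
        return [padded[i] ^ padded[i + 1] for i in range(len(padded) - 1)]

    n_rows = 16
    rows = [[1]]
    for _ in range(n_rows - 1):
        rows.append(next_row(rows[-1]))

    for row in rows:
        print("".join("*" if bit else " " for bit in row).center(2 * n_rows))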

The process was not without error, however. Sometimes DNA tiles stuck in the wrong place, computing the wrong sum, and destroying the pattern. The largest perfect Sierpinski triangle that grew contained only about 200 DNA tiles. But it is the first time any such thing has been done and the researchers believe they can reduce errors in the future.

In fact the work is the first experimental demonstration of a theoretical concept that Winfree has been developing since 1995--his proposal that any algorithm can be embedded in the growth of a crystal. This concept, according to Winfree's coauthor and Caltech research fellow Paul W. K. Rothemund, has inspired an entirely new research field, "algorithmic self-assembly," in which scientists study the implications of embedding computation into crystal growth.

"A growing group of researchers has proposed a series of ever more complicated computations and patterns for these crystals, but until now it was unclear that even the most basic of computations and patterns could be achieved experimentally," Rothemund says.

Whether larger, more complicated computations and patterns can be created depends on whether Winfree's team can reduce the errors. Whether the crystals will be useful in nanotechnology may depend on whether the patterns can be turned into electronic devices and circuits, a possibility being explored at other universities including Duke and Purdue.

Nanotechnology applications aside, the authors contend that the most important implication of their work may be a better understanding of how computation shapes the physical world around us. "If algorithmic concepts can be successfully adapted to the molecular context," the authors write, "the algorithm would join energy and entropy as essential concepts for understanding how physical processes create order."

Winfree is an assistant professor of computation and neural systems and computer science; Rothemund is a senior research fellow in computer science and computation and neural systems. The third author is Nick Papadakis, a former staff member in computer science.

 

Writer: Robert Tindol
