Caltech Physics Team Invents Device for Weighing Individual Molecules

PASADENA, Calif.--Physicists at the California Institute of Technology have created the first nanodevices capable of weighing individual biological molecules. This technology may lead to new forms of molecular identification that are cheaper and faster than existing methods, as well as revolutionary new instruments for proteomics.

According to Michael Roukes, professor of physics, applied physics, and bioengineering at Caltech and the founding director of Caltech's Kavli Nanoscience Institute, the technology his group has announced this week shows the immense potential of nanotechnology for creating transformational new instrumentation for the medical and life sciences. The new devices are at the nanoscale, he explains, since their principal component is significantly less than a millionth of a meter in width.

The Caltech devices are "nanoelectromechanical resonators"--essentially tiny tuning forks about a micron in length and a hundred or so nanometers wide that have a very specific frequency at which they vibrate when excited. Just as a bronze bell rings at a certain frequency based on its size, shape, and composition, these tiny tuning forks ring at their own fundamental frequency of mechanical vibration, although at such a high pitch that the "notes" are nearly as high in frequency as microwaves.

The researchers set up electronic circuitry to continually excite and monitor the frequency of the vibrating bar. Intermittently, a shutter is opened to expose the nanodevice to an atomic or molecular beam, in this case a very fine "spray" of xenon atoms or nitrogen molecules. Because the nanodevice is cooled, the molecules condense on the bar and add their mass to it, thereby lowering its frequency. In other words, the mechanical vibrations of the now slightly-more-massive nanodevice become slightly lower in frequency--just as thicker, heavier strings on an instrument sound notes that are lower than lighter ones.

Because frequency can be measured so precisely in physics labs, the researchers are able to detect extremely subtle changes in the mass of the nanodevice and, therefore, the weight of the added atoms or molecules.
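As a rough illustration of the principle (not the group's actual analysis), a simple harmonic resonator has a resonance frequency f = (1/2*pi)*sqrt(k/m), so a small added mass lowers the frequency by a fractional amount equal to about half the fractional mass change. The added mass can therefore be inferred from the measured frequency shift. A minimal sketch, with invented device parameters:

    def added_mass_from_shift(m_eff_kg, f0_hz, delta_f_hz):
        """Estimate added mass from a resonance-frequency shift.

        Assumes a simple harmonic resonator, f = (1/2*pi)*sqrt(k/m), so for a
        small added mass delta_m << m_eff:
            delta_f / f0 ~= -delta_m / (2 * m_eff)
        Illustrative only; these are not the Caltech device's parameters.
        """
        return -2.0 * m_eff_kg * (delta_f_hz / f0_hz)

    m_eff = 1e-17        # assumed effective resonator mass: 10 femtograms, in kg
    f0 = 500e6           # assumed resonance frequency: 500 MHz
    delta_f = -50.0      # assumed measured downward frequency shift, in Hz

    dm_kg = added_mass_from_shift(m_eff, f0, delta_f)
    print(f"Estimated added mass: {dm_kg:.1e} kg = {dm_kg * 1e24:.1f} zeptograms")
    # 2.0e-24 kg, i.e. about two zeptograms -- the scale quoted by the researchers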

Roukes says that their current generation of devices is sensitive to added mass at the level of a few zeptograms, which is a few billionths of a trillionth of a gram. In their experiments this represents about thirty xenon atoms, and it is the typical mass of an individual protein molecule.

"We hope to transform this chip-based technology into systems that are useful for picking out and identifying specific molecules, one-by-one--for example certain types of proteins secreted in the very early stages of cancer," Roukes says.

"The fundamental problem with identifying these proteins is that one must sort through millions of molecules to make the measurement. You need to be able to pick out the 'needle' from the 'haystack,' and that's hard to do, among other reasons because 95 percent of the proteins in the blood have nothing to do with cancer."

The new method might ultimately permit the creation of microchips, each possessing arrays of miniature mass spectrometers, which are devices for identifying molecules based on their weight. Today, high-throughput proteomics searches are often done at facilities possessing arrays of conventional mass spectrometers that fill an entire laboratory and can cost upwards of a million dollars each, Roukes adds. By contrast, future nanodevice-based systems should cost a small fraction of today's technology, and an entire massively-parallel nanodevice system will probably ultimately fit on a desktop.

Roukes says his group has the means in hand to push mass sensing to even more sensitive levels, probably to the point that individual hydrogen atoms can be weighed. Such an exquisitely accurate method of determining atomic-scale masses would be quite useful in areas such as quantum optics, in which individual atoms are manipulated.

The next step for Roukes' team at Caltech is to engineer the interfaces so that individual biological molecules can be weighed. For this, the team will likely collaborate with various proteomics labs for side-by-side comparisons of already known information on the mass of biological molecules with results obtained with the new method.

Roukes announced the technology in Los Angeles on Wednesday, March 24, at a news conference during the annual American Physical Society convention. Further results will be published in the near future.

The Caltech team behind the zeptogram result included Dr. Ya-Tang Yang, former graduate student in applied physics, now at Applied Materials; Dr. Carlo Callegari, former postdoctoral associate, now a professor at the University of Graz, Austria; Xiaoli Feng, current graduate student in electrical engineering; and Dr. Kamil Ekinci, former postdoctoral associate, now a professor at Boston University.

Writer: 
Robert Tindol

Scientists Discover What You Are Thinking

PASADENA, Calif. - By decoding signals coming from neurons, scientists at the California Institute of Technology have confirmed that an area of the brain known as the ventrolateral prefrontal cortex (vPF) is involved in the planning stages of movement, that instantaneous flicker of time when we contemplate moving a hand or other limb. The work has implications for the development of a neural prosthesis, a brain-machine interface that will give paralyzed people the ability to move and communicate simply by thinking.

By piggybacking on therapeutic work being conducted on epileptic patients, Daniel Rizzuto, a postdoctoral scholar in the lab of Richard Andersen, the Boswell Professor of Neuroscience, was able to predict the location of a target the patient was looking at, as well as where the patient was going to move his hand. The work currently appears in the online version of Nature Neuroscience.

Most research in this field involves tapping into the areas of the brain that directly control motor actions, hoping that this will give patients the rudimentary ability to move a cursor, say, or a robotic arm with just their thoughts. Andersen, though, is taking a different tack. Instead of the primary motor areas, he taps into the planning stages of the brain, the posterior parietal and premotor areas.

Rizzuto looked at another area of the brain to see if planning could take place there as well. Until this work, the idea that spatial processing or movement planning took place in the ventrolateral prefrontal cortex has been a highly contested one. "Just the fact that these spatial signals are there is important," he says. "Based upon previous work in monkeys, people were saying this was not the case." Rizzuto's work is the first to show these spatial signals exist in humans.

Rizzuto took advantage of clinical work being performed by Adam Mamelak, a neurosurgeon at Huntington Memorial Hospital in Pasadena. Mamelak was treating three patients who suffered from severe epilepsy, trying to identify the brain areas where their seizures originated so that those areas could be surgically removed. As part of this process, Mamelak implanted electrodes into the vPF.

"So for a couple of weeks these patients are lying there, bored, waiting for a seizure," says Rizzuto, "and I was able to get their permission to do my study, taking advantage of the electrodes that were already there." The patients watched a computer screen for a flashing target, remembered the target location through a short delay, then reached to that location. "Obviously a very basic task," he says.

"We were looking for the brain regions that may be contributing to planned movements. And what I was able to show is that a part of the brain called the ventrolateral prefrontal cortex is indeed involved in planning these movements." Just by analyzing the brain activity from the implanted electrodes using software algorithms that he wrote, Rizzuto was able to tell with very high accuracy where the target was located while it was on the screen, and also what direction the patient was going to reach to when the target wasn't even there.

Unlike most labs doing this type of research, Andersen's lab is looking at the planning areas of the brain rather than the primary motor area of the brain, because they believe the planning areas are less susceptible to damage. "In the case of a spinal cord injury," says Rizzuto, "communication to and from the primary motor cortex is cut off." But the brain still performs the computations associated with planning to move. "So if we can tap into the planning computations and decode where a person is thinking of moving," he says, then it just becomes an engineering problem--the person can be hooked up to a computer where he can move a cursor by thinking, or can even be attached to a robotic arm.

Andersen notes, "Dan's results are remarkable in showing that the human ventral prefrontal cortex, an area previously implicated in processing information about objects, also processes the intentions of subjects to make movements. This research adds ventral prefrontal cortex to the list of candidate brain areas for extracting signals for neural prosthetics applications."

In Andersen's lab, Rizzuto's goal is to take the technology they've perfected in animal studies to human clinical trials. "I've already met with our first paralyzed patient, and graduate student Hilary Glidden and I are now doing noninvasive studies to see how the brain reorganizes after paralysis," he says. If it does reorganize, he notes, all the technology that has been developed in non-paralyzed humans may not work. "This is why we think our approach may be better, because we already know that the primary motor area shows pathological reorganization and degeneration after paralysis. We think our area of the brain is going to reorganize less, if at all. After this we hope to implant paralyzed patients with electrodes so that they may better communicate with others and control their environment."

Writer: 
MW

New study provides insights into the brain's remembrance of emotional events

PASADENA, Calif.--Those of us who are old enough to remember the Kennedy assassination are usually able to remember the initial announcement almost as if it's a movie running in our heads. That's because there is a well-known tendency for people to have enhanced memory of a highly emotional event, and further, a memory that focuses especially on the "gist" of the event.

In other words, people who remember the words "President Kennedy is dead" will remember the news extraordinarily well. But at the same time, they will likely have no more recollection of extraneous details such as what they were wearing or what they were doing an hour before hearing the news than they would for any other day in 1963. Neurobiologists have known both these phenomena to be true for some time, and a new study now explains how the brain achieves this effect.

In the new study, researchers from the California Institute of Technology and the University of Iowa College of Medicine show how the recollections of gist and details of emotional events are related to specific parts of the brain. In an article appearing in this month's Nature Neuroscience, the authors report that patients with damage to an area of the brain known as the amygdala are unable to remember the gist of an emotional stimulus, even though there is nothing otherwise faulty in their memory. The study shows that the amygdala somehow focuses the brain's processing resources on the gist of an emotional event.

"During a highly emotional event, like the Kennedy assassination, 9/11, or the Challenger accident, you remember the gist much better than you would remember the gist of some other neutral event," says Ralph Adolphs, a professor of psychology and neuroscience at Caltech and lead author of the study. "But people with damage to the amygdala have a failure to put this special tag on the gist of emotional memories. In other words, they remember the gist of an emotional event no better than the gist of a neutral event."

To test their hypothesis, Adolphs and his colleagues at the University of Iowa College of Medicine showed a group of normal control subjects and a group of test subjects known to have amygdala damage a series of pictures accompanied by fabricated stories. One type of series involved fairly mundane episodes in which, for example, a family was depicted driving somewhere and returning home uneventfully. But in the other series, the story would relate a tragic event, such as the family having been involved in a fatal auto accident on the way home, accompanied by gruesome pictures of amputated limbs.

As expected, the normal control subjects had enhanced recall of the emotional stories and pictures, and far more vague recall of the mundane stories. The test subjects with amygdala damage, however, possessed no better recall of the gist of the emotional story than of the mundane stories. On the other hand, both the control group and the group with amygdala damage showed about equal ability to remember details from stories with no emotional content.

The findings suggest that the amygdala is responsible for our ability to have strong recollections of emotional events, Adolphs says. Further study could point to how the amygdala is involved in impaired real-life emotional memories seen in patients with post-traumatic stress disorder and Alzheimer's disease, he adds.

The other authors of the article are Daniel Tranel and Tony W. Buchanan, both of the University of Iowa College of Medicine's Department of Neurology.

Writer: 
Robert Tindol

Negative Impacts of Dam Construction on Human Populations Can Be Reduced, Author Says

PASADENA, Calif.--Despite the adverse impacts of large dam construction on ecosystems and human settlements, more and more dams are likely to be built in the 21st century wherever there is a need to store water for irrigated agriculture, urban water supplies, and power generation. But world societies and governments would do well to evaluate the consequences of dam construction as an integral part of the planning process, a leading authority writes in a new book.

The book, The Future of Large Dams, is the latest work by California Institute of Technology anthropologist Thayer Scudder, who is arguably the world's foremost expert on the impact of dam construction on human societies living along major world rivers. Published by Earthscan, the book argues that early analysis by affected stakeholders of the impact of a dam's proposed construction is not only a worthwhile undertaking, but also one that is quite possible to accomplish with established research techniques.

According to Scudder, large dams are a "flawed yet still necessary development option." Flaws include both the shortcomings of the dam itself as well as ecological and social impacts. In terms of the former, Scudder says that dams, on the average, can be expected to get clogged with sediment at a rate of about 0.5 to 1 percent per year. And in terms of the latter, changing habitat caused by the flooding of land behind and below dams is certain to change the habits of nearby humans and animals alike--if not devastate both.

"Although dams have their problems, they're unfortunately still necessary because of the growing needs of humans for water storage," says Scudder. "That's the dilemma."

Given that governments throughout the world-- the United States included--will continue to dam rivers, Scudder says it's important to take into consideration that hundreds of millions of people have been adversely affected by dams in the last century. Somewhere between 40 and 80 million people have been forcibly relocated by the flooding of the land on which they live to create the reservoirs above the dams. Furthermore, even larger numbers of people have had their lives and livelihoods disrupted by the change of the river flow below dams.

"Lots of people in many places in the world are dependent on the natural flow of rivers, and the consequences can be the sort of things you might not normally even take into account," he says. "For example, a settlement that depends on an annual flooding of agricultural land when the river rises can be wiped out if the regulated flow of the dam causes the annual flooding to cease."

Scudder, in fact, wrote his doctoral dissertation many years ago on such an instance, in which the construction of a dam obliterated the most productive component of an upstream farming system.

"But the book argues that, despite these adverse impacts, there are state-of-the-art ways of addressing them," he says. "For example, if local populations downstream have been depending on an annual inundation of an agricultural flood plain, then the authorities in charge and other stakeholders should consider a controlled release of water that recreates the flooding conditions. Experiments have been done with coordinating hydropower generation and flood recession irrigation needs with the release of 'environmental flows'--that is, releases of water to protect habitats and communities. This approach has been tried in several African countries, and research has shown in other cases that managed floods would be a 'win-win' option."

In general, the way to make dams work for humans everywhere, Scudder suggests, is to address the social and environmental impacts both downstream and upstream of any dam project before the structure is even built, and to evaluate the situations in river basins where dams have already been constructed.

Finally, the political and institutional aspects of dam construction should be addressed, Scudder says. Too often, a dam project is undertaken at a specific locale because of its political expedience, and this is not the best way to minimize the negative human and ecological impact. Likewise, ill-considered restructuring of the governmental departments that oversee dams can magnify negative environmental, agricultural, and other impacts.

"We should all be able to benefit from the dams that are to be built in the future rather than suffer from them," he concludes.

Review copies of the book are available from Earthscan Sales and Marketing Administrator Michael Fell by e-mailing him at Michael.Fell@earthscan.co.uk or calling +44 (0)20 7121 3154.

 

Writer: 
Robert Tindol

Neuroscientists discover that humans evaluate emotions by looking at the eyes

PASADENA, Calif.--If your mother ever told you to watch out for strangers with shifty eyes, you can start taking her advice to heart. Neuroscientists exploring a region of the brain associated with the recognition of emotional expressions have concluded that it is the eye region that we scan when our brains process information about other people's emotions.

Reporting in the January 6 issue of the journal Nature, California Institute of Technology neuroscientist Ralph Adolphs and colleagues at the University of Iowa, University of Montreal, and University of Glasgow describe new results they have obtained with a patient suffering from a rare genetic malady that has destroyed her brain's amygdala. The amygdala, one of which lies on each side of the brain in the medial temporal lobe, is known to process information about facial emotions. The patient, who has been studied by the researchers at the University of Iowa for a decade, shows an intriguing inability to recognize fear and other emotions from facial expressions.

"The fact that the amygdala is involved in fear recognition has been borne out by a large number of studies," explains Adolphs. "But until now the mechanisms through which amygdala damage compromises fear recognition have not been identified."

Although Adolphs and his colleagues have known for years that the woman is unable to recognize fear from facial expressions in others, they didn't know until recently that her problem was an inability to focus on the eye region of others when judging their emotions. They discovered this by carefully recording the way her eyes focused on pictures of faces.

In normal test subjects, a person's eyes dart from area to area of a face in a quick, largely unconscious program of evaluating facial expressions to recognize emotions. The woman, by contrast, tended to stare straight ahead at the photographs, displaying no tendency to regard the eyes at all. As a result, she was nonjudgmental in her interpersonal dealings, often trusting even those individuals who didn't deserve the benefit of the doubt.

However, the good news is that the woman could be trained to look at the eyes in the photographs, even though she had no natural inclination to do so. When she deliberately looked at the eyes upon being instructed to do so, she had a normal ability to recognize fear in the faces.

According to Adolphs, the study is a step forward in better understanding the human brain's perceptual mechanisms, and also a practical key in possible therapies to help certain patients with defective emotional perception lead more normal lives.

In terms of the former, Adolphs says that the amygdala's role in fear recognition will probably be better understood with additional research such as that now going on in Caltech's new magnetic resonance imaging lab. "It would be naïve to ascribe these findings to one single brain structure," he says. "Many parts of the brain work together, so a more accurate picture will probably relate cognitive abilities to a network of brain structures.

"Therefore, the things the amygdala do together with other parts of the brain are going to be a complex matter that will take a long time to figure out."

However, the very fact that the woman could be trained to evaluate fear in other people's faces is encouraging news for individuals with autism and other maladies that cause problems in their recognizing other people's emotions, Adolphs says.

"Maybe people with autism could be helped if they were trained how to look at the world and how to look at people's faces to improve their social functioning," he says.

Adolphs is a professor of psychology and neuroscience at Caltech, and holds a joint appointment at the University of Iowa College of Medicine. The other authors of the paper are Frederic Gosselin, Tony Buchanan, Daniel Tranel, Philippe Schyns, and Antonio Damasio.

Writer: 
Robert Tindol

The Science behind the Aceh Earthquake

PASADENA, Calif. - Kerry Sieh, the Robert P. Sharp Professor of Geology at the California Institute of Technology and a member of Caltech's Tectonics Observatory, has conducted extensive research on both the Sumatran fault and the Sumatran subduction zone. Below, Sieh provides scientific background and context for the December 26, 2004 earthquake that struck Aceh, Indonesia.

The earthquake that struck northern Sumatra on December 26, 2004, was the world's largest earthquake since the great (magnitude 9.2) Alaskan earthquake of 1964. The great displacements of the sea floor associated with the earthquake produced exceptionally large tsunami waves that spread death and destruction throughout the Bay of Bengal, from Northern Sumatra to Thailand, Sri Lanka, and India.

The earthquake originated along the boundary between the Indian/Australian and Eurasian tectonic plates, which arcs 5,500 kilometers (3,400 miles) from Myanmar past Sumatra and Java toward Australia (see Figure 1). Near Sumatra, the Indian/Australian plate is moving north-northeast at about 60 millimeters (2.4 in.) per year with respect to Southeast Asia. The plates meet 5 kilometers (3 miles) beneath the sea at the Sumatran Trench, on the floor of the Indian Ocean (Figure 2). The trench runs roughly parallel to the western coast of Sumatra, about 200 kilometers (125 miles) offshore. At the trench, the Indian/Australian plate is being subducted; that is, it is diving into the earth's interior and being overridden by Southeast Asia. The contact between the two plates is an earthquake fault, sometimes called a "megathrust" (Figure 3). The two plates do not glide smoothly past each other along the megathrust but move in "stick-slip" fashion. This means that the megathrust remains locked for centuries, and then slips suddenly a few meters, generating a large earthquake.
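A rough back-of-the-envelope calculation using the convergence rate quoted above (illustrative only, with an assumed locking interval) shows why centuries of locking translate into meters of sudden slip:

    convergence_rate_mm_per_yr = 60    # plate convergence near Sumatra, from the text
    locked_years = 200                 # assumed time the megathrust stays locked

    slip_deficit_m = convergence_rate_mm_per_yr * locked_years / 1000.0
    print(f"Slip stored over {locked_years} years: about {slip_deficit_m:.0f} m")
    # 60 mm/yr x 200 yr = 12 m of stored slip, available for release in one great earthquake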

History reveals that the subduction megathrust does not rupture all at once along the entire 5,500-kilometer plate boundary. The U.S. Geological Survey reports that the rupture began just north of Simeulue Island (Figure 4). From the analysis of seismograms, Caltech seismologist Chen Ji has found that from this origin point, the major rupture propagated northward about 400 kilometers (249 miles) along the megathrust at about two kilometers per second. By contrast, the extent of major aftershocks suggests that the rupture extended about a thousand kilometers (620 miles) northward to the vicinity of the Andaman Islands. During the rupture, the plate on which Sumatra and the Andaman Islands sit lurched many meters westward over the Indian plate.

The section of the subduction megathrust that runs from Myanmar southward across the Andaman Sea, then southeastward off the west coast of Sumatra, has produced many large and destructive earthquakes in the past two centuries (Figure 5). In 1833, rupture of a long segment offshore of central Sumatra produced an earthquake of about magnitude 8.7 and attendant large tsunamis. In 1861, a section just north of the equator produced a magnitude 8.5 earthquake and large tsunamis. Other destructive historical earthquakes and tsunamis have been smaller. A segment to the north of the Nicobar Islands ruptured in 1881, generating an earthquake with an estimated magnitude of 7.9. A short segment farther to the south, under the Batu Islands, ruptured in 1935 (magnitude 7.7). A segment under Enggano Island ruptured in 2000 (magnitude 7.8), and a magnitude 7.4 precursor to the recent earthquake occurred in late 2002, under Simeulue Island.

This recent earthquake was generated by seismic rupture of only the northernmost portion of the Sumatran section of the megathrust. The fact that most of the rest of the section has not generated a great earthquake in more than a hundred years is therefore worrisome. Paleoseismic research has shown that seismic ruptures like the one in 1833, for example, recur about every two centuries. Thus, other parts of this section of the fault should be considered dangerous over the next few decades.

During rupture of a subduction megathrust, the portion of Southeast Asia that overlies the megathrust jumps westward (toward the trench) by several meters, and upward by 1-3 meters (3-10 feet). This raises the overlying ocean, so that there is briefly a "hill" of water about 1-3 meters high overlying the rupture. The flow of water downward from this hill triggers a series of broad ocean waves that are capable of traversing the entire Bay of Bengal. When the waves reach shallow water they slow down and increase greatly in height--up to 10 meters (32 feet) or so in the case of the December 26 earthquake--and thus are capable of inundating low-lying coastal areas.
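The slowing and steepening of the waves in shallow water follows the standard shallow-water relations: wave speed c = sqrt(g*h) and, by Green's law, wave height growing roughly as h^(-1/4). The sketch below is a simplified textbook illustration with assumed depths, not a model of this particular tsunami:

    import math

    def wave_speed(depth_m, g=9.81):
        """Shallow-water wave speed, c = sqrt(g * h)."""
        return math.sqrt(g * depth_m)

    def shoaled_height(height_m, depth_from_m, depth_to_m):
        """Green's law: wave height scales roughly as depth**(-1/4)."""
        return height_m * (depth_from_m / depth_to_m) ** 0.25

    deep, shallow = 4000.0, 10.0    # assumed open-ocean and nearshore depths, in meters
    hill = 1.0                      # ~1-meter "hill" of water over the rupture, from the text

    print(f"Speed over {deep:.0f} m of water:   {wave_speed(deep) * 3.6:.0f} km/h")
    print(f"Speed over {shallow:.0f} m of water: {wave_speed(shallow) * 3.6:.0f} km/h")
    print(f"Wave height near shore: roughly {shoaled_height(hill, deep, shallow):.1f} m before run-up")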

Although the tsunami waves subside in a short period of time, some coastal areas east of the megathrust sink by a meter or so, leading to permanent swamping of previously dry, habitable ground. Islands above the megathrust rise 1 to 3 meters, so that shallow coral reefs emerge from the sea. Such long-term changes resulting from the December 26 earthquake will be mapped in the next few months by Indonesian geologists and their colleagues.

Writer: 
MW

More Stormy Weather on Titan

PASADENA, Calif.— Titan, it turns out, may be a very stormy place. In 2001, a group of astronomers led by Henry Roe, now a postdoctoral scholar at the California Institute of Technology, discovered methane clouds near the south pole of Saturn's largest moon, resolving a debate about whether such clouds exist amid the haze of its atmosphere.

Now Roe and his colleagues have found similar atmospheric disturbances at Titan's temperate mid-latitudes, about halfway between the equator and the poles. In a bit of ironic timing, the team made its discovery using two ground-based observatories, the Gemini North and Keck 2 telescopes on Mauna Kea, in Hawaii, in the months before the Cassini spacecraft arrived at Saturn and Titan. The work will appear in the January 1, 2005, issue of the Astrophysical Journal.

"We were fortunate to catch these new mid-latitude clouds when they first appeared in late 2003 and early 2004," says Roe, who is a National Science Foundation Astronomy and Astrophysics Postdoctoral Scholar at Caltech. Much of the credit goes to the resolution and sensitivity of the two ground-based telescopes and their use of adaptive optics, in which a flexible mirror rapidly compensates for the distortions caused by turbulence in the Earth's atmosphere. These distortions are what cause the well-known twinkling of the stars. Using adaptive optics, details as small as 300 kilometers across can be distinguished despite the enormous distance of Titan (1.3 billion kilometers). That's equivalent to reading an automobile license plate from 100 kilometers away.

Still to be determined, though, is the cause of the clouds. According to Chad Trujillo, a former Caltech postdoctoral scholar and now a scientist at the Gemini Observatory, Titan's weather patterns can be stable for many months, with only occasional bursts of unusual activity like these recently discovered atmospheric features.

Like Earth's, Titan's atmosphere is mostly nitrogen. Unlike Earth, however, Titan is inhospitable to life, owing to the lack of atmospheric oxygen and its extremely cold surface temperature (-297 degrees Fahrenheit). Along with nitrogen, Titan's atmosphere also contains a significant amount of methane, which may be the cause of the mid-latitude clouds.

Conditions on Earth allow water to exist in liquid, solid, or vapor states, depending on localized temperatures and pressures. The phase changes of water between these states are an important factor in the formation of weather in our atmosphere. But on Titan, methane rules. The moon's atmosphere is so cold that any water is frozen solid, but methane can move between liquid, solid, and gaseous states. This leads to a methane meteorological cycle on Titan that is similar to the water-based weather cycle on Earth.

While the previously discovered south polar clouds are thought to be a result of solar surface heating, the new mid-latitude clouds cannot be formed by the same mechanism. One possible explanation for the new clouds is a seasonal shift in the global winds. More likely, says Roe, surface activity might be disturbing the atmosphere at the mid-latitude location. Geysers of methane slush may be brewing up from below, or a warm spot on Titan's surface may be heating the atmosphere. Cryovolcanism--volcanic activity that spews an icy mix of chemicals--is another mechanism that could cause disturbances. Hints about what is happening on this frigid world could be obtained as the Huygens probe, which will be released from Cassini on Christmas day, drops through Titan's atmosphere in January 2005.

If the clouds are being caused by these geological conditions, says Roe, they should stay at the observed 40-degree latitude and repeatedly occur above the same surface feature or features. If, on the other hand, a seasonal shift in the winds is forming the clouds, then their locations should move northward as Titan's season progresses into southern summer. "Continued observations with the Gemini and Keck telescopes will easily distinguish between these two scenarios," says Roe.

The Gemini observatory is operated by the Association of Universities for Research in Astronomy under a cooperative agreement with the National Science Foundation, involving the National Optical Astronomy Observatory, AURA, and the NSF as the U.S. partner. The W.M. Keck Observatory is operated by the California Association for Research in Astronomy, a scientific partnership between the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration.

Writer: 
JP

Physicists at Caltech, UT Austin Report Bose-Einstein Condensation of Cold Excitons

PASADENA, Calif.--Bose-Einstein condensates are enigmatic states of matter in which huge numbers of particles occupy the same quantum state and, for all intents and purposes, lose their individual identity. Predicted long ago by Albert Einstein and Satyendranath Bose, these bizarre condensates have recently become one of the hottest topics in physics research worldwide.

Now, physicists at the California Institute of Technology and the University of Texas at Austin have created a sustained Bose-Einstein condensate of excitons, unusual particles that inhabit solid semiconductor materials. By contrast, most recent work on the phenomenon has focused on supercooled dilute gases, in which the freely circulating atoms of the gas are reduced to a temperature where they all fall into the lowest-energy quantum state. The new Caltech-UT Austin results are being published this week in the journal Nature.

According to Jim Eisenstein, who is the Roshek Professor of Physics at Caltech and co-lead author of the paper, exciton condensation was first predicted over 40 years ago but has remained undiscovered until now because the excitons usually decay in about a billionth of a second. In this new work, the researchers created stable excitons, which consist of an electron in one layer of a sandwich-like semiconductor structure bound to a positively charged "hole" in an adjacent layer. A hole is the vacancy created when an electron is removed from a material.

Bound together, the electron and hole form a "boson," a type of particle that does not mind crowding together with other similar bosons in the same quantum state. Particles of the other type, "fermions"--a category that includes individual protons, neutrons, and electrons--behave differently: only one fermion is allowed to occupy a given quantum state.

The picture is complex, but if one imagines two layers of material, one containing some electrons and the other completely empty, the results are somewhat easier to visualize. Begin by transferring half of the electrons from the full layer to the empty one. The resulting situation is equivalent to a layer of electrons in parallel with a layer of holes. And because the electron carries a negative charge, removing it leaves behind a hole that carries a positive charge.

The difficult thing about the procedure is that the layers have to be positioned just right and a large magnetic field has to be applied just right in order to avoid swamping the subtle binding of the electron and hole by other forces in the system. The magnetic field is also essential for stabilizing the excitons and preventing their decay.

Eisenstein says that the simplest experiment consists of sending electrical currents through the two layers in opposite directions. The "smoking gun" for exciton condensation is the absence of the ubiquitous sideways force experienced by charged particles moving in magnetic fields. Excitons, which have no net charge, should not feel such a force.
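In symbols, the relevant relation is simply the textbook Lorentz force on a carrier of charge q moving with velocity v in a magnetic field B (a general statement, not something specific to this experiment):

    \mathbf{F} = q\,\mathbf{v} \times \mathbf{B}

Because an exciton's electron (charge -e) and hole (charge +e) add up to a net charge of zero, a drifting exciton should feel no such sideways push--which is precisely the signature the researchers look for.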

One mystery that remains is the tendency of the excitons to dump a small amount of energy when they move. "We find that, as we go toward lower temperatures, energy dissipation does become smaller and smaller," Eisenstein says. "But we expected no energy dissipation at all.

"Therefore, this is not really an ideal superfluid--so far it is at best a bad one."

The other author of the paper is Allan MacDonald, who holds the Sid W. Richardson Foundation Regents Chair in physics at UT Austin and is a specialist in theoretical condensed matter physics.

Writer: 
Robert Tindol

Caltech computer scientists embed computation in a DNA crystal to create microscopic patterns

PASADENA, Calif.--In a demonstration that holds promise for future advances in nanotechnology, California Institute of Technology computer scientists have succeeded in building a DNA crystal that computes as it grows. As the computation proceeds, it creates a triangular fractal pattern in the DNA crystal.

This is the first time that a computation has been embedded in the growth of any crystal, and the first time that computation has been used to create a complex microscopic pattern. It is also, the researchers say, one step toward nanoscientists' dream of mastering construction techniques at the molecular level.

Reporting in the December issue of the journal Public Library of Science (PLoS) Biology, Caltech assistant professor Erik Winfree and his colleagues show that DNA "tiles" can be programmed to assemble themselves into a crystal bearing a pattern of progressively smaller "triangles within triangles," known as a Sierpinski triangle. This fractal pattern is more complex than patterns found in natural crystals because it never repeats. Natural crystals, by contrast, all bear repeating patterns like those commonly found in the tiling of a bathroom floor. And, because each DNA tile is a tiny knot of DNA with just 150 base pairs (an entire human genome has some 3 billion), the resulting Sierpinski triangles are microscopic. The Winfree team reports growing micron-size DNA crystals (about a hundredth the width of a human hair) that contain numerous Sierpinski triangles.

A key feature of the Caltech team's approach is that the DNA tiles assemble into a crystal spontaneously. Comprising a knot of four DNA strands, each DNA tile has four loose ends known as "sticky ends." These sticky ends are what bind one DNA tile to another. A sticky end with a particular DNA sequence can be thought of as a special type of glue, one that binds only to a sticky end with a complementary DNA sequence--a special "anti-glue." For their experiments, the authors simply mixed the DNA tiles into salt water and let the sticky ends do the work, self-assembling the tiles into Sierpinski triangles. In nanotechnology, this "hands-off" approach to manufacturing is a desirable property and a common theme.

The novel aspect of the research is the translation of an algorithm--the basic method underlying a computer program--into the process of crystal growth. A well-known algorithm for drawing a Sierpinski triangle starts with a sequence of 0s and 1s. It redraws the sequence over and over again, filling up successive rows on a piece of paper, each time performing binary addition on adjacent digits.

The result is a Sierpinski triangle built out of 0s and 1s. To embed this algorithm in crystal growth, the scientists represented written rows of binary "0s" and "1s" as rows of DNA tiles in the crystal--some tiles stood for 0, and others for 1. To emulate addition, the sticky ends were designed to ensure that whenever a free tile stuck to tiles already in the crystal, it represented the sum of the tiles it was sticking to.
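A minimal software version of that rule makes the connection concrete. The sketch below illustrates the abstract algorithm only--it says nothing about the DNA chemistry--by XOR-ing adjacent digits (addition modulo 2) row by row, which prints the nested-triangle pattern:

    def sierpinski(rows=16, width=33):
        """Print a Sierpinski triangle by repeatedly XOR-ing adjacent digits.

        Each cell of a new row is the sum modulo 2 of its two upper neighbors,
        the 'binary addition' rule described in the text.
        """
        row = [0] * width
        row[width // 2] = 1                      # start from a single 1
        for _ in range(rows):
            print("".join("#" if cell else "." for cell in row))
            row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]

    sierpinski()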

The process was not without error, however. Sometimes DNA tiles stuck in the wrong place, computing the wrong sum, and destroying the pattern. The largest perfect Sierpinski triangle that grew contained only about 200 DNA tiles. But it is the first time any such thing has been done and the researchers believe they can reduce errors in the future.

In fact, the work is the first experimental demonstration of a theoretical concept that Winfree has been developing since 1995--his proposal that any algorithm can be embedded in the growth of a crystal. This concept, according to Winfree's coauthor and Caltech senior research fellow Paul W. K. Rothemund, has inspired an entirely new research field, "algorithmic self-assembly," in which scientists study the implications of embedding computation in crystal growth.

"A growing group of researchers has proposed a series of ever more complicated computations and patterns for these crystals, but until now it was unclear that even the most basic of computations and patterns could be achieved experimentally," Rothemund says.

Whether larger, more complicated computations and patterns can be created depends on whether Winfree's team can reduce the errors. Whether the crystals will be useful in nanotechnology may depend on whether the patterns can be turned into electronic devices and circuits, a possibility being explored at other universities including Duke and Purdue.

Nanotechnology applications aside, the authors contend that the most important implication of their work may be a better understanding of how computation shapes the physical world around us. "If algorithmic concepts can be successfully adapted to the molecular context," the authors write, "the algorithm would join energy and entropy as essential concepts for understanding how physical processes create order."

Winfree is an assistant professor of computation and neural systems and computer science; Rothemund is a senior research fellow in computer science and computation and neural systems. The third author is Nick Papadakis, a former staff member in computer science.

 

Writer: 
Robert Tindol

Internet Speed Quadrupled by International Team During 2004 Bandwidth Challenge

PITTSBURGH, Pa.--For the second consecutive year, the "High Energy Physics" team of physicists, computer scientists, and network engineers has won the Supercomputing Bandwidth Challenge, this time with a sustained data transfer of 101 gigabits per second (Gbps) between Pittsburgh and Los Angeles. This is more than four times the previous record of 23.2 gigabits per second, which was set by the same team last year.

The team hopes this new demonstration will encourage scientists and engineers in many sectors of society to develop and deploy a new generation of revolutionary Internet applications.

The international team is led by the California Institute of Technology and includes as partners the Stanford Linear Accelerator Center (SLAC), Fermilab, CERN, the University of Florida, the University of Manchester, University College London (UCL) and the organization UKLight, Rio de Janeiro State University (UERJ), the state universities of São Paulo (USP and UNESP), Kyungpook National University, and the Korea Institute of Science and Technology Information (KISTI). The record data transfer speed achieved in the group's "High-Speed TeraByte Transfers for Physics" demonstration is equivalent to downloading three full DVD movies per second, or transmitting all of the content of the Library of Congress in 15 minutes, and it corresponds to approximately 5 percent of the rate at which all forms of digital content were produced on Earth during the test.
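The DVD comparison follows from a simple unit conversion (approximate figures, assuming a 4.7-gigabyte single-layer DVD):

    rate_gbps = 101.0     # sustained transfer rate, from the text
    dvd_gb = 4.7          # assumed capacity of a single-layer DVD, in gigabytes

    rate_gB_per_s = rate_gbps / 8.0      # gigabits per second -> gigabytes per second
    print(f"{rate_gB_per_s:.1f} GB/s, or about {rate_gB_per_s / dvd_gb:.1f} DVDs per second")
    # roughly 12.6 GB/s, i.e. between two and three full DVDs every second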

The new mark, according to Bandwidth Challenge (BWC) sponsor Wesley Kaplow, vice president of engineering and operations for Qwest Government Services, exceeded the sum of all the throughput marks submitted in the present and previous years by other BWC entrants. The extraordinary bandwidth was made possible in part through the use of the FAST TCP protocol developed by Professor Steven Low and his Caltech Netlab team. It was achieved through the use of seven 10 Gbps links to Cisco 7600 and 6500 series switch-routers provided by Cisco Systems at the Caltech Center for Advanced Computing Research (CACR) booth, and three 10 Gbps links to the SLAC/Fermilab booth. The external network connections included four dedicated wavelengths of National LambdaRail between the SC2004 show floor in Pittsburgh and Los Angeles (two waves), Chicago, and Jacksonville, as well as three 10 Gbps connections across the SCinet network infrastructure at SC2004, with Qwest-provided wavelengths to the Internet2 Abilene Network (two 10 Gbps links), the TeraGrid (three 10 Gbps links), and ESnet. Ten-gigabit Ethernet (10 GbE) interfaces provided by S2io were used on servers running FAST at the Caltech/CACR booth, and interfaces from Chelsio equipped with TCP offload engines (TOE) running standard TCP were used at the SLAC/FNAL booth. During the test, the network links over both the Abilene and National LambdaRail networks were shown to operate successfully at up to 99 percent of full capacity.
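FAST TCP is a delay-based congestion-control scheme: rather than waiting for packet loss, each sender compares the round-trip time it currently observes with the smallest round-trip time it has seen, and adjusts its congestion window accordingly. The sketch below is a simplified rendering of the published window-update rule, with invented parameter values; it is not the production implementation used in the demonstration:

    def fast_tcp_window_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
        """One FAST TCP-style congestion-window update (simplified).

        w        : current congestion window, in packets
        base_rtt : smallest round-trip time observed (propagation-delay estimate)
        rtt      : currently observed round-trip time
        alpha    : target number of packets queued in the network path
        gamma    : smoothing factor in (0, 1]

        With no queueing (rtt == base_rtt) the window grows; as queueing delay
        builds up, the window settles near the point where about `alpha`
        packets are buffered along the path.
        """
        target = (base_rtt / rtt) * w + alpha
        return min(2 * w, (1 - gamma) * w + gamma * target)

    # Illustrative iteration with invented round-trip times (milliseconds)
    w, base_rtt = 100.0, 50.0
    for rtt in (50.0, 55.0, 60.0, 65.0, 65.0, 65.0):
        w = fast_tcp_window_update(w, base_rtt, rtt)
        print(f"rtt={rtt:.0f} ms -> window={w:.0f} packets")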

The Bandwidth Challenge allowed the scientists and engineers involved to preview the globally distributed grid system that is now being developed in the US and Europe in preparation for the next generation of high-energy physics experiments at CERN's Large Hadron Collider (LHC), scheduled to begin operation in 2007. Physicists at the LHC will search for the Higgs particles thought to be responsible for mass in the universe and for supersymmetry and other fundamentally new phenomena bearing on the nature of matter and spacetime, in an energy range made accessible by the LHC for the first time.

The largest physics collaborations at the LHC, the Compact Muon Solenoid (CMS) and A Toroidal LHC Apparatus (ATLAS), each encompass more than 2,000 physicists and engineers from 160 universities and laboratories spread around the globe. In order to fully exploit the potential for scientific discoveries, many petabytes of data will have to be processed, distributed, and analyzed. The key to discovery is the analysis phase, in which individual physicists and small groups repeatedly access, and sometimes extract and transport, terabyte-scale data samples on demand, in order to optimally select the rare "signals" of new physics from potentially overwhelming "backgrounds" of already-understood particle interactions. This data will be drawn from major facilities at CERN in Switzerland, at Fermilab and the Brookhaven lab in the U.S., and at other laboratories and computing centers around the world, where the accumulated stored data will amount to many tens of petabytes in the early years of LHC operation, rising to the exabyte range within the coming decade.

Future optical networks, incorporating multiple 10 Gbps links, are the foundation of the grid system that will drive the scientific discoveries. A "hybrid" network integrating both traditional switching and routing of packets, and dynamically constructed optical paths to support the largest data flows, is a central part of the near-term future vision that the scientific community has adopted to meet the challenges of data intensive science in many fields. By demonstrating that many 10 Gbps wavelengths can be used efficiently over continental and transoceanic distances (often in both directions simultaneously), the high-energy physics team showed that this vision of a worldwide dynamic grid supporting many-terabyte and larger data transactions is practical.

While the SC2004 100+ Gbps demonstration required a major effort by the teams involved and their sponsors, in partnership with major research and education network organizations in the United States, Europe, Latin America, and Asia Pacific, it is expected that networking on this scale in support of the largest science projects (such as the LHC) will be commonplace within the next three to five years.

The network has been deployed through exceptional support by Cisco Systems, Hewlett Packard, Newisys, S2io, Chelsio, Sun Microsystems, and Boston Ltd., as well as the staffs of National LambdaRail, Qwest, the Internet2 Abilene Network, the Consortium for Education Network Initiatives in California (CENIC), ESnet, the TeraGrid, the AmericasPATH network (AMPATH), the National Education and Research Network of Brazil (RNP) and the GIGA project, as well as ANSP/FAPESP in Brazil, KAIST in Korea, UKERNA in the UK, and the Starlight international peering point in Chicago. The international connections included the LHCNet OC-192 link between Chicago and CERN at Geneva, the CHEPREO OC-48 link between Abilene (Atlanta), Florida International University in Miami, and São Paulo, as well as an OC-12 link between Rio de Janeiro, Madrid, Géant, and Abilene (New York). The APII-TransPAC links to Korea also were used with good occupancy. The throughputs to and from Latin America and Korea represented a significant step up in scale, which the team members hope will be the beginning of a trend toward the widespread use of 10 Gbps-scale network links on DWDM optical networks interlinking different world regions in support of science by the time the LHC begins operation in 2007. The demonstration and the developments leading up to it were made possible through the strong support of the U.S. Department of Energy and the National Science Foundation, in cooperation with the agencies of the international partners.

As part of the demonstration, a distributed analysis of simulated LHC physics data was done using the Grid-enabled Analysis Environment (GAE), developed at Caltech for the LHC and many other major particle physics experiments, as part of the Particle Physics Data Grid, the Grid Physics Network and the International Virtual Data Grid Laboratory (GriPhyN/iVDGL), and Open Science Grid projects. This involved the transfer of data to CERN, Florida, Fermilab, Caltech, UC San Diego, and Brazil for processing by clusters of computers, with the results finally aggregated back at the show floor to create a dynamic visual display of quantities of interest to the physicists. In another part of the demonstration, file servers at the SLAC/FNAL booth and in London and Manchester were used for disk-to-disk transfers from Pittsburgh to England. This gave physicists valuable experience in the use of large, distributed datasets and of the computational resources connected by fast networks, on the scale required at the start of the LHC physics program.

The team used the MonALISA (MONitoring Agents using a Large Integrated Services Architecture) system developed at Caltech to monitor and display the real-time data for all the network links used in the demonstration. MonALISA (http://monalisa.caltech.edu) is a highly scalable set of autonomous, self-describing, agent-based subsystems which are able to collaborate and cooperate in performing a wide range of monitoring tasks for networks and grid systems as well as the scientific applications themselves. Detailed results for the network traffic on all the links used are available at http://boson.cacr.caltech.edu:8888/.

Multi-gigabit-per-second end-to-end network performance will lead to new models for how research and business are conducted. Scientists will be empowered to form virtual organizations on a planetary scale, sharing their collective computing and data resources in a flexible way. In particular, this is vital for projects on the frontiers of science and engineering, in "data intensive" fields such as particle physics, astronomy, bioinformatics, global climate modeling, geosciences, fusion, and neutron science.

Harvey Newman, professor of physics at Caltech and head of the team, said, "This is a breakthrough for the development of global networks and grids, as well as inter-regional cooperation in science projects at the high-energy frontier. We demonstrated that multiple links of various bandwidths, up to the 10 gigabit-per-second range, can be used effectively over long distances.

"This is a common theme that will drive many fields of data-intensive science, where the network needs are foreseen to rise from tens of gigabits per second to the terabit-per-second range within the next five to 10 years," Newman continued. "In a broader sense, this demonstration paves the way for more flexible, efficient sharing of data and collaborative work by scientists in many countries, which could be a key factor enabling the next round of physics discoveries at the high energy frontier. There are also profound implications for how we could integrate information sharing and on-demand audiovisual collaboration in our daily lives, with a scale and quality previously unimaginable."

Les Cottrell, assistant director of SLAC's computer services, said: "The smooth interworking of 10 GbE interfaces from multiple vendors; the ability to successfully fill 10 gigabit-per-second paths on local area networks (LANs), across the country, and intercontinentally; the ability to transmit greater than 10 Gbits/second from a single host; and the ability of TCP offload engines (TOE) to reduce CPU utilization all illustrate the emerging maturity of the 10 Gigabit/second Ethernet market. The current limitations are not in the network but rather in the servers at the ends of the links, and their buses."

Further technical information about the demonstration may be found at http://ultralight.caltech.edu/sc2004 and http://www-iepm.slac.stanford.edu/monitoring/bulk/sc2004/hiperf.html. A longer version of this release, including information on the participating organizations, may be found at http://ultralight.caltech.edu/sc2004/BandwidthRecord.

 
