New Caltech Center Receives $8 Million for Research on New Types of Optical Devices

PASADENA, Calif.—The Defense Advanced Research Projects Agency (DARPA) has awarded an $8 million, four-year, basic-research program grant to the California Institute of Technology to initiate research in photonics technologies. The technical focus of the effort will be on optofluidics, an exciting new research area that uses microfluidic devices to control optical processes and is expected to yield a new generation of small-scale, highly adaptable, and innovative optical devices.

To conduct the research, Caltech is establishing a new center called the Center for Optofluidic Integration. The center will spearhead efforts directed toward a new class of adaptive optical devices for numerous applications in sensing, switching, and communications.

According to founding director Demetri Psaltis, the DARPA-funded center is ideally located at Caltech because the Institute has a longstanding commitment to interdisciplinary research, faculty interaction, and the creation of new technologies and avenues of knowledge. The center will also draw on the efforts of researchers at other institutions, including Harvard University and UC San Diego.

"The basic idea of the center is to build optical devices for imaging, fiber optics, communications, and other applications, and to transcend the limitations of optical devices made out of traditional materials like glass," explains Psaltis, who is the Myers Professor of Electrical Engineering and an expert in advanced optical devices. "A glass lens, for example, is relatively unchangeable optically. Our idea is to use fluidics as a means of modifying optics."

This can be accomplished, Psaltis says, by taking advantage of recent advances at Caltech, Harvard, and UC San Diego in microfluidics, soft lithography, and nanophotonics. The fusion of these three technologies will be the key to developing components that use nanometer-sized fluidic pathways to mix and pump liquids into and out of the optical path.

Among other advantages, this approach allows for the construction of devices with optical properties that can be altered very quickly. The potential products of this line of research include adaptive graded index optics, dye lasers on silicon chips, nanostructured optical memories, dynamic nonlinear optical devices, reconfigurable optical switches, and ultrasensitive molecular detectors. Optofluidics is expected to have a broad impact on areas such as telecommunications, biophotonics and biomedical engineering, and robot and machine vision.

The new center will function as a catalyst to facilitate the technology fusion process. One of the more noticeable effects of the center on the Caltech campus will be the creation of a microfluidics foundry to create optofluidic technologies. In the foundry, researchers will be able to easily design and rapidly create the microfluidic layers that will control the flow of liquids to these new devices.

According to Psaltis, the initial members of the center's research team all offer significant expertise in areas critical to the design and fabrication of highly integrated optofluidic devices. Others at Caltech include Stephen Quake, the Everhart Professor of Applied Physics and Physics, who has invented a number of microfluidic devices for biomedical applications; Kerry Vahala, the Jenkins Professor of Information Science and Technology and a professor of applied physics, who is the inventor of optical devices such as high-quality optical microcavities; Axel Scherer, the Neches Professor of Electrical Engineering, Applied Physics, and Physics, who is best known for his work on photonic band gap devices, and who collaborated with Psaltis on the successful development of the first photonic crystal laser tunable by fluid insertion; Changhuei Yang, an assistant professor of electrical engineering and an expert in biophotonics; and Oskar Painter, an assistant professor of applied physics with a background in photonic crystal lasers. Researchers at other institutions include George Whitesides, the Woodford L. and Ann A. Flowers University Professor at Harvard, who is a pioneer in soft lithography; Federico Capasso, the Robert L. Wallace Professor of Applied Physics at Harvard, who developed quantum cascade lasers; and Shaya Fainman, a professor of electrical and computer engineering at UC San Diego, whose expertise is in nanophotonics.

Writer: 
Robert Tindol

New Class of Reagents Developed by Caltech Chemical Biologists for In Vivo Protein Tracking

PASADENA, Calif.--One of the big problems in biology is keeping track of the proteins a cell makes, without having to kill the cell. Now, researchers from the California Institute of Technology have developed a general approach that measures protein production in living cells.

Reporting in the July 26 issue of the journal Chemistry and Biology, Caltech chemistry professor Richard Roberts and his collaborators describe their new method for examining "protein expression in vivo that does not require transfection, radiolabeling, or the prior choice of a candidate gene." According to Roberts, this work should have great impact on both cell biology and the new field of proteomics, which is the study of all the proteins that act in living systems.

"This work is a result of chemical biology—chemists, and biologists working together to gain new insights into a huge variety of applications, including cancer research and drug discovery," says Roberts.

"Generally, there is a lack of methods to determine if proteins are made in response to some cellular stimuli and what those specific proteins are," Roberts says. "These are two absolutely critical questions, because the behavior of a living cell is due to the cast of protein characters that the cell makes."

Facing this problem, the Roberts team tried to envision new methods that would enable them to decipher both how much and what particular protein a cell chooses to make at any given time. They devised a plan to trick the normal cellular machinery into labeling each newly made protein with a fluorescent tag.

The result is that cells actively making protein glow brightly on a microscope slide, much like a luminescent Frisbee on a dark summer night. Importantly, these tools can also be used to determine which particular protein is being made, in much the same way that a bar code identifies items at a supermarket checkout stand.

To demonstrate this method, the team used mouse white blood cells that are very similar to cells in the human immune system. These cells could be tagged to glow various colors, and the tagged proteins later separated for identification.

Over the next decade, scientists hope to better understand the 30,000 to 40,000 different proteins inside human cells. The authors say they are hopeful that this new approach will provide critical information for achieving that goal.

The title of the paper is "A General Approach to Detect Protein Expression In Vivo Using Fluorescent Puromycin Conjugates." For more information, contact Heidi Hardman at hhardman@cell.com.

Writer: 
Robert Tindol

San Andreas Earthquakes Have Almost Always Been Big Ones, Paleoseismologists Discover

PASADENA, Calif.—A common-sense notion among many Californians is that frequent small earthquakes allow a fault to slowly relieve accumulating strain, thereby making large earthquakes less likely. New research suggests that this is not the case for a long stretch of the San Andreas fault in Southern California.

In a study appearing in the current issue of the journal Geology, researchers report that about 95 percent of the slippage at a site on the San Andreas fault northwest of Los Angeles occurs in big earthquakes. By literally digging into the fault to look for information about earthquakes of the past couple of millennia, the researchers have found that most of the motion along this stretch of the San Andreas fault occurs during rare but large earthquakes.

"So much for any notion that the section of the San Andreas nearest Los Angeles might relieve its stored strains by a flurry of hundreds of small earthquakes!" said Kerry Sieh, a geology professor at the California Institute of Technology and one of the authors of the paper.

Sieh pioneered the field of paleoseismology years ago as a means of understanding past large earthquakes. His former student, Jing Liu, now a postdoctoral fellow in Paris, is the senior author of the paper.

In this particular study, Liu, Sieh, and their colleagues cut trenches parallel and perpendicular to the San Andreas fault at a site 200 kilometers (120 miles) northwest of Los Angeles, between Bakersfield and the coast. The trenches allowed them to follow the subsurface paths of small gullies buried by sediment over many hundreds of years. They found that the fault had offset the youngest channel by nearly 8 meters, and related this to the great (M 7.9) earthquake of 1857. Older gullies were offset progressively more by the fault, up to 36 meters. By subtracting each younger offset from the next older one, the geologists were able to recover the amount of slip in each of the past six earthquakes.
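
The differencing step is simple arithmetic, and a minimal Python sketch of it follows. The cumulative offsets below are hypothetical placeholder values chosen only to illustrate the calculation; only the roughly 8-meter youngest offset and 36-meter oldest offset are figures quoted in this article.

```python
# A minimal sketch of the offset-differencing arithmetic described above.
# The cumulative offsets (youngest to oldest) are hypothetical placeholders;
# only the ~8 m youngest and ~36 m oldest figures appear in the article.
cumulative_offsets_m = [8.0, 15.5, 16.9, 22.1, 29.9, 36.0]

# Slip in each earthquake = its cumulative offset minus the next-younger one.
per_event_slip_m = [cumulative_offsets_m[0]] + [
    older - younger
    for younger, older in zip(cumulative_offsets_m, cumulative_offsets_m[1:])
]

print(per_event_slip_m)  # approximately [8.0, 7.5, 1.4, 5.2, 7.8, 6.1]
```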

Of the six offsets discovered in the excavations, three and perhaps four were offsets of 7.5 to 8 meters, similar in size to the offset during the great earthquake of 1857. The third and fourth events, however, were slips of just 1.4 and 5.2 meters. Offsets of several meters are common when the rupture length is very long and the earthquake is very large. For example, the earthquake of 1857 had a rupture length of about 360 kilometers (225 miles), extending from near Parkfield to Cajon Pass. So, the five events that created offsets measuring between 5.2 and 8 meters likely represent earthquakes that had very long ruptures and magnitudes ranging from 7.5 to 8. Taken together, these five major ruptures of this portion of the San Andreas fault account for 95 percent of all the slippage that occurred there over the past thousand years or so.

The practical significance of the study is that earthquakes along the San Andreas, though infrequent, tend to be very large. Years ago, paleoseismic research showed that along the section of the fault nearest Los Angeles the average period between large earthquakes is just 130 years. Ominously, 147 years have already passed since the latest large rupture, in 1857.

The other authors of the paper are Charles Rubin, of the department of geological sciences at Central Washington University in Ellensburg, and Yann Klinger, of the Institut de Physique du Globe de Paris, France. Additional information about the site, including a virtual field trip, can be found at http://www.scec.org/wallacecreek/.

Writer: 
Robert Tindol

Neuroscientists Demonstrate New Way to Control Prosthetic Device with Brain Signals

PASADENA, Calif.—Another milestone has been achieved in the quest to create prosthetic devices operated by brain activity. In the July 9 issue of the journal Science, California Institute of Technology neuroscientists Sam Musallam, Brian Corneil, Bradley Greger, Hans Scherberger, and Richard Andersen report on the Andersen lab's success in getting monkeys to move the cursor on a computer screen by merely thinking about a goal they would like to achieve, and assigning a value to the goal.

The research holds significant promise for neural prosthetic devices, Andersen says, because the "goal signals" from the brain will permit paralyzed patients to operate computers, robots, motorized wheelchairs—and perhaps someday even automobiles. The "value signals" complement the goal signals by allowing the paralyzed patients' preferences and motivations to be monitored continuously.

According to Musallam, the work is exciting "because it shows that a variety of thoughts can be recorded and used to control an interface between the brain and a machine."

The Andersen lab's new approach departs from earlier work on the neural control of prosthetic devices, most of which relied on signals from the motor cortex, the part of the brain used to control the limbs. Andersen says the new study demonstrates that higher-level signals, also referred to as cognitive signals, emanating from the posterior parietal cortex and the high-level premotor cortex (both involved in higher brain functions related to movement planning), can be decoded for control of prosthetic devices.

The study involved three monkeys that were each trained to operate a computer cursor by merely "thinking about it," Andersen explains. "We have him think about positioning a cursor at a particular goal location on a computer screen, and then decode his thoughts. He thinks about reaching there, but doesn't actually reach, and if he thinks about it accurately, he's rewarded."

Combined with the goal task, the monkey is also told what reward to expect for correctly performing the task. Examples of variation in the reward are the type of juice, the size of the reward, and how often it can be given, Andersen says. The researchers are able to predict what each monkey expects to get if he thinks about the task in the correct way. The monkey's expectation of the value of the reward provides a signal that can be employed in the control of neural prosthetics.

This type of signal processing may have great value in the operation of prosthetic devices because, once the patient's goals are decoded, then the devices' computational system can perform the lower-level calculations needed to run the devices. In other words, a "smart robot" that was provided a goal signal from the brain of a patient could use this signal to trigger the calculation of trajectory signals for movement to be accomplished.

Since the brain signals are high-level and abstract, they are versatile and can be used to operate a number of devices. As for the value signals, Andersen says these might be useful in the continuous monitoring of the patients to know their preferences and moods much more effectively than currently possible.

"These signals could also be rapidly adjusted by changing parameters of the task to expedite the learning that patients must do in order to use an external device," Andersen says. "The result suggests that a large variety of cognitive signals could be interpreted, which could lead, for instance, to voice devices that operate by the patients' merely thinking about the words they want to speak."

Andersen is the Boswell Professor of Neuroscience at Caltech. Musallam and Greger are both postdoctoral fellows in biology at Caltech; Corneil is a former researcher in Andersen's lab who is now at the University of Western Ontario; and Scherberger, a former Caltech researcher, is now at the Institute of Neuroinformatics in Zurich, Switzerland.

Writer: 
Robert Tindol

"Minis" Have Mega Impact in the Brain

Embargoed: Not for Release Until 11:00 a.m. PDT Thursday, 24 June, 2004

PASADENA, Calif. — The brain is a maddeningly complex organ for scientists to understand. No assumption can remain unchallenged, no given taken as a given.

Take "minis" for example. That is, miniature excitatory synaptic events. The location where neurons communicate with each other is the synapse, the tiny gap between the ends of nerve fibers. That's where one nerve cell signals another by secreting special chemicals called neurotransmitters, which jump the gap. The synapse, and its ability to strengthen and wane, is thought to be at the heart of learning and memory. Minis, mere single, tiny packets of neurotransmitters, were always thought to have no biological significance, nothing more than "noise," or background chatter that played no role in the formation of a memory. Minis, it was thought, could be safely ignored.

Maybe not, says Mike Sutton, a postdoctoral scholar in the lab of Erin Schuman, an associate professor of biology at the California Institute of Technology, and an associate investigator for the Howard Hughes Medical Institute. Sutton, Schuman, and colleagues Nicholas Wall and Girish Aakalu report that on the contrary, minis may play an important role in regulating protein synthesis in the brain. Further, their work suggests the brain is a much more sensitive organ than originally perceived, sensitive to the tiniest of chemical signals. Their report appears in the June 25th issue of the journal Science.

Originally, Sutton et al. weren't looking at minis at all, but at protein synthesis, the process through which cells assemble amino acids into proteins according to the genetic information contained within that cell's DNA. Proteins are the body's workhorses, and are required for the structure, function, and regulation of cells, tissues, and organs. Every protein has a unique function.

A neuron has treelike branches that extend from its cell body. Numerous branches called dendrites contain many synapses that receive signals, while a single branch called an axon passes the signal on to another cell.

The original rationale behind the experiment was to examine how changes in synaptic activity regulate protein synthesis in a dendrite, says Sutton. His first experiment was a starting point: remove all types of activity from a cell, then add it back incrementally and observe how this affected protein synthesis in dendrites. "So we were going on the assumption that the spontaneous glutamate release--the minis--would have no impact, but we wanted to formally rule this out," he says.

Using several different drugs, Sutton first blocked any so-called action potentials, an electrical signal in the sending cell that causes the release of the neurotransmitter glutamate. Normally, a cell receives hundreds of signals each second. When action potentials are blocked, it receives only minis that arrive at about one signal each second. Next he blocked both the action potential and the release of any minis. "To our surprise, the presence or absence of minis had a very large impact on protein synthesis in dendrites," he says. It turned out that the minis inhibit protein synthesis, which increased when the minis were blocked. Further, says Sutton, "it appears the changes in synaptic activity that are needed to alter protein synthesis in dendrites are extremely small--a single package of glutamate is sufficient."

Sutton notes that it is widely accepted that synaptic transmission involves the release of glutamate packets. That is, an individual packet (called a vesicle) represents the elemental unit of synaptic communication. "This is known as the 'quantal' nature of synaptic transmission," he says, "and each packet is referred to as a quantum." The study demonstrates, then, the surprising point that protein synthesis in dendrites is extremely sensitive to changes in synaptic activity even when those changes represent a single neurotransmitter quantum.

"Because it's so sensitive," says Sutton, "there is the possibility that minis provide information about the characteristics of a given synapse (for example, is the signal big or small?), and that the postsynaptic or receiving cell might use this information to change the composition of that synapse. And it does this by changing the complement of proteins that are locally synthesized."

The ability to rapidly make more or fewer proteins at a synaptic site allows for quick changes in synaptic strength. Ultimately, he says, this ability may underlie long-term memory storage.

"It's amazing to us that these signals, long regarded by many as synaptic 'noise,' have such a dramatic impact on protein synthesis," says Schuman. "We're excited by the possibility that minis can change the local synaptic landscape. Figuring out the nature of the intracellular 'sensor' for these tiny events is now the big question."

Writer: 
MW

Unexpected Changes in Earth's Climate Observed on the Dark Side of the Moon

PASADENA, Calif.—Scientists who monitor Earth's reflectance by measuring the moon's "earthshine" have observed unexpectedly large climate fluctuations during the past two decades. By combining eight years of earthshine data with nearly twenty years of partially overlapping satellite cloud data, they have found a gradual decline in Earth's reflectance that became sharper in the last part of the 1990s, perhaps associated with the accelerated global warming in recent years. Surprisingly, the declining reflectance reversed completely in the past three years. Such changes, which are not understood, seem to be a natural variability of Earth's clouds.

The May 28, 2004, issue of the journal Science examines the phenomenon in an article, "Changes in Earth's Reflectance Over the Past Two Decades," written by Enric Palle, Philip R. Goode, Pilar Montañes Rodríguez, and Steven E. Koonin. Goode is distinguished professor of physics at the New Jersey Institute of Technology (NJIT), Palle and Montañes Rodríguez are postdoctoral associates at that institution, and Koonin is professor of theoretical physics at the California Institute of Technology. The observations were conducted at the Big Bear Solar Observatory (BBSO) in California, which NJIT has operated since 1997 with Goode as its director. The National Aeronautics and Space Administration funded these observations.

The team has revived and modernized an old method of determining Earth's reflectance, or albedo, by observing earthshine, sunlight reflected by the Earth that can be seen as a ghostly glow of the moon's "dark side"—or the portion of the lunar disk not lit by the sun. As Koonin realized some 14 years ago, such observations can be a powerful tool for long-term climate monitoring. "The cloudier the Earth, the brighter the earthshine, and changing cloud cover is an important element of changing climate," he said.

Precision earthshine observations to determine global reflectivity have been under way at BBSO since 1994, with regular observations commencing in late 1997.

"Using a phenomenon first explained by Leonardo DaVinci, we can precisely measure global climate change and find a surprising story of clouds. Our method has the advantage of being very precise because the bright lunar crescent serves as a standard against which to monitor earthshine, and light reflected by large portions of Earth can be observed simultaneously," said Goode. "It is also inexpensive, requiring only a small telescope and a relatively simple electronic detector."

By using a combination of earthshine observations and satellite data on cloud cover, the earthshine team has determined the following:

- Earth's average albedo is not constant from one year to the next; it also changes over decadal timescales. The computer models currently used to study the climate system do not show such large decadal-scale variability of the albedo.

- The annual average albedo declined very gradually from 1985 to 1995, and then declined sharply in 1995 and 1996. These observed declines are broadly consistent with previously known satellite measures of cloud amount.

- The low albedo during 1997-2001 increased solar heating of the globe at a rate more than twice that expected from a doubling of atmospheric carbon dioxide. This "dimming" of Earth, as it would be seen from space, is perhaps connected with the recent accelerated increase in mean global surface temperatures.

- 2001-2003 saw a reversal of the albedo to pre-1995 values; this "brightening" of the Earth is most likely attributable to the effect of increased cloud cover and thickness.

These large variations, which are comparable to those in Earth's infrared (heat) radiation observed in the tropics by satellites, exert a large influence on Earth's radiation budget.
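
For a sense of the scale behind the comparison with doubled carbon dioxide quoted above: a drop in Earth's albedo increases the globally averaged absorbed sunlight by roughly one quarter of the solar constant times the albedo change. The short sketch below works through that arithmetic with an illustrative, hypothetical albedo change; the roughly 3.7 watts per square meter figure for doubled carbon dioxide is a standard published estimate, not a number from this article.

```python
# Back-of-the-envelope scale of albedo-driven heating versus doubled CO2.
# A decrease in albedo of delta_albedo raises globally averaged absorbed
# sunlight by roughly (S / 4) * delta_albedo. The albedo change used here is
# an illustrative, hypothetical value, not a measurement from the paper.
SOLAR_CONSTANT = 1361.0      # W/m^2 at the top of the atmosphere
CO2_DOUBLING_FORCING = 3.7   # W/m^2, standard published estimate

delta_albedo = 0.025         # hypothetical drop in albedo (dimensionless)
extra_heating = (SOLAR_CONSTANT / 4.0) * delta_albedo

print(extra_heating)                         # ~8.5 W/m^2
print(extra_heating / CO2_DOUBLING_FORCING)  # ~2.3 times the CO2-doubling figure
```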

"Our results are only part of the story, since the Earth's surface temperature is determined by a balance between sunlight that warms the planet and heat radiated back into space, which cools the planet," said Palle. "This depends upon many factors in addition to albedo, such as the amount of greenhouse gases (water vapor, carbon dioxide, methane) present in the atmosphere. But these new data emphasize that clouds must be properly accounted for and illustrate that we still lack the detailed understanding of our climate system necessary to model future changes with confidence." Goode says the earthshine observations will continue for the next decade. "These will be important for monitoring ongoing changes in Earth's climate system. It will also be essential to correlate our results with satellite data as they become available, particularly for the most recent years, to form a consistent description of the changing albedo. Earthshine observations through an 11-year solar cycle will also be important to assessing hypothesized influences of solar activity on climate."

Montañes Rodríguez says that to carry out future observations, the team is working to establish a global network of observing stations. "These would allow continuous monitoring of the albedo during much of each lunar month and would also compensate for local weather conditions that sometimes prevent observations from a given site." BBSO observations are currently being supplemented with others from the Crimea in the Ukraine, and there will soon be observations from Yunnan in China, as well. A further improvement will be to fully automate the current manual observations. A prototype robotic telescope is being constructed and the team is seeking funds to construct, calibrate, and deploy a network of eight around the globe.

"Even as the scientific community acknowledges the likelihood of human impacts on climate, it must better document and understand climate changes," said Koonin. "Our ongoing earthshine measurements will be an important part of that process."

Writer: 
Robert Tindol

The Brain Can Make Errors in Reassembling the Color and Motion of Objects

PASADENA, Calif.—You're driving along in your car and catch a glimpse of a green SUV out of the corner of your eye. A few seconds later, you glance over, and to your surprise discover that the SUV is actually brown.

You may assume this is just your memory playing tricks on you, but new research from psychophysicists at the California Institute of Technology and the Helmholtz Institute in the Netherlands suggests that initial perceptions themselves can contain misassigned colors. This can happen in certain cases where the brain uses what it sees in the center of vision and then rearranges the colors in peripheral vision to match.

In an article appearing in this week's issue of the journal Nature, Caltech graduate student Daw-An Wu, Caltech professor of biology Shinsuke Shimojo, and Ryota Kanai of the Helmholtz Institute report that the color of an object can be misassigned even as observers are intently watching an ongoing event, because of the way the brain combines the perceptions of motion and color. Because different parts of the brain are responsible for dealing with motion and color perception, mistakes in "binding" can occur, in which the motion from one object is combined with the color of another object.

This is demonstrated when observers gaze steadily at a computer screen on which red and green dots are in upward and downward motion. In the center area of the screen, all the red dots are moving upward while all the green dots are moving downward.

Unknown to the observers, however, the researchers are able to control the motion of the red and green dots at the periphery of the screen. In other words, the red and green dots are moving in a certain direction in the center area of the screen, but their motion is partially or even wholly reversed on each side.

The observers show a significant tendency to mistake the motion of the red and green dots at the periphery. Even when the motion is completely reversed on the sides, the observers see the same motion all across the screen.

According to Wu, the lead author of the paper, the design of the experiment exploits the fact that different parts of the brain are responsible for processing different visual features, such as motion and color. Further, the experiment shows that the brain can be tricked into binding the information back together incorrectly.

"This illusion confirms the existence of the binding problem the brain faces in integrating basic visual features of objects, " says Wu. "Here, the information is reintegrated incorrectly because the information in the center, where our vision is strongest, vetoes contradicting (but correct) information in the periphery."

The title of the article is "Steady-State Misbinding of Color and Motion."

Writer: 
Robert Tindol

Physicists Successful in Trapping Ultracold Neutrons at Los Alamos National Laboratory

PASADENA, Calif.—Free neutrons are usually pretty speedy customers, buzzing along at a significant fraction of the speed of light. But physicists have created a new process to slow neutrons down to about 15 miles per hour—the pace of a world-class mile runner—which could lead to breakthroughs in understanding the physical universe at its most fundamental level.

According to Brad Filippone, a physics professor at the California Institute of Technology, he and a group of colleagues from Caltech and several other institutions recently succeeded in collecting record-breaking numbers of ultracold neutrons at the Los Alamos Neutron Science Center. The new technique resulted in about 140 neutrons per cubic centimeter, and the number could be five times higher with additional tweaking of the apparatus.

"Our principal interest is in making precision measurements of fundamental neutron properties," says Filippone, explaining that a neutron has a half-life of only 15 minutes. In other words, if a thousand neutrons are trapped, five hundred will have broken down after 15 minutes into a proton, electron, and antineutrino.

Neutrons normally exist in nature in a much more stable state within the nuclei of atoms, joining the positively charged protons to make up most of the atom's mass. Neutrons become quite unstable if they are stripped from the nucleus, but the very fact that they decay so quickly can make them useful for various experiments.

The traditional way physicists obtained slow neutrons was to let them emerge from a nuclear reactor and bounce around in material to shed energy. This procedure worked fine for slowing neutrons down to a few feet per second, but that's still pretty fast. The new technique at Los Alamos National Laboratory involves a second stage of slowdown that is impractical near a nuclear reactor but works well at a particle accelerator, where the event producing the neutrons is abrupt rather than ongoing. The process begins with smashing protons from the accelerator into a solid material like tungsten, which knocks neutrons out of the target's nuclei.

The neutrons are then slowed down as they bounce around in a nearby plastic material, and then some of them are slowed much further if they happen to enter a birthday-cake-sized block of solid deuterium (or "heavy hydrogen") that has been cooled down to a temperature a few degrees above absolute zero.

When the neutrons enter the crystal latticework of the deuterium block, they can lose virtually all their energy and emerge from the block at speeds so slow that they can no longer zip right through the walls of the apparatus. The trapped ultracold neutrons bounce along the nickel walls of the apparatus and eventually emerge, where they can be collected for use in a separate experiment.

According to Filippone, the extremely slow speeds of the neutrons are important for studying their decays at a minute level of detail. The fundamental theory of particle physics known as the Standard Model predicts a specific pattern in the neutron's decay, but if the ultracold neutron experiments were to reveal slightly different behavior, physicists would have evidence of a new type of physics, such as supersymmetry.

Future experiments could also exploit an inherent quantum limit that prevents ultracold neutrons from bouncing any lower than about 15 microns above a flat surface, or about a fifth the width of a human hair. With a cleverly designed experiment, Filippone says, this limit could lead to better knowledge of gravitational interactions at very small distances.

The next step for the experimenters is to return to Los Alamos in October, when they will use the ultracold neutrons to study the neutrons themselves. The research was supported by about $1 million in funding from Caltech and the National Science Foundation.

Writer: 
RT

Researchers Demonstrate Existence of Earthquake Supershear Phenomenon

PASADENA, Calif.--As if folks living in earthquake country didn't already have enough to worry about, scientists have now identified another rupture phenomenon that can occur during certain types of large earthquakes. The only question now is whether the phenomenon is good, bad, or neutral in terms of human impact.

Reporting in the March 19 issue of the journal Science, California Institute of Technology geophysics graduate student Kaiwen Xia, aeronautics and mechanical engineering professor Ares Rosakis, and geophysics professor Hiroo Kanamori have demonstrated for the first time that a very fast, spontaneously generated rupture known as "supershear" can take place on large strike-slip faults like the San Andreas. They base their claims on a laboratory experiment designed to simulate a fault rupture.

While calculations dating back to the 1970s have predicted that such supershear rupture phenomena may occur in earthquakes, seismologists have only recently begun to accept that supershear is real. The Caltech experiment is the first time that spontaneous supershear rupture has been conclusively identified in a controlled laboratory environment, demonstrating that supershear fault rupture is a very real possibility rather than a mere theoretical construct.

In the lab, the researchers forced two plates of a special polymer material together under pressure and then initiated an "earthquake" by inserting a tiny wire into the interface; the wire is turned into an expanding plasma by the sudden discharge of an electrical pulse. By means of high-speed photography and laser light, the researchers photographed the rupture and the stress waves as they propagated through the material.

The data shows that, under the right conditions, the rupture propagates much faster than the shear speed in the plates, producing a shock-wave pattern, something like the Mach cone of a jet fighter breaking the sound barrier.

The split-second photography also shows that such ruptures may travel at about twice the rate that a rupture normally propagates along an earthquake fault. However, the ruptures do not reach supershear speeds until they have propagated a certain distance from the point where they originated. Based on the experiments, a theoretical model was developed by the researchers to predict the length of travel before the transition to supershear.

In the case of a strike-slip fault like the San Andreas, the lab results indicate that the rupture needs to rip along for about 100 kilometers and the magnitude must be about 7.5 or so before the rupture becomes supershear. Large earthquakes along the San Andreas tend to be at least this large if not larger, typically involving rupture lengths of about 300 to 400 kilometers.

"Judging from the experimental result, it would not be surprising if supershear rupture propagation occurs for large earthquakes on the San Andreas fault," said Kanamori.

Similar high-speed ruptures propagating along bimaterial interfaces in engineering composite materials have been experimentally observed in the past (by Rosakis and his group, reporting in an August 1999 issue of Science). These ruptures took place under impact loading; only in the current experiment have they been initiated in an earthquake-like set-up.

According to Rosakis, an expert in crack propagation, the new results show promise for using engineering techniques to better understand the physics of earthquakes and their human impact.

According to Kanamori, the human impact of the finding is still debatable. The most damaging effect of a strike-slip earthquake is believed to be a pulse-like motion normal to the fault, produced by the combined effect of the rupture and the shear wave. The supershear rupture suppresses this pulse, which is good, but the persistent shock wave (Mach wave) emitted by the supershear rupture enhances the fault-parallel component of motion (the ground motion that runs in the same direction that the plates slip) and could amplify the destructive power of ground motion, which is bad.

The outstanding question about supershear at this point is which of these two effects dominates. "This is still being debated," says Kanamori. "We're not committed to one view or the other." Only further laboratory-level experimentation can answer this question conclusively.

Several seismologists believe that supershear was exhibited in some large earthquakes, including those that occurred in Tibet in 2001 and in Alaska in 2002. Both earthquakes were located in remote regions and had little, if any, human impact, but analysis of the evidence shows that the fault ruptures propagated much faster than would normally be expected, thus implying supershear.

Writer: 
Robert Tindol

Most Distant Object in Solar System Discovered; Could Be Part of Never-Before-Seen Oort Cloud

PASADENA, Calif.--A planetoid more than eight billion miles from Earth has been discovered by researchers led by a scientist at the California Institute of Technology. The new planetoid is more than three times farther from the sun than Pluto, making it by far the most distant body known to orbit the sun.

The planetoid is well beyond the recently discovered Kuiper belt and is likely the first detection of the long-hypothesized Oort cloud. With a size approximately three-quarters that of Pluto, it is very likely the largest object found in the solar system since the discovery of Pluto in 1930.

At this extreme distance from the sun, very little sunlight reaches the planetoid and the temperature never rises above a frigid 400 degrees below zero Fahrenheit, making it the coldest known location in the solar system. According to Mike Brown, Caltech associate professor of planetary astronomy and leader of the research team, "the sun appears so small from that distance that you could completely block it out with the head of a pin."

As cold as it is now, the planetoid is usually even colder. It approaches the sun this closely only briefly during the 10,500 years it takes to revolve around the sun. At its most distant, it is 84 billion miles from the sun (900 times Earth's distance from the sun), and the temperature plummets to just 20 degrees above absolute zero.
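
A back-of-the-envelope check shows that the quoted figures hang together. The minimal Python sketch below applies Kepler's third law (orbital period in years equals the 3/2 power of the semi-major axis in astronomical units) to the article's approximate distances, treating the current roughly eight-billion-mile distance as the perihelion; the mile-to-AU conversion is added here for illustration and is not part of the article.

```python
# Back-of-the-envelope check of the quoted ~10,500-year orbit using Kepler's
# third law: period (years) = semi-major axis (AU) ** 1.5. The distances are
# the article's approximate figures; the mile-to-AU conversion is added here.
MILES_PER_AU = 93e6                    # ~93 million miles per astronomical unit

perihelion_au = 8e9 / MILES_PER_AU     # ~86 AU, roughly its current distance
aphelion_au = 84e9 / MILES_PER_AU      # ~900 AU, its most distant point
semi_major_axis_au = (perihelion_au + aphelion_au) / 2.0

period_years = semi_major_axis_au ** 1.5
print(round(period_years))             # ~11,000 years, near the quoted 10,500
```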

The discoverers--Brown and his colleagues Chad Trujillo of the Gemini Observatory and David Rabinowitz of Yale University--have proposed that the frigid planetoid be named "Sedna," after the Inuit goddess who created the sea creatures of the Arctic. Sedna is thought to live in an icy cave at the bottom of the ocean--an appropriate spot for the namesake of the coldest body known in the solar system.

The researchers found the planetoid on the night of November 14, 2003, using the 48-inch Samuel Oschin Telescope at Caltech's Palomar Observatory east of San Diego. Within days, the new planetoid was being observed on telescopes in Chile, Spain, Arizona, and Hawaii; and soon after, NASA's new Spitzer Space Telescope was trained on the distant object.

The Spitzer images indicate that the planetoid is no more than 1,700 kilometers in diameter, making it smaller than Pluto. But Brown, using a combination of all of the data, estimates that the size is likely about halfway between that of Pluto and that of Quaoar, the planetoid discovered by the same team in 2002 that was previously the largest known body beyond Pluto.

The extremely elliptical orbit of Sedna is unlike anything previously seen by astronomers, but it resembles in key ways the orbits of objects in a cloud surrounding the sun predicted 54 years ago by Dutch astronomer Jan Oort to explain the existence of certain comets. This hypothetical "Oort cloud" extends halfway to the nearest star and is the repository of small icy bodies that occasionally get pulled in toward the sun and become the comets seen from Earth.

However, Sedna is much closer than expected for the Oort cloud. The Oort cloud has been predicted to begin at a distance 10 times greater even than that of Sedna. Brown believes that this "inner Oort cloud" where Sedna resides was formed by the gravitational pull of a rogue star that came close to the sun early in the history of the solar system. Brown explains that "the star would have been close enough to be brighter than the full moon and it would have been visible in the daytime sky for 20,000 years." Worse, it would have dislodged comets further out in the Oort cloud, leading to an intense comet shower, which would have wiped out any life on Earth that existed at the time.

There is still more to be learned about this newest known member of the solar system. Rabinowitz says that he has indirect evidence that there may be a moon following the planetoid on its distant travels--a possibility that is best checked with the Hubble Space Telescope--and he notes that Sedna is redder than anything known in the solar system with the exception of Mars, but no one can say why. Trujillo admits, "We still don't understand what is on the surface of this body. It is nothing like what we would have predicted or what we can currently explain."

But the astronomers are not yet worried. They can continue their studies as Sedna gets closer and brighter for the next 72 years before it begins its 10,500-year trip out to the far reaches of the solar system and back again. Brown notes, "The last time Sedna was this close to the sun, Earth was just coming out of the last ice age; the next time it comes back, the world might again be a completely different place."

Writer: 
Robert Tindol
