Negative Refraction of Visible Light Demonstrated; Could Lead to Cloaking Devices

PASADENA, Calif.—For the first time, physicists have devised a way to make visible light bend in the direction opposite to the one in which it normally bends when passing from one material to another, such as from air into water or glass. The phenomenon is known as negative refraction and could in principle be used to construct optical microscopes for imaging things as small as molecules, and even to create cloaking devices for rendering objects invisible.

In the March 22 issue of the online publication Science Express, California Institute of Technology applied physics researchers Henri Lezec, Jennifer Dionne, and Professor Harry Atwater will report their success in constructing a nanofabricated photonic material that creates a negative index of refraction in the blue-green region of the visible spectrum. Lezec is a visiting associate in Atwater's Caltech lab, and Dionne is a graduate student in applied physics.

According to Lezec, the key to understanding the technology is first in understanding how light normally bends when it passes from one medium to another. If a pencil is placed in a glass of water at an angle, for example, it appears to bend upward and outward if we look into the water from a vantage point above the surface. This effect is due to the wave nature of light and the normal tendency of different materials to bend light by different amounts, in this case the materials being the air outside the glass and the water inside it.

However, physicists have thought that, if new optical materials could be constructed at the nanoscale level in a certain way, it might be possible to make the light bend at the same angle, but in the opposite direction. In other words, the pencil angled into the water would appear to bend backward as we looked at it.
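The bending in both cases is governed by Snell's law, n₁ sin θ₁ = n₂ sin θ₂, and a negative index on one side simply flips the sign of the refracted angle. A minimal sketch of that relationship (the index values here are illustrative, not taken from the paper):

```python
import math

def refraction_angle(n1, n2, incident_deg):
    """Solve Snell's law n1*sin(t1) = n2*sin(t2) for the refracted angle."""
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    return math.degrees(math.asin(s))

# Ordinary air-to-water refraction: the ray bends toward the normal
# but stays on the same side of it.
print(refraction_angle(1.0, 1.33, 30.0))   # ~22 degrees

# A hypothetical negative-index medium: the angle comes out negative,
# i.e., the ray emerges on the opposite side of the normal.
print(refraction_angle(1.0, -1.33, 30.0))  # ~-22 degrees
```

The mirror-image angle is what would make the pencil appear to bend backward.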

The details are complicated, but have to do with the speed of light through the material itself. Researchers in recent years have created materials with negative refraction for microwave and infrared frequencies. These achievements have exploited the relatively long wavelengths at those frequencies: the wavelength of microwaves is a few centimeters, and that of infrared radiation is about the width of a human hair. Visible light, because its wavelength is at microscopic dimensions (about one-hundredth the width of a hair), has defeated this conventional approach.

Dionne, one of the lead authors, says that the breakthrough is made possible by the Atwater lab's work on plasmonics, an emerging field that "squeezes" light with specially designed materials to create a wave known as a plasmon. In this case, the plasmons act in a manner somewhat similar to a wave carrying ripples across the surface of a lake, carrying light along the silver-coated surface of a silicon-nitride material, and then across a nanoscale gold prism so that the light reenters the silicon-nitride layer with negative refraction.

Thus, the process is not the same as the one used for negative refraction of microwaves and infrared radiation, but it still works, says Dionne. And this discovery is particularly exciting because visible light, as its name suggests, is the wavelength associated with the world of objects we see, provided they are not too small.

"Maybe you could create a superlens that can beat the diffraction limit," says Dionne. "You might be able to see DNA and protein molecules clearly just by looking at them, without having to use a more complicated method like X-ray crystallography."

Atwater, who is the Howard Hughes Professor and professor of applied physics and materials science at Caltech, says the plasmonic technique indeed has potential for a compact "perfect lens" that could have a huge number of biomedical and other technological applications. "Once the light coming from a nearby object passes through the negative-refraction material, it would be possible to recover all the spatial information," he says, adding that the loss of this information is why there is ordinarily a limit to the size of an object that can be seen in a microscope.

Even more tantalizing is the possibility of an optical "invisibility cloak" device that would surround an object and bend light in such a way that it would be perfectly refocused on the opposite side. This would provide perfect invisibility for the object inside the cloak, in a manner similar to the cloaks used by Harry Potter or the Klingons in the old Star Trek television series.

"Of course, anyone inside the cloak would not be able to see out," Atwater says.

"But maybe you could have some small windows," Dionne adds.

The title of the paper is "Negative Refraction at Visible Frequencies." It will be available on the Science Express website at http://www.sciencexpress.org when the embargo lifts and will be published in the journal Science at a later date. To obtain advance copies of the paper, contact the American Association for the Advancement of Science news office at (202) 326-6440 or scipak@aaas.org.

Writer: 
Robert Tindol

Researchers Create DNA Logic Circuits That Work in Test Tubes

PASADENA, Calif.—Computers and liquids are not very compatible, as many a careless coffee-drinking laptop owner has discovered. But a new breakthrough by researchers at the California Institute of Technology could result in future logic circuits that literally work in a test tube—or even in the human body.

In the current issue of the journal Science, a Caltech group led by computer scientist Erik Winfree reports that they have created DNA logic circuits that work in salt water, similar to an intracellular environment. Such circuits could lead to a biochemical microcontroller, of sorts, for biological cells and other complex chemical systems. The lead author of the paper is Georg Seelig, a postdoctoral scholar in Winfree's lab.

"Digital logic and water usually don't mix, but these circuits work in water because they are based on chemistry, not electronics," explains Winfree, an associate professor of computer science and computation and neural systems who is also a recipient of a MacArthur genius grant.

Rather than encoding signals in high and low voltages, the circuits encode signals in high and low concentrations of short DNA molecules. The chemical logic gates that perform the information processing are also DNA molecules, with each gate a carefully folded complex of multiple short DNA strands.

When a gate encounters the right input molecules, it releases its output molecule. This output molecule in turn can help trigger a downstream gate—so the circuit operates like a cascade of dominoes in which each falling domino topples the next one.

However, unlike dominoes and electronic circuits, components of these DNA circuits have no fixed position and cannot be simply connected by a wire. Instead, the chemistry takes place in a well-mixed solution of molecules that bump into each other at random, relying on the specificity of the designed interactions to ensure that only the right signals trigger the right gates.

"We were able to construct gates to perform all the fundamental binary logic operations—AND, OR, and NOT," explains Seelig. "These are the building blocks for constructing arbitrarily complex logic circuits."

As a demonstration, the researchers created a series of circuits, the largest one taking six inputs processed by 12 gates in a cascade five layers deep. While this is not large by the standards of Silicon Valley, Winfree says that it demonstrates several design principles that could be important for scaling up biochemical circuits.

"Biochemical circuits have been built previously, both in test tubes and in cells," Winfree says. "But the novel thing about these circuits is that their function relies solely on the properties of DNA base-pairing. No biological enzymes are necessary for their operation.

"This allows us to use a systematic and modular approach to design their logic circuits, incorporating many of the features of digital electronics," Winfree says.

Other advantages of the approach are signal restoration for the production of correct output even when noise is introduced, and standardization of the chemical-circuit signals by the use of translator gates that can use naturally occurring biological molecules, such as microRNA, as inputs. This suggests that the DNA logic circuits could be used for detecting specific cellular abnormalities, such as a certain type of cancer in a tissue sample, or even in vivo.

"The idea is not to replace electronic computers for solving math problems," Winfree says. "Compared to modern electronic circuits, these are painstakingly slow and exceedingly simple. But they could be useful for the fast-growing discipline of synthetic biology, and could help enable a new generation of technologies for embedding 'intelligence' in chemical systems for biomedical applications and bionanotechnology."

The other authors of the paper are David Soloveichik and Dave Zhang, both Caltech grad students in computation and neural systems.

Writer: 
Robert Tindol

Microfluidic Device Used for Multigene Analysis of Individual Environmental Bacteria

PASADENA, Calif.—When it comes to digestive ability, termites have few rivals; the activity in their guts allows them to literally digest a two-by-four. But they do not digest wood by themselves—they are dependent on the 200 or so diverse microbial species that call termite guts home and are found nowhere else in nature.

Despite many attempts, the majority of these beneficial organisms have never been cultivated in the laboratory. This has made it difficult to determine precisely which species perform the numerous, varied functions relevant to converting woody plant biomass into a material that can be directly used as food and energy by their insect hosts.

Now, scientists using state-of-the-art microfluidic devices have come up with a new way of investigating microbial ecology. In the December 1 issue of the journal Science, California Institute of Technology associate professor of environmental microbiology Jared Leadbetter, biology graduate student Elizabeth Ottesen, and their colleagues announce a new and efficient way of revealing guild-species relationships in complex microbial communities. The approach allows them to discover connections between bacterial cells from natural samples, and the activities encoded by genes.

The results also reveal important insights into the relationship between termites and key gut microbes called spirochetes, which aid them in the process of digesting wood.

"I think these results involve two pinnacles of novelty," says Leadbetter, "What we're showing are key results relevant to the symbiosis that occurs between termites and the bacteria involved in the conversion of wood fiber into a form of energy that can be used by the insect. But we're also revealing an approach that can lead to a better understanding of the many microbial processes that underlie the environments in which we all live."

According to Leadbetter, the techniques of gene amplification, cloning, and sequencing developed over the past two decades have already revolutionized microbial ecology. As a result, we now have a much greater appreciation of the vast diversity of microbial species occurring in nature, as well as the diversity of genes involved with processes that we know are mediated by as-yet unstudied microbes in the environment.

However, researchers have had difficulty in determining which subset of the species that have been inventoried actually encode these various key genes. The biggest problem has been the practice of extracting as one mass the composite genetic information of an entire, complex sample. This destroys the individual cells that are the source of the information, thus mixing together that which is encoded by hundreds if not thousands of unique species. As a result, the procedure inevitably dissolves the natural order underlying the organization of genetic information in the environment.

The approach of Leadbetter and his collaborators is to use microfluidic devices, in which thousands of individual cells harvested from the environment can be distributed into separate chambers prior to any gene-based analysis, so that each can be studied as an individual. If the cell reveals that it has a certain key gene of interest, then the researchers are also able to determine the species identity of the cell, or whether it contains other key genes of interest.

The traditional approach involves removing the gut contents of individual termites, smashing the microbial cells, then extracting and pooling their DNA as one mass, with subsequent analysis of the genes found in the randomized mash. The genes are there, but assigning relationships between any two genes or to the organisms from which they are derived is complicated at best, and often just not possible.

"We're trying to move beyond investigating the jumbled information," says Leadbetter. "In the past, trying to study a microbial environment using gene-based techniques was often like studying the contents of several hundred books in a library after first having torn off their covers, ripped up all the pages into small pieces, and jumbled them together into a big pile. We would find sentences and paragraphs that we found extremely interesting and important, but then we were left frustrated. It was very difficult to determine what was in the rest of the book.

"But with this technique, we are suddenly able to read portions of the books without having first torn off their covers. We are still reading with a narrow penlight, but certainly, when we identify a sentence of interest, we can rapidly ascertain the title and author of the book that we are reading, and even move on to examine the other pages."

In the paper, the researchers describe an analysis of a complex, species-rich microbial community that allowed two genes of interest to be colocalized to the same environmental genome. An early result analyzing thousands of individual cells harvested straight from the gut environment reveals the species identity of a group of microbes resident in the California dampwood termite (Zootermopsis) that perform a key act in the nutritional symbiosis involved in wood decay.

The good news for nonscientists is that this provides a new path to reaching a better understanding of many diverse ecosystems. It also leads to a refined appreciation of certain details underlying the activities of a destructive pest, while shedding light on a key step involved in the conversion of plant biomass into useful products. Understanding that conversion in detail is critical to achieving a current societal need: the conversion of low-value lignocellulose materials into biofuels and other commodities of greater value.

Termites are extremely abundant and active in many tropical ecosystems, so the current work could also lead to a better understanding of several processes of global environmental relevance, Leadbetter adds.

"There are 2,600 different species of termites, and it is estimated that there are at least a million billion individual termites on Earth. It is thought that they emit two and four percent of the global carbon dioxide and methane budget, respectively-both mediated directly or indirectly by their microbes," he says. "Also, by extrapolation of what we understand from numerous studies of a few dozen termites species, we think that there could be millions of unique and novel microbial species found only in the hindguts of termites."

The other authors of the paper are Stephen Quake, professor of bioengineering at Stanford, and Jong Wook Hong, an assistant professor of materials engineering at Auburn University.

Writer: 
Robert Tindol

Watson Lecture: Amazing Bubbles

PASADENA, Calif.--There is more to bubbles than just froth. The same phenomenon that puts foam in your latte can also break up kidney stones or chew holes in propellers. Understanding how bubbles form and collapse has led to a variety of applications, from faster torpedoes to cleaner teeth.

"Bubbles have some amazing properties that can be both harmful and beneficial," says Christopher E. Brennen, the Richard L. and Dorothy M. Hayman Professor of Mechanical Engineering at the California Institute of Technology. "In particular, they are used in a startling number of modern medical applications, for example the remote removal of kidney stones by lithotripsy [pulverizing stones with ultrasound]."

On Wednesday, November 8, Brennen will cover the history of these phenomena (including Caltech's special role in their discovery) and will end with a vision of new horizons for the bubble. His talk, "The Amazing World of Bubbles," is the second program of the fall/winter 2006-07 Earnest C. Watson Lecture Series.

The talk will be presented at 8 p.m. in Beckman Auditorium, 332 S. Michigan Avenue, south of Del Mar Boulevard, on the Caltech campus in Pasadena. Seating is available on a free, no-ticket-required, first-come, first-served basis.

Caltech has offered the Watson Lecture Series since 1922, when it was conceived by the late Caltech physicist Earnest Watson as a way to explain science to the local community.

For more information, call (626) 395-4652. Outside the greater Pasadena area, call toll-free, 1(888) 2CALTECH (1-888-222-5832).

###

Contact: Kathy Svitil (626) 395-8022 ksvitil@caltech.edu

Visit the Caltech Media Relations website at: http://pr.caltech.edu/media

Writer: 
KS

New All-Optical Modulator Paves the Way to Ultrafast Communications and Computing

PASADENA, Calif.-- In the 1950s, a revolution began when glass and metal vacuum tubes were replaced with tiny and cheap transistors. Today, for the cost of a single vacuum tube, you can buy a computer chip with literally millions of transistors.

Now physicists and engineers are looking to accomplish a similar shrinking act with the components of optical systems--lasers, modulators, detectors, and more--that are used to manipulate light. The goal: designing ultrafast computing and communications devices that use photons of light, instead of electrons, to transmit information and perform computations, all with unprecedented speed.

Researchers at the California Institute of Technology have now taken a significant step toward the creation of all-optical logic devices by developing a new silicon and polymer waveguide that can manipulate light signals using light, at speeds almost 100 times as fast as conventional electron-based optical modulators.

The all-optical modulator consists of a silicon waveguide, about one centimeter long and a few microns wide, that is blanketed with a novel nonlinear polymer developed at the University of Washington. As light passes through the waveguide, it is split into two signals, an input, or "gate," beam and a source beam. "We can manipulate where the source goes by turning on and off the gate," says Michael Hochberg, a postdoctoral researcher at Caltech. The modulator could be switched on and off a trillion times or more per second.

Hochberg and Tom Baehr-Jones developed the system, which is described in the September issue of the journal Nature Materials, with Caltech colleague Axel Scherer, the Neches Professor of Electrical Engineering, Applied Physics, and Physics. The optical polymers were developed in the laboratories of Larry Dalton and Alex K. Y. Jen at the University of Washington.

Because the system is silicon based, it is easily scalable. "We can add complexity through standard silicon processing," Hochberg says, which means the system "provides a path toward eventually making optical processors. Because all-optical devices are intrinsically faster, you could do computations at terahertz speeds, rather than gigahertz."

"In a few years, we hope to take a device like this and make all-optical transistors that give us signal gain-which means that you can put in a small amount of power on the gate and get out a large amount of power change on the drain, just as regular transistors do. Once we can do that, the whole world opens up," Hochberg says.

###

Contact: Kathy Svitil (626) 395-8022 ksvitil@caltech.edu

Visit the Caltech Media Relations Web site at: http://pr.caltech.edu/media

Writer: 
KS

Caltech Researchers Reveal Three Distinct Modes of Dynamic Friction Rupture with Implications for Earthquake Behavior

PASADENA, Calif.-A new study by researchers at the California Institute of Technology has revealed important findings about the nature of ruptures and sliding behavior, which could impact how we respond to earthquakes and other disasters.

In the modeling of earthquake ruptures, researchers have for some time proposed that three primary modes of rupture may occur at a fault line during an earthquake. The experimental visualization of these rupture, or sliding, modes, including the "self-healing" pulse rupture, has now been achieved for the first time in the Graduate Aeronautical Laboratories of the California Institute of Technology (GALCIT) using dynamic high-speed photoelasticity and laser vibrometry.

Ares Rosakis, Director of GALCIT and the Theodore von Karman Professor of Aeronautics and Professor of Mechanical Engineering at Caltech, says, "The existence of these rupture failure modes had never been directly confirmed. We are the first to create the conditions in the laboratory to generate and visualize these rupture and sliding phenomena. Utilizing ultrahigh-speed optical instrumentation, we have been able to see these modes in a laboratory, which has produced some rather counterintuitive results. The results of this research could have a significant impact on understanding earthquake behavior and ruptures, could validate existing theoretical and numerical methodologies currently used in seismology, and could one day help us to potentially mitigate massive earthquake damage."

In controlled laboratory conditions, Rosakis, along with his collaborators Guruswami Ravichandran, the John E. Goode, Jr. Professor of Aeronautics and Mechanical Engineering, and George Lykotrafitis, their former PhD student and currently a postdoctoral scholar at MIT, created laboratory ruptures propagating along "incoherent," or frictional, interfaces separating identical materials. Combining high-speed photography with a technique called laser vibrometry, the team conclusively confirmed the existence of the rupture mode types and measured the exact point of rupture, the sliding velocity, and the rupture propagation speed. Ultrahigh-speed photography, providing up to two million photographs per second, and dynamic photoelasticity were combined with laser vibrometry to give an accurate measurement of the sliding velocities and to reveal the various modes of sliding.

Specifically, to conduct the research, the GALCIT team compressed two sheets of Homalite, a clear polymeric material. They then shot projectiles at various velocities into one of the two sheets. The high-speed camera and the interferometers were simultaneously triggered. Two laser vibrometers measured both the horizontal and the vertical particle velocities just above and below the sliding interface, thus providing a time record of the relative sliding and opening speeds as the dynamic rupture went by at speeds in excess of 1.0 km/s. They also controlled the parameters of impact speed, confining pressure, and surface roughness to measure the dynamic sliding.

Theoretical models have predicted that shear ruptures assume either a sliding crack rupture mode; a pulse-like mode; a wrinkle-like opening pulse mode; or a mixed rupture combination of these modes. In earthquake faulting, a sliding crack mode would occur where a large section of the interface slides behind a fast-moving rupture front and continues to slide for a long time. In the "self-healing" slip pulse mode (first proposed in the early '90s by Thomas Heaton, professor of engineering seismology and civil engineering at Caltech) the rupture will actually slide or "crawl" along the fault in a pulse-like motion, and the fault will then recompress or "self-heal" behind the pulse. In this pulse-like sliding mode, the slip is confined to a finite distance behind the propagating rupture front, while the fault behind it relocks. The third mode, the wrinkle-like opening pulse mode, is similar to the sliding pulse but would actually create a vertical opening across the fault plane followed by self-healing (something like a ripple on a carpet). Through this new laboratory research technology, these phenomena were actually seen and verified.

This research may eventually provide new insights into how we view, react, and prepare for earthquakes. "In studying seismology and the physics of earthquakes, there is no way to internally visualize the earth's crust," comments team member Ravichandran. "In the lab, we have now confirmed the existence and behaviors of crack-like sliding, self-healing pulse-like sliding, and wrinkle-like opening pulse sliding. This research can provide vital information to help determine the behavior of earthquake ruptures and sliding. Now we have a way to 'see' these behaviors, which can provide new avenues of understanding for these occurrences."

This technology also has potential applications for any composite structure containing coherent and incoherent interfaces; for example, the durability of the new generation of high-speed naval vessels that are constructed with layered structures in their hulls and are subject to dynamic wave slamming or underwater threats can be studied by such techniques. Ravichandran is the director of a newly established Multidisciplinary University Research Initiative sponsored by the Office of Naval Research at GALCIT whose purpose is the study of such phenomena as they pertain to the reliability of the new generation of naval vessels.

The findings from the GALCIT team are being published in the September 22 issue of the journal Science. The title of their article is "Self-Healing Pulse-Like Shear Ruptures in the Laboratory."

This work has been sponsored by the National Science Foundation, the U.S. Department of Energy, and the Office of Naval Research.

###

Contact: Deborah Williams-Hedges (626) 395-3227 debwms@caltech.edu

Visit the Caltech Media Relations website at: http://pr.caltech.edu/media

Writer: 
DWH

Caltech Researchers Announce Invention of the Optofluidic Microscope

PASADENA, Calif.—The old optical microscopes that everyone used in high-school biology class may be a step closer to the glass heap. Researchers at the California Institute of Technology have announced their invention of an optofluidic microscope that uses no lens elements and could revolutionize the diagnosis of certain diseases such as malaria.

Reporting in the journal Lab on a Chip, Caltech assistant professor of electrical engineering Changhuei Yang and his coauthors describe the novel device, which combines chip technology with microfluidics. Although similar in resolution and magnifying power to a conventional top-quality optical microscope, the optofluidic microscope chip is only the size of a quarter, and the entire device—imaging screen and all—will be about the size of an iPod.

"This is a new way of doing microscopy," says Yang, who also has a dual appointment in bioengineering at Caltech. "Its imaging principle is similar to the way we see floaters in our eyes. If you can see it in a conventional microscope and it can flow in a microfluidic channel, we can image it with this tiny chip."

That list of target objects includes many pathogens that are most dangerous to human life and health, including the organism that causes malaria. The typical method of diagnosing malaria is to draw a blood sample and send it to a lab where the sample can be inspected for malaria parasites. A high-powered optical microscope with lens elements is far too big and cumbersome for inspection of samples in the field.

With a palm-sized optofluidic microscope, however, a doctor would be able to draw a drop of blood from the patient and analyze it immediately. This process would be much simpler and faster than the current method, and the equipment would be far cheaper and more readily available to physicians in third-world countries.

The device works by literally flowing a target sample across a tiny fluid pathway. Normally, the image would be low in resolution because each feature of the target would interrupt the light over an entire sensor pixel, limiting the resolution to the pixel size.

However, the researchers have avoided this limitation by attaching an opaque metal film to a microfluidic chip. The film contains an etched array of submicron apertures that are spaced in such a way that adjacent line scans overlap and all parts of the target are imaged.
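The geometry can be pictured as a diagonal line of pinholes across the channel: each aperture sits at a slightly different transverse position, so each records a different line of the sample as it flows past, and the line scans together tile the whole target. A rough sketch of that scanning idea (the array layout and dimensions here are illustrative, not the paper's actual design):

```python
# Toy simulation: a sample (2-D array) flows left-to-right over a diagonal
# row of apertures, one aperture per transverse row of the image.
sample = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [0, 1, 0, 1],
]
n_rows, n_cols = len(sample), len(sample[0])

image = [[0] * n_cols for _ in range(n_rows)]
for k in range(n_rows):                 # aperture k covers transverse row k
    for t in range(n_cols + n_rows):    # time steps as the sample flows past
        col = t - k                     # diagonal layout: aperture k sees each
        if 0 <= col < n_cols:           # column k steps later than aperture 0
            image[k][col] = sample[k][col]  # record light through the aperture

print(image == sample)  # True: the staggered line scans cover the full image
```

Because resolution is now set by the submicron aperture size rather than the pixel size, the chip can match a conventional microscope's resolving power.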

The new optofluidic microscope is one of the first major accomplishments to come out of Caltech's Center for Optofluidic Integration, which was begun in 2004 with funding from the federal Defense Advanced Research Projects Agency (DARPA) for development of a new generation of small-scale, highly adaptable, and innovative optical devices.

"The basic idea of the center is to build optical devices for imaging, fiber optics, communications, and other applications, and to transcend some of the limitations of optical devices made out of traditional materials like glass," says Demetri Psaltis, who is the Myers Professor of Electrical Engineering at Caltech and a coauthor of the paper. "This is probably the most important result so far showing how we can build very unique devices that can have a broad impact."

Xin Heng, a graduate student in electrical engineering at Caltech, performed most of the experiments reported in the paper. The other Caltech authors are David Erickson, a former postdoctoral scholar who is now a mechanical-engineering professor at Cornell University; L. Ryan Baugh, a postdoctoral scholar in biology; Zahid Yaqoob, a postdoctoral scholar in electrical engineering; and Paul W. Sternberg, the Morgan Professor of Biology.

Writer: 
Robert Tindol

Researchers Announce New Way to Assess How Buildings Would Stand Up in Big Quakes

PASADENA, Calif.—How much damage will certain steel-frame, earthquake-resistant buildings located in Southern California sustain when a large temblor strikes? It's a complicated, multifaceted question, and researchers from the California Institute of Technology, the University of California, Santa Barbara, and the University of Pau, France, have answered it with unprecedented specificity using a new modeling protocol.

The results, which involve supercomputer simulations of what could happen to specific areas of greater Los Angeles in specific earthquake scenarios, were published in the latest issue of the Bulletin of the Seismological Society of America, the premier scientific journal dedicated to earthquake research.

"This study has brought together state-of-the-art 3-D-simulation tools used in the fields of earthquake engineering and seismology to address important questions that people living in seismically active regions around the world worry about," says Swaminathan Krishnan, a postdoctoral scholar in geophysics at Caltech and lead author of the study.

"What if a large earthquake occurred on a nearby fault? Would a particular building withstand the shaking? This prototype study illustrates how, with the help of high-performance computing, 3-D simulations of earthquakes can be combined with 3-D nonlinear analyses of buildings to provide realistic answers to these questions in a quantitative manner."

The publication of the paper is an ambitious attempt by the researchers to enhance and improve the methodology used to assess building integrity, says Jeroen Tromp, the McMillan Professor of Geophysics and director of the Seismological Laboratory at Caltech. "We are trying to change the way in which seismologists and engineers approach this difficult interdisciplinary problem," Tromp says.

The research simulates the effects that two different 7.9-magnitude San Andreas earthquakes would have on two hypothetical 18-story steel-frame buildings located at 636 sites on a grid that covers the Los Angeles and San Fernando basins. An earthquake of this magnitude occurred on the San Andreas on January 9, 1857, and seismologists generally agree that the fault has the potential for such an event every 200 to 300 years. To put this in context, the much smaller January 17, 1994, Northridge earthquake of 6.7 magnitude caused 57 deaths and economic losses of more than $40 billion.

The simulated earthquakes "rupture" a 290-kilometer section of the San Andreas fault between Parkfield in the Central Valley and Southern California, one earthquake with rupture propagating southward and the other with rupture propagating northward. The first building is a model of an actual 18-story, steel moment-frame building located in the San Fernando Valley. It was designed according to the 1982 Uniform Building Code (UBC) standards yet suffered significant damage in the 1994 Northridge earthquake due to fracture of the welds connecting the beams to the columns. The second building is a model of the same San Fernando Valley structure redesigned to the stricter 1997 UBC standards.

Using a high-performance PC cluster, the researchers simulated both earthquakes and the damage each would cause to the two buildings at each of the 636 grid sites. They assessed the damage to each building based on "peak interstory drift."

Interstory drift is the difference between the roof and floor displacements of any given story as the building sways during the earthquake, normalized by the story height. For example, for a 10-foot-high story, an interstory drift of 0.10 indicates that the roof is displaced one foot relative to the floor below.

The greater the drift, the greater the likelihood of damage. Peak interstory drift values larger than 0.06 indicate severe damage, while values larger than 0.025 indicate damage serious enough to threaten human safety. Values in excess of 0.10 indicate probable building collapse.
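The drift calculation and the damage thresholds quoted above can be sketched in a few lines of code. This is an illustrative sketch only, not the researchers' simulation code; the displacement values in the example are hypothetical.

```python
# Compute peak interstory drift from a profile of lateral floor
# displacements, then classify the result using the damage
# thresholds quoted in the article.

def peak_interstory_drift(displacements, story_height):
    """displacements: lateral displacements at each floor level,
    ground floor first, in the same units as story_height."""
    drifts = [
        abs(upper - lower) / story_height
        for lower, upper in zip(displacements, displacements[1:])
    ]
    return max(drifts)

def damage_category(drift):
    # Thresholds from the Bulletin of the Seismological Society
    # of America study described in the article.
    if drift > 0.10:
        return "probable collapse"
    if drift > 0.06:
        return "severe damage"
    if drift > 0.025:
        return "serious threat to safety"
    return "limited damage"

# Hypothetical example: displacements (in feet) at four floor
# levels of a building with 10-foot stories.
floors_ft = [0.0, 0.4, 1.1, 2.0]
drift = peak_interstory_drift(floors_ft, story_height=10.0)
print(drift, damage_category(drift))
```

Note that drift is a ratio of displacements measured at adjacent levels, so the whole-building roof displacement alone does not determine it; a building can have a modest roof displacement yet a dangerous drift concentrated in one story.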

The study's conclusions include the following:

o A 7.9-magnitude San Andreas rupture from Parkfield to Los Angeles results in greater damage to both buildings than a rupture from Los Angeles to Parkfield. This difference is due to the effects of directivity and slip distribution controlling the ground-motion intensity. In the north-to-south rupture scenario, peak ground displacement is two meters in the San Fernando Valley and one meter in the Los Angeles basin; for the south-to-north rupture scenario, ground displacements are 0.6 meters and 0.4 meters respectively.

o In the north-to-south rupture scenario, peak drifts in the model of the existing building far exceed 0.10 in the San Fernando Valley, Santa Monica, and West Los Angeles, Baldwin Park and its neighboring cities, Compton and its neighboring cities, and Seal Beach and its neighboring cities. Peak drifts are in the 0.06-0.08 range in Huntington Beach, Santa Ana, Anaheim, and their neighboring cities, whereas the values are in the 0.04-0.06 range for the remaining areas, including downtown Los Angeles.

o The results for the redesigned building are better than for the existing building. Although the peak drifts in some areas in the San Fernando Valley still exceed 0.10, they are in the range of 0.04-0.06 for most cities in the Los Angeles basin.

o In the south-to-north rupture, the peak drifts in both the existing and redesigned building models are in the range of 0.02-0.04, suggesting no significant danger of collapse. Drifts in this range nevertheless indicate damage serious enough to warrant building closures and, in some instances, compromise human safety.

Such hazard analyses have numerous applications, Krishnan says. They could be performed on specific existing and proposed buildings in particular areas for a range of types of earthquakes, providing information that developers, building owners, city planners, and emergency managers could use to make better, more informed decisions.

"We have shown that these questions can be answered, and they can be answered in a very quantitative way," Krishnan says.

The research paper is "Case Studies of Damage to Tall Steel Moment-Frame Buildings in Southern California during Large San Andreas Earthquakes," by Swaminathan Krishnan, Chen Ji, Dimitri Komatitsch, and Jeroen Tromp. Online movies of the earthquakes and building-damage simulations can be viewed at http://www.ce.caltech.edu/krishnan.

Contact: Jill Perry (626) 395-3226 jperry@caltech.edu

Writer: 
RT

Physicists Devise New Technique for Detecting Heavy Water

PASADENA, Calif.—Scientists at the California Institute of Technology have created a new method of detecting heavy water that is 30 times more sensitive than any other existing method. The detection method could be helpful in the fight against international nuclear proliferation.

In the June 15 issue of the journal Optics Letters, Caltech doctoral student Andrea Armani and her professor Kerry Vahala report that a special type of tiny optical device can be configured to detect heavy water. Called an optical microresonator, the device is shaped something like a mushroom and was originally designed three years ago to store light for future opto-electronic applications. With a diameter smaller than that of a human hair, the microresonator is made of silica and is coupled with a tunable laser.

The technique works because of the difference between the molecular composition of heavy water and regular water. An H2O molecule has two hydrogen atoms, each consisting of a single proton and a single electron. A D2O molecule, by contrast, has two atoms of the hydrogen isotope deuterium, each of which also contains a neutron in addition to the proton and electron. This makes a heavy-water molecule significantly more massive than a regular water molecule.

"Heavy water isn't a misnomer," says Armani, who is finishing up her doctorate in applied physics and will soon begin a two-year postdoctoral appointment at Caltech. Armani says that heavy water looks just like regular water to the naked eye, but an ice cube made of the stuff will sink if placed in regular water because of its added density. This difference in mass, in fact, is what makes the detection of heavy water possible in Armani and Vahala's new technique. When the microresonator is placed in heavy water, the difference in optical absorption results in a change in the "Q factor," a number that measures how efficiently an optical resonator stores light. If a higher Q factor is detected than one would see for normal water, then more heavy water is present than the roughly one in 6,400 water molecules that occur naturally.

The technique is so sensitive that one heavy-water molecule in 10,000 can be detected, Armani says. Furthermore, the Q factor changes steadily as the heavy-water concentrations are varied.
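The dependence of Q on heavy-water concentration can be illustrated with a back-of-the-envelope model. In an absorption-limited resonator, Q scales inversely with the optical absorption of the surrounding liquid, and mixing in D2O, which absorbs far less than H2O at near-infrared wavelengths, lowers the net absorption and raises Q. All numeric values below are assumptions chosen for illustration, not figures from the Optics Letters paper.

```python
# Toy model: absorption-limited Q of a resonator immersed in an
# H2O/D2O mixture, Q_abs ~ 2*pi*n / (wavelength * alpha).
import math

N_WATER = 1.33          # refractive index of water (assumption)
WAVELENGTH_CM = 1.3e-4  # 1300-nm probe laser, in cm (assumption)
ALPHA_H2O = 1.2         # H2O absorption at this wavelength, 1/cm (assumption)
ALPHA_D2O = 0.01        # D2O absorption, 1/cm (assumption)

def absorption_limited_q(d2o_fraction):
    """Q limited by the linearly mixed absorption of the liquid."""
    alpha = d2o_fraction * ALPHA_D2O + (1 - d2o_fraction) * ALPHA_H2O
    return 2 * math.pi * N_WATER / (WAVELENGTH_CM * alpha)

# Q rises monotonically as the D2O fraction grows, which is what
# lets a Q measurement be read back as a concentration.
for frac in (0.0, 1e-4, 1e-2, 1.0):
    print(f"D2O fraction {frac:8.4f}: Q = {absorption_limited_q(frac):.3e}")
```

In this toy model, pure D2O yields a Q roughly two orders of magnitude higher than pure H2O; the steady variation of Q with concentration, noted in the article, is what makes the device usable as a quantitative sensor rather than a yes/no detector.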

The results are good news for those who worry about the escalation of nuclear weapons, because heavy water is typically found wherever someone is trying to control a nuclear chain reaction. As a nuclear moderator, heavy water can be used to control the way neutrons bounce around in fissionable material, thereby making it possible for a fission reactor to be built.

The ongoing concern with heavy water is exemplified by the fact that Armani and Vahala have received funding for their new technique from the Defense Advanced Research Projects Agency, or DARPA. The federal agency funds university research that has potential applications for U.S. national defense.

"This technique is 30 times better than the best competing detection technique, and we haven't yet tried to reduce noise sources," says Armani. "We think even greater sensitivities are possible."

The paper is entitled "Heavy Water Detection Using Ultra-High-Q Microcavities" and is available online at http://ol.osa.org/abstract.cfm?id=90020.

Writer: 
Robert Tindol

Jerry Marsden Elected to Royal Society

PASADENA, Calif.—Jerry Marsden, the Carl F. Braun Professor of Engineering and Control and Dynamical Systems at the California Institute of Technology, has been elected a fellow of the Royal Society of the United Kingdom. Marsden joins 43 other scientists as this year's new fellows of a society that through the years has counted Isaac Newton, Charles Darwin, Albert Einstein, and Stephen Hawking among its members.

Marsden was cited by the Royal Society for "his fundamental contributions to a very wide range of topics such as Hamiltonian systems, fluid mechanics, plasma physics, general relativity, dynamical systems and chaos, nonlinear elasticity, nonholonomic mechanics, control theory, variational integrators and solar system mission design. Some of his recent research has contributed to understanding and designing NASA missions to the moons of Jupiter."

Marsden earned his bachelor's degree in applied mathematics from the University of Toronto and his doctorate in applied mathematics from Princeton University. He taught at UC Berkeley from 1968 to 1995 and then came to Caltech as a professor of control and dynamical systems, becoming the Carl F. Braun Professor in 2003. In the early 1970s, he was one of the original founders of reduction theory for mechanical systems with symmetry. Marsden received the 1990 AMS-SIAM Norbert Wiener Prize and the SIAM John von Neumann Prize in 2005. He is also a recipient of the Research Award for Natural Sciences of the Alexander von Humboldt Foundation in 1992 and 1999 and the 2000 Max Planck Research Award for Mathematics and Computer Science.

The Royal Society was established in England in 1660 and is the world's oldest scientific academy in continuous existence. The society's objectives are to recognize excellence in science, to support leading-edge scientific research and its applications, to stimulate international interaction, and to promote education and the public's understanding of science.

Writer: 
Robert Tindol
