Creating Indestructible Self-Healing Circuits

Caltech engineers build electronic chips that repair themselves

PASADENA, Calif.—Imagine that the chips in your smart phone or computer could repair and defend themselves on the fly, recovering in microseconds from problems ranging from less-than-ideal battery power to total transistor failure. It might sound like the stuff of science fiction, but a team of engineers at the California Institute of Technology (Caltech), for the first time ever, has developed just such self-healing integrated chips.

The team, made up of members of the High-Speed Integrated Circuits laboratory in Caltech's Division of Engineering and Applied Science, has demonstrated this self-healing capability in tiny power amplifiers. The amplifiers are so small, in fact, that 76 of the chips—including everything they need to self-heal—could fit on a single penny. In perhaps the most dramatic of their experiments, the team destroyed various parts of their chips by zapping them multiple times with a high-power laser, and then observed as the chips automatically developed a work-around in less than a second.

"It was incredible the first time the system kicked in and healed itself. It felt like we were witnessing the next step in the evolution of integrated circuits," says Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering at Caltech. "We had literally just blasted half the amplifier and vaporized many of its components, such as transistors, and it was able to recover to nearly its ideal performance."

The team's results appear in the March issue of IEEE Transactions on Microwave Theory and Techniques.

Until now, even a single fault has often rendered an integrated-circuit chip completely useless. The Caltech engineers wanted to give integrated-circuit chips a healing ability akin to that of our own immune system—something capable of detecting and quickly responding to any number of possible assaults in order to keep the larger system working optimally. The power amplifier they devised employs a multitude of robust, on-chip sensors that monitor temperature, current, voltage, and power. The information from those sensors feeds into a custom-made application-specific integrated-circuit (ASIC) unit on the same chip, a central processor that acts as the "brain" of the system. The brain analyzes the amplifier's overall performance and determines if it needs to adjust any of the system's actuators—the changeable parts of the chip.

Interestingly, the chip's brain does not operate based on algorithms that know how to respond to every possible scenario. Instead, it draws conclusions based on the aggregate response of the sensors. "You tell the chip the results you want and let it figure out how to produce those results," says Steven Bowers, a graduate student in Hajimiri's lab and lead author of the new paper. "The challenge is that there are more than 100,000 transistors on each chip. We don't know all of the different things that might go wrong, and we don't need to. We have designed the system in a general enough way that it finds the optimum state for all of the actuators in any situation without external intervention."
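
The approach amounts to a general feedback-optimization loop: read the sensors, compare the aggregate result to the requested one, and keep whatever actuator adjustment narrows the gap. A minimal sketch of that idea follows; it is an illustration only, not the team's actual ASIC logic, and the amplifier model, actuator settings, and target value in it are hypothetical.

```python
import random

# Hypothetical stand-in for the amplifier plus its on-chip sensors: the measured
# output power as a function of the actuator settings. In the real chip this is
# physical hardware reporting temperature, current, voltage, and power.
def measure_power(settings):
    return 10.0 - sum((s - 3.0) ** 2 for s in settings)

def self_heal(settings, target_power, steps=500):
    """Greedy search over actuator settings: the 'brain' never enumerates
    failure modes, it simply keeps any tweak that moves the measured result
    toward the requested one."""
    best_err = abs(measure_power(settings) - target_power)
    for _ in range(steps):
        i = random.randrange(len(settings))            # pick one changeable part
        old = settings[i]
        settings[i] = old + random.uniform(-0.5, 0.5)  # nudge it
        err = abs(measure_power(settings) - target_power)
        if err < best_err:
            best_err = err       # keep the change: performance improved
        else:
            settings[i] = old    # revert: the change made things worse
    return settings, best_err

# Start from a badly mis-tuned (or "damaged") state and let the loop recover.
print(self_heal([0.0, 5.5, 1.2, 4.8], target_power=10.0))
```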

Looking at 20 different chips, the team found that the amplifiers with the self-healing capability consumed about half as much power as those without, and their overall performance was much more predictable and reproducible. "We have shown that self-healing addresses four very different classes of problems," says Kaushik Dasgupta, another graduate student working on the project. Those classes are static variation stemming from component-to-component differences; long-term aging problems that arise gradually as repeated use changes the internal properties of the system; short-term variations induced by environmental conditions such as changes in load, temperature, and supply voltage; and, finally, accidental or deliberate catastrophic destruction of parts of the circuits.

The Caltech team chose to demonstrate this self-healing capability first in a power amplifier for millimeter-wave frequencies. Such high-frequency integrated chips are at the cutting edge of research and are useful for next-generation communications, imaging, sensing, and radar applications. By showing that the self-healing capability works well in such an advanced system, the researchers hope to show that the self-healing approach can be extended to virtually any other electronic system.

"Bringing this type of electronic immune system to integrated-circuit chips opens up a world of possibilities," says Hajimiri. "It is truly a shift in the way we view circuits and their ability to operate independently. They can now both diagnose and fix their own problems without any human intervention, moving one step closer to indestructible circuits."

Along with Hajimiri, Bowers, and Dasgupta, former Caltech postdoctoral scholar Kaushik Sengupta (PhD '12), who is now an assistant professor at Princeton University, is also a coauthor on the paper, "Integrated Self-Healing for mm-Wave Power Amplifiers." A preliminary report of this work won the best paper award at the 2012 IEEE Radio Frequency Integrated Circuits Symposium. The work was funded by the Defense Advanced Research Projects Agency and the Air Force Research Laboratory.

Writer: Kimm Fesenmaier

Under the Hood of the Earthquake Machine

Watson Lecture Preview

 

What makes an earthquake go off? Why are earthquakes so difficult to forecast? Professor of Mechanical Engineering and Geophysics Nadia Lapusta gives us a close-up look at the moving parts, as it were, at 8:00 p.m. on Wednesday, February 13, 2013, in Caltech's Beckman Auditorium. Admission is free.

 

Q: What do you do?

A: I study friction as it relates to earthquakes. At a depth of five miles, which is the average depth at which large earthquakes in Southern California occur, the compression on the two sides of the fault is roughly equivalent to a pressure of 1,500 atmospheres. So you can imagine that friction plays an important role. I make computational models that combine our theories about friction with laboratory studies of how materials behave. We try to reproduce what seismologists, geodesists, and geologists see actual earthquakes doing, in order to infer the physical laws that govern them.
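
For a sense of where a figure like 1,500 atmospheres comes from, here is a back-of-the-envelope estimate (an illustration added here, not Lapusta's own calculation; the rock density and the hydrostatic pore-pressure assumption are typical textbook values):

```latex
% Effective compressive stress at ~5 miles (~8 km) depth, assuming crustal rock
% of density ~2700 kg/m^3 and hydrostatic pore-fluid pressure (assumed values):
\sigma_{\mathrm{eff}} \approx (\rho_{\mathrm{rock}} - \rho_{\mathrm{water}})\, g\, h
  \approx (2700 - 1000)\ \mathrm{kg/m^3} \times 9.8\ \mathrm{m/s^2} \times 8000\ \mathrm{m}
  \approx 1.3 \times 10^{8}\ \mathrm{Pa} \approx 1300\ \mathrm{atm},
```

which is the same order of magnitude as the figure quoted above.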

Our planet's surface is made up of a bunch of plates that are always moving, and an earthquake happens when the locked boundaries of the plates rapidly catch up with the slow motion of the plates themselves. You get a sudden shearing—a sideways motion that generates the destructive waves that we perceive as shaking.

A number of factors affect this process. If you rub your palms together, you generate heat. An earthquake is a very intensive rubbing of palms, if you will, and so a lot of heat is produced—enough to weaken the rocks and perhaps even melt them.

However, there are pore fluids permeating the rocks—we often get our drinking water from underground aquifers, for example. As these fluids heat up, they expand, which modifies the shearing process. They produce expanding cushions of steam, essentially, which reduce the friction.

The waves generated by the shearing motion put an additional load on the fault ahead of the shear zone, so they actually affect how the shearing progresses. The shear tip sprouts at about three kilometers per second, or 6,700 miles per hour. So an earthquake is a highly dynamic, nonlinear system.

To make things even more interesting, a fault doesn't just sit still for hundreds of years, waiting for the next big earthquake. It's more like a living thing—there are slow slippages between earthquakes that constantly redistribute the forces in the system, and the exact point where an earthquake initiates depends a lot on these slow motions. So we simulate thousands of years of fault history that includes occasional, very fast events lasting just a few seconds. These calculations are very time-consuming and memory-intensive. The Geological and Planetary Sciences Division's supercomputer has several thousand processors, and we routinely use 200 to 400 of them, sometimes for weeks at a time. We would happily use the entire machine, but of course people would yell at us.
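
The expense comes from the huge range of time scales: centuries of slow creep punctuated by seconds of fast slip. Simulations of this kind typically adapt their time step to the current slip speed; the sketch below illustrates only that general idea (it is not the group's code, and the velocity thresholds and the toy slip-rate function are made up).

```python
SECONDS_PER_YEAR = 3.15e7

def simulate_fault(total_years, slip_rate):
    """Toy adaptive time-stepper: century-scale steps while the fault creeps,
    sub-second steps while it ruptures. In a real model, slip_rate(t) would come
    from solving the coupled friction and elasticity equations."""
    t, history = 0.0, []
    while t < total_years:
        v = slip_rate(t)                      # slip velocity (m/s) at time t
        if v > 1e-3:                          # earthquake-like speeds
            dt = 0.01 / SECONDS_PER_YEAR      # ~10 ms steps to resolve the rupture
        elif v > 1e-9:                        # accelerating slow slip
            dt = 1.0 / 365.0                  # daily steps
        else:
            dt = 1.0                          # interseismic creep: yearly steps
        history.append((t, v))
        t += dt
    return history

# Example: a fault that is quiet except for one ten-second burst of fast slip.
quake = lambda t: 1.0 if 1000.0 <= t < 1000.0 + 10.0 / SECONDS_PER_YEAR else 1e-12
print(len(simulate_fault(2000.0, quake)), "steps for 2000 years of fault history")
```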

 

Q: How did you get into this line of work?

A: I've loved both mathematics and physics since I was a child. I was born in Ukraine, where my mom was a professor of applied mathematics and my dad was a civil engineer. They used to give me math and physics problems from a very early age. I did my undergraduate studies in applied mathematics in Kiev, and I was thinking of going into materials science. I came to the U.S. for graduate school, and my advisor at Harvard was working on materials failure and on earthquakes, which I found very interesting because it combined math and physics with a problem relevant to society.

My PhD was on frictional sliding and some initial models of earthquakes. Caltech is actually the perfect place to continue that, because it has world-class expertise in all relevant disciplines. I have wonderful colleagues, and the really fun part is working with them. I enjoy interacting with the experimentalists and talking to the people who make field observations or do radar measurements from satellites. They have different perspectives, different terminologies, and different views of the problem, so it's fun to try to explain to them what you mean, and to try to understand what they mean. And the most fun, of course, is when you come to an understanding that leads to new science in the end.

 

Q: Speaking of societal relevance, what does your work mean for us here in L.A.?

A: Large earthquakes, fortunately, are relatively rare, so we don't have detailed observations of very many of them. Our models, however, allow us to explore scenarios for potentially very damaging earthquakes that we haven't experienced. For example, faults have locked segments and creeping segments. The San Andreas fault has a creeping segment between Los Angeles and San Francisco, and the assumption has been that this segment will confine a large earthquake to either the southern or the northern part of the fault. Only one large urban area would be affected. However, our models show that a through-going rupture may be possible. If that happens, both Los Angeles and San Francisco are affected, and you have a much bigger problem on your hands.

 

Named for the late Caltech professor Earnest C. Watson, who founded the series in 1922, the Watson Lectures present Caltech and JPL researchers describing their work to the public. Many past Watson Lectures are available online at Caltech's iTunes U site.

Writer: Douglas Smith

Murray and Ortiz Elected to the National Academy of Engineering

Election brings Caltech faculty's membership in the academy to 35

PASADENA, Calif.—Richard M. Murray and Michael Ortiz of the California Institute of Technology (Caltech) have been elected to the National Academy of Engineering (NAE), an honor considered among the highest professional distinctions an engineer can receive. In total, the academy welcomed 69 new American members and 11 foreign associates this year.

"I am absolutely delighted that the Academy has elected Richard and Michael," says Ares Rosakis, the Theodore von Kármán Professor of Aeronautics, professor of mechanical engineering, and chair of the Division of Engineering and Applied Science at Caltech. "This is not only a recognition of their great contributions and unwavering commitment to engineering research and education, but also a confirmation of the great impact Caltech engineers and applied scientists are having on the field."

Richard Murray, the Thomas E. and Doris Everhart Professor of Control and Dynamical Systems and Bioengineering, was cited by the National Academy of Engineering for his "contributions in control theory and networked control systems with applications to aerospace engineering, robotics, and autonomy." His current work focuses on the application of feedback and control to networked systems, especially in the biological realm where he is interested in engineered biological circuits.

"It's a great honor to be elected as a member of the NAE," Murray says. "Caltech's strong support for junior faculty, our ability to recruit outstanding students and postdocs, and the highly collaborative nature of the academic environment have allowed my group to help identify important problem areas and make rapid progress in our research. I am particularly appreciative of all of the encouragement, mentoring, and support that I received as a junior faculty member from the Division of Engineering and Applied Science and my colleagues in mechanical engineering and control and dynamical systems."

Murray earned his BS in electrical engineering from Caltech in 1985 and his MS and PhD from the University of California, Berkeley, in 1988 and 1990, respectively. He returned to his alma mater as an assistant professor of mechanical engineering in 1991 and was made an associate professor in 1997, a professor in 2000, the Everhart Professor of Control and Dynamical Systems in 2006, and the Everhart Professor of Control and Dynamical Systems and Bioengineering in 2009. He served as the chair of the Division of Engineering and Applied Science from 2000 until 2005 and as the director of Information Science and Technology from 2006 until 2009. Murray holds many distinctions. Among them, he is a fellow of the Institute of Electrical and Electronics Engineers, holds an honorary doctorate from Lund University, and won the 2006 Richard P. Feynman Prize for Excellence in Teaching.

Michael Ortiz, the Dotty and Dick Hayman Professor of Aeronautics and Mechanical Engineering, was cited for his "contributions to computational mechanics to advance the underpinnings of solid mechanics." He is currently the director of Caltech's Department of Energy/Predictive Science Academic Alliance Program's Center on High-Energy Density Dynamics of Materials. His research focuses on the multiscale modeling of materials in order to design and optimize novel materials.

"This is a wonderful and most pleasant surprise for me, especially given the support from colleagues and peers that it implies," Ortiz says of his election to the academy. "I regard this honor really as a recognition not only of the work done by myself, but also of the work of all my students and collaborators over the years. I am forever indebted to them."

Ortiz earned his BS in civil engineering from the Polytechnic University of Madrid, Spain, in 1977, and his MS and PhD in the same field from the University of California, Berkeley, in 1978 and 1981, respectively. He served on the faculty at Brown University from 1984 until 1995, when he accepted a professorship at Caltech. Ortiz became the Dotty and Dick Hayman Professor of Aeronautics and Mechanical Engineering in 2004. He is a fellow of the American Academy of Arts and Sciences, the U.S. Association for Computational Mechanics, and the International Association for Computational Mechanics, and he has won many prizes, including the Humboldt Research Award for Senior U.S. Scientists and the Rodney Hill Prize in Solid Mechanics.

The election of Murray and Ortiz brings Caltech's total representation in the NAE to 35 faculty members and 11 trustees. The full class of new members brings the total NAE membership to 2,250 members and 211 foreign associates. 

Writer: Kimm Fesenmaier

Creating New Quantum Building Blocks

Caltech researcher says diamond defects could serve as nodes of a quantum network

PASADENA, Calif.—Scientists have long dreamed of creating a quantum computer—a device rooted in the bizarre phenomena that transpire at the level of the very small, where quantum mechanics rules the scene. It is believed that such computers could solve, in seconds, certain problems that no conventional machine could complete in any practical amount of time.

Researchers have tried using various quantum systems, such as atoms or ions, as the basic, transistor-like units in simple quantum computation devices. Now, laying the groundwork for an on-chip optical quantum network, a team of researchers, including Andrei Faraon from the California Institute of Technology (Caltech), has shown that defects in diamond can be used as quantum building blocks that interact with one another via photons, the basic units of light.

The device is simple enough—it involves a tiny ring resonator and a tunnel-like optical waveguide, which both funnel light. Both structures, each only a few hundred nanometers wide, are etched in a diamond membrane and positioned close together atop a glass substrate. Within the resonator lies a nitrogen-vacancy center (NV center)—a defect in the structure of diamond in which a nitrogen atom replaces a carbon atom, and in which a nearby spot usually reserved for another carbon atom is simply empty. Such NV centers are photoluminescent, meaning they absorb and emit photons.

"These NV centers are like the building blocks of the network, and we need to make them interact—like having an electrical current connecting one transistor to another," explains Faraon, lead author on a paper describing the work in the New Journal of Physics. "In this case, photons do that job."

In recent years, diamond has become a heavily researched material for use in quantum photonic devices in part because the diamond lattice is able to protect impurities from excessive interactions. The so-called quietness it affords enables impurities—such as NV centers—to store information unaltered for relatively long periods of time.  

To begin their experiment, the researchers first cool the device below 10 Kelvin (−441.67 degrees Fahrenheit) and then shine green laser light on the NV center, causing it to reach an excited state and then emit red light. As the red light circles within the resonator, it constructively interferes with itself, increasing its intensity. Slowly, the light then leaks into the nearby waveguide, which channels the photons out through gratings at either end, scattering the light out of the plane of the chip.

The emitted photons have the property of being correlated, or entangled, with the NV center from which they came. This mysterious quality of entanglement, which makes two quantum states inextricably linked in such a way that any information you learn about one provides information about the other, is a necessary ingredient for quantum computation. It enables a large amount of information to be stored and processed by fewer components that take up a small amount of space.
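
In the simplest textbook picture (an illustrative two-level example, not the particular states prepared in this experiment), the NV center's spin and the photon it emits end up in a joint state of the form

```latex
|\Psi\rangle = \tfrac{1}{\sqrt{2}}\left(\,|{\uparrow}\rangle_{\mathrm{NV}}\,|0\rangle_{\mathrm{photon}}
             + |{\downarrow}\rangle_{\mathrm{NV}}\,|1\rangle_{\mathrm{photon}}\,\right),
```

so a measurement on the photon immediately fixes the state of the spin, and vice versa; neither part has a definite state on its own.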

"Right now we only have one nitrogen-vacancy center that's emitting photons, but in the future we envision creating multiple NV centers that emit photons on the same chip," Faraon says. "By measuring these photons we could create entanglement among multiple NV centers on the chip."

And that's important because, in order to make a quantum computer, you would need millions—maybe billions—of these units. "As you can see, we're just working at making one or a few," Faraon says. "But there are other applications down the line that are easier to achieve." For example, a quantum network with a couple hundred units could simulate the behavior of a complex molecule—a task that conventional computers struggle with.

Going forward, Faraon plans to investigate whether other materials can behave similarly to diamond in an optical quantum network.

In addition to Faraon, the authors on the paper, "Quantum photonic devices in single-crystal diamond," are Charles Santori, Zhihong Huang, Kai-Mei Fu, Victor Acosta, David Fattal, and Raymond Beausoleil of Hewlett-Packard Laboratories, in Palo Alto, California. Fu is now an assistant professor at the University of Washington in Seattle, Washington. The work was supported by the Defense Advanced Research Projects Agency and The Regents of the University of California.  

Writer: Kimm Fesenmaier

Jorgensen Laboratory Awarded LEED Platinum Certification

The recent renovations of the Jorgensen Laboratory included many upgrades that were designed to reflect Caltech's commitment to sustainability. Now the building has achieved LEED Platinum certification, the highest honor of the U.S. Green Building Council.

"Achieving Platinum certification on this building was particularly rewarding given the fact that the building will serve as a studio for sustainable energy research," says John Onderdonk, director of sustainability programs at Caltech.

LEED—Leadership in Energy and Environmental Design—is a voluntary program that provides verification of green building design through a survey of prerequisites and guideline credits. To obtain LEED certification, a building must earn a minimum of 40 points on a 110-point LEED rating system scale. Jorgensen received 87 points—80 is the minimum needed for Platinum certification—for its conservation features, which include a "green" roof, natural ventilation systems, use of on-campus solar photovoltaic power, and low-flow water fixtures, among other environmentally conscious details.
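
Putting those numbers together (the 40-point and 80-point thresholds are from the article; the Silver and Gold cutoffs below are the standard LEED 2009 values and are added here only for context):

```python
def leed_level(points):
    """Map a LEED 2009 score (out of 110 points) to a certification tier."""
    if points >= 80:
        return "Platinum"
    if points >= 60:
        return "Gold"        # standard cutoff, not stated in the article
    if points >= 50:
        return "Silver"      # standard cutoff, not stated in the article
    if points >= 40:
        return "Certified"
    return "Not certified"

print(leed_level(87))  # Jorgensen Laboratory -> "Platinum"
```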

Jorgensen is one of 20 LEED Platinum-certified higher-education lab buildings in the country, and one of seven in the state. It is the second higher-education lab building in the state to receive LEED Platinum certification under the current rating system. Caltech's renovation of the Linde + Robinson Lab also received LEED Platinum status last year.

The Jorgensen Lab officially opened in October 2012 and houses scientists focused on clean-energy research.

Writer: Katie Neith

Notes from the Back Row: "Engineering with Impact"

If you hit something hard enough, it will break, and the consequences can be catastrophic. A space rock roughly the size of Pasadena killed the dinosaurs when it hit the Earth at about 45,000 miles per hour, but even something as small as a bird hitting a turbine blade can bring down an airplane. The damage occurs in the blink of an eye as unimaginable pressures are fleetingly focused on the hapless chunk of rock or metal. The key to survival is to disperse those forces. But how? Caltech professor Ravi Ravichandran is trying to find out.

Guruswami "Ravi" Ravichandran is the John E. Goode, Jr., Professor of Aerospace and professor of mechanical engineering and the director of the Graduate Aerospace Laboratories at Caltech. His PhD thesis on the fracture dynamics of metals under extreme impacts, written at Brown University in 1986, remains a classic in the field.

At Caltech, Ravichandran studies impacts that pack a wallop of up to a million times the pressure of Earth's atmosphere. Such extreme pressures are actually quite mundane: a head-on collision at 65 miles per hour produces pressures of some 7,000 atmospheres during the millisecond that the vehicles' steel frames buckle. (By contrast, the pressure at the bottom of the Mariana Trench in the western Pacific, the deepest point in the world's oceans, is a mere 1,000 atmospheres.) In a typical experiment, a reconditioned naval gun from World War II shoots an aluminum projectile at a copper plate, compressing it by as much as 30 percent for a millionth of a second. Meanwhile, a laser "camera" records the ripples created by the projectile's kinetic energy as it turns into pressure waves within the copper plate.

The best way we know to dissipate these waves is to pass them through alternating layers of very stiff and very elastic materials. This is the principle behind body armor and bulletproof glass, as Ravichandran vividly demonstrated during his talk by showing a video clip produced by an armored-car company. In the clip, the company's CEO stood behind a bulletproof windshield while his assistant peppered it with three rounds from an AK-47. Spiderwebs of cracks formed in the inner layer of glass and license-plate-sized fragments of the outer layer were blasted free, but the layer of polymer sandwiched between the glass sheets stopped the slugs. If one layer of elastic is good, more layers should be even better. The logical extreme—an infinite number of layers—presents certain manufacturing challenges, so "we're extending this idea of layered media into particulate composites in order to make realistic engineering materials for shock-protection applications," Ravichandran says. Think high-tech sandbags, in other words.  
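
One way to see why alternating stiff and compliant layers helps is the standard one-dimensional result for a stress wave hitting an interface (a textbook relation offered here as an illustration, not something from the talk): the reflected fraction of the wave depends on the mismatch in acoustic impedance, the product of density and wave speed,

```latex
R = \frac{Z_2 - Z_1}{Z_2 + Z_1}, \qquad Z_i = \rho_i c_i ,
```

so every stiff-to-compliant boundary, such as glass to polymer, reflects and spreads part of the incoming pulse instead of letting all of its energy arrive at one spot at once.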

"Engineering with Impact" is available for download in HD from Caltech on iTunesU. (Episode 13)

Writer: Doug Smith

TEDxCaltech: Advancing Humanoid Robots

This week we will be highlighting the student speakers who auditioned and were selected to give five-minute talks about their brain-related research at TEDxCaltech: The Brain, a special event that will take place on Friday, January 18, in Beckman Auditorium. 

In the spirit of ideas worth spreading, TED has created a program of local, self-organized events called TEDx. Speakers are asked to give the talk of their lives. Live video coverage of the TEDxCaltech experience will be available during the event at http://tedxcaltech.caltech.edu.

When Matanya Horowitz started his undergraduate work in 2006 at the University of Colorado at Boulder, he knew that he wanted to work in robotics—mostly because he was disappointed that technology had not yet made good on his sci-fi–inspired dreams of humanoid robots.

"The best thing we had at the time was the Roomba, which is a great product, but compared to science fiction it seemed really diminutive," says Horowitz. He therefore decided to major in not just electrical engineering, but also economics, applied math, and computer science. "I thought that the answer to better robots would lie somewhere in the middle of these different subjects, and that maybe each one held a different key," he explains.

Now a doctoral student at Caltech—he earned his master's in the same four years as his multiple undergrad degrees—Horowitz is putting his range of academic experience to work in the labs of engineers Joel Burdick and John Doyle to help advance robotics and intelligent systems. As a member of the control and dynamical systems group, he is active in several Defense Advanced Research Projects Agency (DARPA) challenges that seek to develop better control mechanisms for robotic arms, as well as humanoid robots that can perform human-like tasks in dangerous situations, such as disabling bombs or entering nuclear power plants during an emergency.

But beneficial advances in robotics also bring challenges. Inspired as a kid by the robot tales of Isaac Asimov, Horowitz has long been interested in how society might be affected by robots.

"As I began programming just on my own, I saw how easy it was to create something that at least seemed to act with intelligence," he says. "It was interesting to me that we were so close to humanoid robots and that doing these things was so easy. But we also have all these implications we need to think about."

Horowitz's TEDx talk will explore some of the challenges of building and controlling something that needs to interact in the physical world. He says he's thrilled to have the opportunity to speak at TEDx, not just for the chance to talk to a general audience about his work, but also to hopefully inspire others by his enthusiasm for the field.

"Recently, there has been such a monumental shift from what robots were capable of even just five years ago, and people should be really excited about this," says Horowitz. "We've been hearing about robots for 30, 40 years—they've always been 'right around the corner.' But now we can finally point to one and say, 'Here it is, literally coming around a corner.'"

 

 

Writer: Katie Neith

Faulty Behavior

New earthquake fault models show that "stable" zones may contribute to the generation of massive earthquakes

PASADENA, Calif.—In an earthquake, ground motion is the result of waves emitted when the two sides of a fault move—or slip—rapidly past each other, with an average relative speed of about three feet per second. Not all fault segments move so quickly, however—some slip slowly, through a process called creep, and are considered to be "stable," or not capable of hosting rapid earthquake-producing slip.  One common hypothesis suggests that such creeping fault behavior is persistent over time, with currently stable segments acting as barriers to fast-slipping, shake-producing earthquake ruptures. But a new study by researchers at the California Institute of Technology (Caltech) and the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) shows that this might not be true.

"What we have found, based on laboratory data about rock behavior, is that such supposedly stable segments can behave differently when an earthquake rupture penetrates into them. Instead of arresting the rupture as expected, they can actually join in and hence make earthquakes much larger than anticipated," says Nadia Lapusta, professor of mechanical engineering and geophysics at Caltech and coauthor of the study, published January 9 in the journal Nature.

She and her coauthor, Hiroyuki Noda, a scientist at JAMSTEC and previously a postdoctoral scholar at Caltech, hypothesize that this is what occurred in the 2011 magnitude 9.0 Tohoku-Oki earthquake, which was unexpectedly large.

Fault slip, whether fast or slow, results from the interaction between the stresses acting on the fault and friction, or the fault's resistance to slip. Both the local stress and the resistance to slip depend on a number of factors such as the behavior of fluids permeating the rocks in the earth's crust. So, the research team formulated fault models that incorporate laboratory-based knowledge of complex friction laws and fluid behavior, and developed computational procedures that allow the scientists to numerically simulate how those model faults will behave under stress.

"The uniqueness of our approach is that we aim to reproduce the entire range of observed fault behaviors—earthquake nucleation, dynamic rupture, postseismic slip, interseismic deformation, patterns of large earthquakes—within the same physical model; other approaches typically focus only on some of these phenomena," says Lapusta.

In addition to reproducing a range of behaviors in one model, the team also assigned realistic fault properties to the model faults, based on previous laboratory experiments on rock materials from an actual fault zone—the site of the well-studied 1999 magnitude 7.6 Chi-Chi earthquake in Taiwan.

"In that experimental work, rock materials from boreholes cutting through two different parts of the fault were studied, and their properties were found to be conceptually different," says Lapusta. "One of them had so-called velocity-weakening friction properties, characteristic of earthquake-producing fault segments, and the other one had velocity-strengthening friction, the kind that tends to produce stable creeping behavior under tectonic loading. However, these 'stable' samples were found to be much more susceptible to dynamic weakening during rapid earthquake-type motions, due to shear heating."

Lapusta and Noda used their modeling techniques to explore the consequences of having two fault segments with such lab-determined fault-property combinations. They found that the ostensibly stable area would indeed occasionally creep, and often stop seismic events, but not always. From time to time, dynamic rupture would penetrate that area in just the right way to activate dynamic weakening, resulting in massive slip. They believe that this is what happened in the Chi-Chi earthquake; indeed, the quake's largest slip occurred in what was believed to be the "stable" zone.
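
The velocity-weakening versus velocity-strengthening distinction described above has a compact expression in the standard rate-and-state friction framework on which models like this are built (shown here in its simplified steady-state form; the study itself uses more elaborate laws that also capture dynamic weakening from shear heating):

```latex
\mu_{\mathrm{ss}}(V) = \mu_0 + (a - b)\,\ln\!\left(\frac{V}{V_0}\right),
```

where a steady-state friction coefficient that drops as slip velocity V rises (a - b < 0) marks a velocity-weakening, earthquake-prone segment, while one that grows with V (a - b > 0) marks a velocity-strengthening segment that ordinarily creeps stably.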

"We find that the model qualitatively reproduces the behavior of the 2011 magnitude 9.0 Tohoku-Oki earthquake as well, with the largest slip occurring in a place that may have been creeping before the event," says Lapusta. "All of this suggests that the underlying physical model, although based on lab measurements from a different fault, may be qualitatively valid for the area of the great Tohoku-Oki earthquake, giving us a glimpse into the mechanics and physics of that extraordinary event."

If creeping segments can participate in large earthquakes, it would mean that much larger events than seismologists currently anticipate in many areas of the world are possible. That means, Lapusta says, that the seismic hazard in those areas may need to be reevaluated.

For example, a creeping segment separates the southern and northern parts of California's San Andreas Fault. Seismic hazard assessments assume that this segment would stop an earthquake from propagating from one region to the other, limiting the scope of a San Andreas quake. However, the team's findings imply that a much larger event may be possible than is now anticipated—one that might involve both the Los Angeles and San Francisco metropolitan areas.

"Lapusta and Noda's realistic earthquake fault models are critical to our understanding of earthquakes—knowledge that is essential to reducing the potential catastrophic consequences of seismic hazards," says Ares Rosakis, chair of Caltech's division of engineering and applied science. "This work beautifully illustrates the way that fundamental, interdisciplinary research in the mechanics of seismology at Caltech is having a positive impact on society."

Now that they've been proven to qualitatively reproduce the behavior of the Tohoku-Oki quake, the models may be useful for exploring future earthquake scenarios in a given region, "including extreme events," says Lapusta. Such realistic fault models, she adds, may also be used to study how earthquakes may be affected by additional factors such as man-made disturbances resulting from geothermal energy harvesting and CO2 sequestration. "We plan to further develop the modeling to incorporate realistic fault geometries of specific well-instrumented regions, like Southern California and Japan, to better understand their seismic hazard."

"Creeping fault segments can turn from stable to destructive due to dynamic weakening" appears in the January 9 issue of the journal Nature. Funding for this research was provided by the National Science Foundation; the Southern California Earthquake Center; the Gordon and Betty Moore Foundation; and the Ministry of Education, Culture, Sports, Science and Technology in Japan.

Writer: Katie Neith

Mory Gharib Named NAI Charter Fellow

Caltech's Mory Gharib has been named a charter fellow of the National Academy of Inventors (NAI).

According to the NAI, election to fellow status is a "high professional distinction accorded to academic inventors who have demonstrated a highly prolific spirit of innovation in creating or facilitating outstanding inventions that have made a tangible impact on quality of life, economic development, and the welfare of society."

Gharib (PhD '83) is the Hans W. Liepmann Professor of Aeronautics and professor of bioinspired engineering at Caltech. He is also the Institute's vice provost for research. Gharib's research group at Caltech studies examples from the natural world—fins, wings, blood vessels, embryonic structures, and entire organisms—to gain inspiration for inventions that have practical uses in power generation, drug delivery, dentistry, and more. Gharib is responsible for more than 59 U.S. patents.

Gharib will be formally inducted as a charter fellow during the second annual conference of the National Academy of Inventors in Tampa, Florida, in February.

Academic inventors and innovators elected to the rank of NAI Charter Fellow were nominated by their peers "for outstanding contributions to innovation in areas such as patents and licensing, innovative discovery and technology, significant impact on society, and support and enhancement of innovation."

"The natural world serves as the inspiration for many of my inventions," Gharib says. "But it is also inspiring to have been selected as a charter fellow of the NAI and to be included in a group with so many other leading innovators."

Writer: Brian Bell
