Caltech Senior Wins Churchill Scholarship

Caltech senior Andrew Meng has been selected to receive a Churchill Scholarship, which will fund his graduate studies at the University of Cambridge for the next academic year. Meng, a chemistry and physics major, was one of only 14 students nationwide chosen to receive the scholarship this year.

Taking full advantage of Caltech's strong tradition of undergraduate research, Meng has worked since his freshman year in the lab of Nate Lewis, the George L. Argyros Professor and professor of chemistry. Over the course of three Summer Undergraduate Research Fellowships (SURFs) and several terms in the lab, Meng has investigated various applications of silicon microwire solar cells. Lewis's group has shown that arrays of these ultrathin wires hold promise as a cost-effective way to construct solar cells that can convert light into electricity with relatively high efficiencies.

Meng, who grew up in Baton Rouge, Louisiana, first studied some of the fundamental limitations of silicon microwires in fuel-forming reactions. In these applications, it is believed that the microwires can harness energy from the sun to drive chemical reactions such as the splitting of water into hydrogen and oxygen. Meng's work showed that the geometry of the microwires would not limit the fuel-forming reaction as some had expected.

More recently, Meng has turned his attention to using silicon microwires to generate electricity. He is developing an inexpensive electrical contact to silicon microwire chips, using a method that facilitates scale-up and can be applied to flexible solar cells.

"Andrew is one of the best undergraduates that I have had the pleasure of working with in over a decade," says Lewis. "He excels in academics, in leadership, and in research. I believe he is truly worthy of the distinction of receiving a Churchill Fellowship."

As he pursues a Master of Philosophy degree in chemistry at the University of Cambridge over the next year, Meng will work in the group of theoretical chemist Michiel Sprik. He plans to apply computational methods to his studies of fuel-forming reactions using solar-energy materials.

"I'm very grateful for this opportunity to learn a computational perspective, since up until now I've been doing experimental work," Meng says. "I'm very excited, and most importantly, I'd like to thank Caltech and all of my mentors and co-mentors, without whom I would not be in this position today."

According to the Winston Churchill Foundation's website, the Churchill Scholarship program "offers American citizens of exceptional ability and outstanding achievement the opportunity to pursue graduate studies in engineering, mathematics, or the sciences at Cambridge. One of the newer colleges at the University of Cambridge, Churchill College was built as the national and Commonwealth tribute to Sir Winston, who in the years after the Second World War presciently recognized the growing importance of science and technology for prosperity and security. Churchill College focuses on the sciences, engineering, and mathematics." The first Churchill Scholarships were awarded in 1963, and this year's recipients bring the total to 479 Churchill Scholars.

Each year, a select group of universities, including Caltech, is eligible to nominate students for consideration for the scholarship. Meng is the seventh Caltech student to have won the award since the year 2000. A group of Caltech faculty members and researchers works with Lauren Stolper, director of fellowships advising, to identify and nominate candidates. This year, the members of the group were Churchill Scholar alumni John Brady, the Chevron Professor of Chemical Engineering and professor of mechanical engineering; Mitchio Okumura, professor of chemical physics; Alan Cummings, senior research scientist; and Eric Rains, professor of mathematics.

Writer: Kimm Fesenmaier
News Type: In Our Community

Sorting Out Stroking Sensations

Caltech biologists find individual neurons in the skin that react to massage

PASADENA, Calif.—The skin is a human being's largest sensory organ, helping to distinguish between a pleasant contact, like a caress, and a negative sensation, like a pinch or a burn. Previous studies have shown that these sensations are carried to the brain by different types of sensory neurons that have nerve endings in the skin. Only a few of those neuron types have been identified, however, and most of those detect painful stimuli. Now biologists at the California Institute of Technology (Caltech) have identified in mice a specific class of skin sensory neurons that reacts to an apparently pleasurable stimulus.

More specifically, the team, led by David J. Anderson, Seymour Benzer Professor of Biology at Caltech, was able to pinpoint individual neurons that were activated by massage-like stroking of the skin. The team's results are outlined in the January 31 issue of the journal Nature.

"We've known a lot about the neurons that detect things that make us hurt or feel pain, but we've known much less about the identity of the neurons that make us feel good when they are stimulated," says Anderson, who is also an investigator with the Howard Hughes Medical Institute. "Generally it's a lot easier to study things that are painful because animals have evolved to become much more sensitive to things that hurt or are fearful than to things that feel good. Showing a positive influence of something on an animal model is not that easy."

In fact, the researchers had to develop new methods and technologies to get their results. First, Sophia Vrontou, a postdoctoral fellow in Anderson's lab and the lead author of the study, developed a line of genetically modified mice that had tags, or molecular markers, on the neurons that the team wanted to study. Then she placed a molecule in this specific population of neurons that fluoresced, or lit up, when the neurons were activated.

"The next step was to figure out a way of recording those flashes of light in those neurons in an intact mouse while stroking and poking its body," says Anderson. "We took advantage of the fact that these sensory neurons are bipolar in the sense that they send one branch into the skin that detects stimuli, and another branch into the spinal cord to relay the message detected in the skin to the brain."

The team obtained the needed data by placing the mouse under a special microscope with very high magnification and recording the level of fluorescent light in the fibers of neurons in the spinal cord as the animal was stroked, poked, tickled, and pinched. Through a painstaking process of applying stimuli to one tiny area of the animal's body at a time, they were able to confirm that certain neurons lit up only when stroked. A different class of neurons, by contrast, was activated by poking or pinching the skin, but not by stroking.

"Massage-like stroking is a stimulus that, if we were to experience it, would feel good to us, but as scientists we can't just assume that because something feels good to us, it has to also feel good to an animal," says Anderson. "So we then had to design an experiment to show that artificially activating just these neurons—without actually stroking the mouse—felt good to the mouse."

The researchers did this by creating a box that contained left, right, and center rooms connected by little doors. The left and right rooms were different enough that a mouse could distinguish them through smell, sight, and touch. In the left room, the mouse received an injection of a drug that selectively activated the neurons shown to detect massage-like stroking. In the room on the right, the mouse received a control injection of saline. After a few sessions in each outer room, the animal was placed in the center, with the doors open to see which room it preferred. It clearly favored the room where the massage-sensitive neurons were activated. According to Anderson, this was the first time anyone has used this type of conditioned place-preference experiment to show that activating a specific population of neurons in the skin can actually make an animal experience a pleasurable or rewarding state—in effect, to "feel good."

The team's findings are significant for several reasons, he says. First, the methods that they developed give scientists who have discovered a new kind of neuron a way to find out what activates that neuron in the skin.

"Since there are probably dozens of different kinds of neurons that innervate the skin, we hope this will advance the field by making it possible to figure out all of the different kinds of neurons that detect various types of stimuli," explains Anderson. The second reason the results are important, he says, "is that now that we know these neurons detect massage-like stimuli, the results raise new sets of questions about which molecules in those neurons help the animal detect stroking but not poking."

The other benefit of their new methods, Anderson says, is that they will allow researchers to, in principle, trace the circuitry from those neurons up into the brain to ask why and how activating these neurons makes the animal feel good, whereas activating other neurons that are literally right next to them in the skin makes the animal feel bad.

"We are now most interested in how these neurons communicate to the brain through circuits," says Anderson. "In other words, what part of the circuit in the brain is responsible for the good feeling that is apparently produced by activating these neurons? It may seem frivolous to be identifying massage neurons in a mouse, but it could be that some good might come out of this down the road."

Allan M. Wong, a senior research fellow in biology at Caltech, and Kristofer K. Rau and Richard Koerber from the University of Pittsburgh were also coauthors on the Nature paper, "Genetic identification of C fibers that detect massage-like stroking of hairy skin in vivo." Funding for this research was provided by the National Institutes of Health, the Human Frontiers Science Program, and the Helen Hay Whitney Foundation.

Writer: Katie Neith
News Type: Research News

TEDxCaltech: Advancing Humanoid Robots

This week we will be highlighting the student speakers who auditioned and were selected to give five-minute talks about their brain-related research at TEDxCaltech: The Brain, a special event that will take place on Friday, January 18, in Beckman Auditorium. 

In the spirit of ideas worth spreading, TED has created a program of local, self-organized events called TEDx. Speakers are asked to give the talk of their lives. Live video coverage of the TEDxCaltech experience will be available during the event at http://tedxcaltech.caltech.edu.

When Matanya Horowitz started his undergraduate work in 2006 at the University of Colorado at Boulder, he knew that he wanted to work in robotics—mostly because he was disappointed that technology had not yet made good on his sci-fi–inspired dreams of humanoid robots.

"The best thing we had at the time was the Roomba, which is a great product, but compared to science fiction it seemed really diminutive," says Horowitz. He therefore decided to major in not just electrical engineering, but also economics, applied math, and computer science. "I thought that the answer to better robots would lie somewhere in the middle of these different subjects, and that maybe each one held a different key," he explains.

Now a doctoral student at Caltech—he earned his master's in the same four years as his multiple undergraduate degrees—Horowitz is putting his range of academic experience to work in the labs of engineers Joel Burdick and John Doyle to help advance robotics and intelligent systems. As a member of the control and dynamical systems group, he is active in several Defense Advanced Research Projects Agency (DARPA) challenges that seek to develop better control mechanisms for robotic arms, as well as humanoid robots that can perform human-like tasks in dangerous situations, such as disabling bombs or entering nuclear power plants during an emergency.

But beneficial advances in robotics also bring challenges. Inspired as a kid by the robot tales of Isaac Asimov, Horowitz has long been interested in how society might be affected by robots.

"As I began programming just on my own, I saw how easy it was to create something that at least seemed to act with intelligence," he says. "It was interesting to me that we were so close to humanoid robots and that doing these things was so easy. But we also have all these implications we need to think about."

Horowitz's TEDx talk will explore some of the challenges of building and controlling something that needs to interact in the physical world. He says he's thrilled to have the opportunity to speak at TEDx, not just for the chance to talk to a general audience about his work, but also to hopefully inspire others by his enthusiasm for the field.

"Recently, there has been such a monumental shift from what robots were capable of even just five years ago, and people should be really excited about this," says Horowitz. "We've been hearing about robots for 30, 40 years—they've always been 'right around the corner.' But now we can finally point to one and say, 'Here it is, literally coming around a corner.'"

Writer: Katie Neith
News Type: In Our Community

TEDxCaltech: If You Click a Cookie with a Mouse

This week we will be highlighting the student speakers who auditioned and were selected to give five-minute talks about their brain-related research at TEDxCaltech: The Brain, a special event that will take place on Friday, January 18, in Beckman Auditorium. 

In the spirit of ideas worth spreading, TED has created a program of local, self-organized events called TEDx. Speakers are asked to give the talk of their lives. Live video coverage of the TEDxCaltech experience will be available during the event at http://tedxcaltech.caltech.edu.

When offered spinach or a cookie, how do you decide which to eat? Do you go for the healthy choice or the tasty one? To study the science of decision making, researchers in the lab of Caltech neuroeconomist Antonio Rangel analyze what happens inside people's brains as they choose between various kinds of food. The researchers typically use functional magnetic resonance imaging (fMRI) to measure the changes in oxygen flow through the brain; these changes serve as proxies for spikes or dips in brain activity. Recently, however, investigators have started using a new technique that may better tease out how you choose between the spinach or the cookie—a decision that's often made in a fraction of a second.

While fMRI is a powerful method, it can only measure changes in brain activity down to the scale of a second or so. "That's not fast enough because these decisions are made sometimes within half a second," says Caltech senior Joy Lu, who will be talking about her research in Rangel's lab at TEDxCaltech. Instead of using fMRI, Lu—along with postdoctoral scholar Cendri Hutcherson and graduate student Nikki Sullivan—turned to the plain old computer mouse.

During the experiments—which are preliminary, as the researchers are still conducting and refining them—volunteers rate 250 kinds of food for healthiness and tastiness. The choices range from spinach and cookies to broccoli and chips. Then, the volunteers are given a choice between two of those items, represented by pictures on a computer screen. When they decide which option they want, they click with their mouse. But while they mull over their choices, the paths of their mouse cursor are being tracked—the idea being that the cursor paths may reveal how the volunteers arrive at their final decisions.

For example, if the subject initially feels obligated to be healthy, the cursor may hover over the spinach a moment before finally settling on the cookie. Or, if the person is immediately drawn to the sweet treat before realizing that health is a better choice, the cursor may hover over the cookie first.

Lu, Hutcherson, and Sullivan are using computer models to find cursor-path patterns or trends that may offer insight into the factors that influence such decisions. Do the paths differ between those who value health over taste and those who favor taste more?

Although the researchers are still refining their computer algorithms and continuing their experiments, they have some preliminary results. They found that with many people, for example, the cursor first curves toward one choice before ending up at the other. The time it takes for someone's health consciousness to kick in seems to be longer than the time it takes for people to succumb to cravings for something delicious.

After graduation, Lu plans to go to graduate school in marketing, where she'll use not only neuroscience techniques but also field studies to investigate consumer behavior. She might even compare the two methods. "Using neuroscience in marketing is a very new thing," she says. "That's what draws me toward it. We can't answer all the questions we want to answer just using field studies. You have to look at what's going on in a person's mind."

Writer: Marcus Woo
News Type: In Our Community

Research Update: Atomic Motions Help Determine Temperatures Inside Earth

In December 2011, Caltech mineral-physics expert Jennifer Jackson reported that she and a team of researchers had used diamond-anvil cells to compress tiny samples of iron—the main element of the earth's core. By squeezing the samples to reproduce the extreme pressures felt at the core, the team was able to get a closer estimate of the melting point of iron. At the time, the measurements that the researchers made were unprecedented in detail. Now, they have taken that research one step further by adding infrared laser beams to the mix.

The lasers are a source of heat that, when sent through the compressed iron samples, warm them up to the point of melting. And because the earth's core consists of a solid inner region surrounded by a liquid outer shell, the melting temperature of iron at high pressure provides an important reference point for the temperature distribution within the earth's core.

"This is the first time that anyone has combined Mössbauer spectroscopy and heating lasers to detect melting in compressed samples," says Jackson, a professor of mineral physics at Caltech and lead author of a recent paper in the journal Earth and Planetary Science Letters that outlined the team's new method. "What we found is that iron, compared to previous studies, melts at higher temperatures than what has been reported in the past."

Earlier research by other teams done at similar compressions—around 80 gigapascals—reported a range of possible melting points that topped out around 2600 Kelvin (K). Jackson's latest study indicates an iron melting point at this pressure of approximately 3025 K, suggesting that the earth's core is likely warmer than previously thought.

Knowing more about the temperature, composition, and behavior of the earth's core is essential to understanding the dynamics of the earth's interior, including the processes responsible for maintaining the earth's magnetic field. While iron makes up roughly 90 percent of the core, the rest is thought to be nickel and light elements—like silicon, sulfur, or oxygen—that are alloyed, or mixed, with the iron.

To develop and perform these experiments, Jackson worked closely with the Inelastic X-ray and Nuclear Resonant Scattering Group at the Advanced Photon Source at Argonne National Laboratory in Illinois. By laser heating the iron sample in a diamond-anvil cell and monitoring the dynamics of the iron atoms via a technique called synchrotron Mössbauer spectroscopy (SMS), the researchers were able to pinpoint a melting temperature for iron at a given pressure. The SMS signal is sensitively related to the dynamical behavior of the atoms, and can therefore detect when a group of atoms is in a molten state.

She and her team have begun experiments on iron alloys at even higher pressures, using their new approach.

"What we're working toward is a very tight constraint on the temperature of the earth's core," says Jackson. "A number of important geophysical quantities, such as the movement and expansion of materials at the base of the mantle, are dictated by the temperature of the earth's core."

"Our approach is a very elegant way to look at melting because it takes advantage of the physical principle of recoilless absorption of X-rays by nuclear resonances—the basis of the Mössbauer effect—for which Rudolf Mössbauer was awarded the Nobel Prize in Physics," says Jackson. "This particular approach to study melting has not been done at high pressures until now."

Jackson's findings not only tell us more about our own planet, but could indicate that other planets with iron-rich cores, like Mercury and Mars, may have warmer internal temperatures as well.

Her paper, "Melting of compressed iron by monitoring atomic dynamics," was published in Earth and Planetary Science Letters on January 8, 2013.

Writer: Katie Neith
News Type: Research News

TEDxCaltech: Surmounting the Blood-Brain Barrier

This week we will be highlighting the student speakers who auditioned and were selected to give five-minute talks about their brain-related research at TEDxCaltech: The Brain, a special event that will take place on Friday, January 18, in Beckman Auditorium. 

In the spirit of ideas worth spreading, TED has created a program of local, self-organized events called TEDx. Speakers are asked to give the talk of their lives. Live video coverage of the TEDxCaltech experience will be available during the event at http://tedxcaltech.caltech.edu.

The brain needs its surroundings to be just right. That is, unlike some internal organs, such as the liver, which can process just about anything that comes its way, the brain needs to be protected and to have a chemical environment with the right balance of proteins, sugars, salts, and other metabolites. 

That fact stood out to Caltech MD/PhD candidate and TEDxCaltech speaker Devin Wiley when he was studying medicine at the Keck School of Medicine of USC. "In certain cases, one bacterium detected in the brain can be a medical emergency," he says. "So the microenvironment needs to be highly protected and regulated for the brain to function correctly."

Fortunately, a semipermeable divide, known as the blood-brain barrier, is very good at maintaining such an environment for the brain. This barricade—made up of tightly packed blood-vessel cells—is effective at precisely controlling which molecules get into and out of the brain. Because the blood-brain barrier regulates the molecular traffic into the brain, it presents a significant challenge for anyone wanting to deliver therapeutics to the brain. 

At Caltech, Wiley has been working with his advisor, Mark Davis, the Warren and Katharine Schlinger Professor of Chemical Engineering, to develop a work-around—a way to sneak therapeutics past the barrier and into the brain to potentially treat neurologic diseases such as Alzheimer's and Parkinson's. The scientists' strategy is to deliver large-molecule therapeutics (which are being developed by the Davis lab as well as other research groups) tucked inside nanoparticles that have proteins attached to their surface. These proteins will bind specifically to receptors on the blood-brain barrier, allowing the nanoparticles and their therapeutic cargo to be shuttled across the barrier and released into the brain.

"In essence, this is like a Trojan horse," Wiley explains. "You're tricking the blood-brain barrier into transporting drugs to the brain that normally wouldn't get in."

During his five-minute TEDxCaltech talk on Friday, January 18, Wiley will describe this approach and his efforts to design nanoparticles that can transport and release therapeutics into the brain.

For Wiley, the issue of delivering therapeutics to the brain is more than a fascinating research problem. His grandmother recently passed away from Alzheimer's disease, and his wife's grandmother also suffers from the neurodegenerative disorder.

"This is something that affects a lot of people," Wiley says. "Treatments for cardiovascular diseases, cancer, and infectious diseases are really improving. However, better treatments for brain diseases are not being discovered as quickly. So what are the issues? I want to tell the story of one of them."

Writer: Kimm Fesenmaier
News Type: In Our Community

A Cloudy Mystery

A puzzling cloud near the galaxy's center may hold clues to how stars are born

PASADENA, Calif.—It's the mystery of the curiously dense cloud. And astronomers at the California Institute of Technology (Caltech) are on the case.

Near the crowded galactic center, where billowing clouds of gas and dust cloak a supermassive black hole three million times as massive as the sun—a black hole whose gravity is strong enough to grip stars that are whipping around it at thousands of kilometers per second—one particular cloud has baffled astronomers. Indeed, the cloud, dubbed G0.253+0.016, defies the rules of star formation.

In infrared images of the galactic center, the cloud—which is 30 light-years long—appears as a bean-shaped silhouette against a bright backdrop of dust and gas glowing in infrared light. The cloud's darkness means it is dense enough to block light.

According to conventional wisdom, clouds of gas that are this dense should clump up to create pockets of even denser material that collapse due to their own gravity and eventually form stars. One such gaseous region famed for its prodigious star formation is the Orion Nebula. And yet, although the galactic-center cloud is 25 times denser than Orion, only a few stars are being born there—and even then, they are small. In fact, the Caltech astronomers say, its star-formation rate is 45 times lower than what astronomers might expect from such a dense cloud.

"It's a very dense cloud and it doesn't form any massive stars—which is very weird," says Jens Kauffmann, a senior postdoctoral scholar at Caltech.

In a series of new observations, Kauffmann, along with Caltech postdoctoral scholar Thushara Pillai and Qizhou Zhang of the Harvard-Smithsonian Center for Astrophysics, has discovered why: not only does the cloud lack the necessary clumps of denser gas, but it is swirling so fast that it can't settle down to collapse into stars.

The results, which show that star formation may be more complex than previously thought and that the presence of dense gas does not automatically imply a region where such formation occurs, may help astronomers better understand the process.

The team presented their findings—which have been recently accepted for publication in the Astrophysical Journal Letters—at the 221st meeting of the American Astronomical Society in Long Beach, California.

To determine whether the cloud contained clumps of denser gas, called dense cores, the team used the Submillimeter Array (SMA), a collection of eight radio telescopes on top of Mauna Kea in Hawaii. In one possible scenario, the cloud does contain these dense cores, which are roughly 10 times denser than the rest of the cloud, but strong magnetic fields or turbulence in the cloud disturbs them, thus preventing them from turning into full-fledged stars.

However, by observing the dust mixed into the cloud's gas and measuring N2H+—an ion that can only exist in regions of high density and is therefore a marker of very dense gas—the astronomers found hardly any dense cores. "That was very surprising," Pillai says. "We expected to see a lot more dense gas."

Next, the astronomers wanted to see if the cloud is being held together by its own gravity—or if it is swirling so fast that it is on the verge of flying apart. If it is churning too fast, it can't form stars. Using the Combined Array for Research in Millimeter-wave Astronomy (CARMA)—a collection of 23 radio telescopes in eastern California run by a consortium of institutions, of which Caltech is a member—the astronomers measured the velocities of the gas in the cloud and found that they are up to 10 times higher than those normally seen in similar clouds. This particular cloud, the astronomers found, is barely held together by its own gravity. In fact, it may soon fly apart.
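The "barely held together by its own gravity" argument can be made concrete with the standard virial parameter from star-formation theory, which compares a cloud's internal motions to its self-gravity. The sketch below uses round placeholder numbers, not the measurements from this paper.

```python
# Illustrative back-of-the-envelope check (not the team's analysis):
# the virial parameter alpha = 5 * sigma^2 * R / (G * M) compares a
# cloud's kinetic energy to its gravitational binding energy.
# alpha << 1 means strongly bound; alpha ~ 1 means marginally bound;
# alpha >> 1 means the cloud should fly apart. The inputs below are
# round placeholder values, not numbers from the paper.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
PARSEC = 3.086e16      # metres per parsec

def virial_parameter(sigma_kms, radius_pc, mass_msun):
    sigma = sigma_kms * 1e3        # velocity dispersion in m/s
    radius = radius_pc * PARSEC    # cloud radius in metres
    mass = mass_msun * M_SUN       # cloud mass in kg
    return 5 * sigma**2 * radius / (G * mass)

# A quiescent cloud with a ~1 km/s velocity dispersion is deep in the
# gravitationally bound regime (alpha well below 1):
print(virial_parameter(1.0, 3.0, 1e5))
# The same cloud with gas moving five times faster is only marginally
# bound (alpha approaching 1):
print(virial_parameter(5.0, 3.0, 1e5))
```

Because the virial parameter scales with the square of the velocity dispersion, gas velocities several times higher than normal push a cloud from firmly bound toward the verge of flying apart, which is the behavior the CARMA measurements revealed.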

The CARMA data revealed yet another surprise: the cloud is full of silicon monoxide (SiO), which is only present in clouds where streaming gas collides with and smashes apart dust grains, releasing the molecule. Typically, clouds only contain a smattering of the compound. It is usually observed when gas flowing out from young stars plows back into the cloud from which the stars were born. But the extensive amount of SiO in the galactic-center cloud suggests that it may consist of two colliding clouds, whose impact sends shockwaves throughout the galactic-center cloud. "To see such shocks on such large scales is very surprising," Pillai says.

G0.253+0.016 may eventually be able to make stars, but to do so, the researchers say, it will need to settle down so that it can build dense cores, a process that could take several hundred thousand years. But during that time, the cloud will have traveled a great distance around the galactic center, and it may crash into other clouds or be yanked apart by the gravitational pull of the galactic center. In such a disruptive environment, the cloud may never give birth to stars.

The findings also further muddle another mystery of the galactic center: the presence of young star clusters. The Arches Cluster, for example, contains about 150 bright, massive, young stars, which only live for a few million years. Because that is too short an amount of time for the stars to have formed elsewhere and migrated to the galactic center, they must have formed at their current location. Astronomers thought this occurred in dense clouds like G0.253+0.016. If not there, then where do the clusters come from?

The astronomers' next step is to study similarly dense clouds around the galactic center. The team has just completed a new survey with the SMA and is continuing another with CARMA. This year, they will also use the Atacama Large Millimeter Array (ALMA) in Chile's Atacama Desert—the largest and most advanced millimeter telescope in the world—to continue their research program, which the ALMA proposal committee has rated a top priority for 2013.

The title of the Astrophysical Journal Letters paper is, "The galactic center cloud G0.253+0.016: a massive dense cloud with low star formation potential." This research was supported by the National Science Foundation.

Writer: Marcus Woo
News Type: Research News

Faulty Behavior

New earthquake fault models show that "stable" zones may contribute to the generation of massive earthquakes

PASADENA, Calif.—In an earthquake, ground motion is the result of waves emitted when the two sides of a fault move—or slip—rapidly past each other, with an average relative speed of about three feet per second. Not all fault segments move so quickly, however—some slip slowly, through a process called creep, and are considered to be "stable," or not capable of hosting rapid earthquake-producing slip. One common hypothesis suggests that such creeping fault behavior is persistent over time, with currently stable segments acting as barriers to fast-slipping, shake-producing earthquake ruptures. But a new study by researchers at the California Institute of Technology (Caltech) and the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) shows that this might not be true.

"What we have found, based on laboratory data about rock behavior, is that such supposedly stable segments can behave differently when an earthquake rupture penetrates into them. Instead of arresting the rupture as expected, they can actually join in and hence make earthquakes much larger than anticipated," says Nadia Lapusta, professor of mechanical engineering and geophysics at Caltech and coauthor of the study, published January 9 in the journal Nature.

She and her coauthor, Hiroyuki Noda, a scientist at JAMSTEC and previously a postdoctoral scholar at Caltech, hypothesize that this is what occurred in the 2011 magnitude 9.0 Tohoku-Oki earthquake, which was unexpectedly large.

Fault slip, whether fast or slow, results from the interaction between the stresses acting on the fault and friction, or the fault's resistance to slip. Both the local stress and the resistance to slip depend on a number of factors such as the behavior of fluids permeating the rocks in the earth's crust. So, the research team formulated fault models that incorporate laboratory-based knowledge of complex friction laws and fluid behavior, and developed computational procedures that allow the scientists to numerically simulate how those model faults will behave under stress.

"The uniqueness of our approach is that we aim to reproduce the entire range of observed fault behaviors—earthquake nucleation, dynamic rupture, postseismic slip, interseismic deformation, patterns of large earthquakes—within the same physical model; other approaches typically focus only on some of these phenomena," says Lapusta.

In addition to reproducing a range of behaviors in one model, the team also assigned realistic fault properties to the model faults, based on previous laboratory experiments on rock materials from an actual fault zone—the site of the well-studied 1999 magnitude 7.6 Chi-Chi earthquake in Taiwan.

"In that experimental work, rock materials from boreholes cutting through two different parts of the fault were studied, and their properties were found to be conceptually different," says Lapusta. "One of them had so-called velocity-weakening friction properties, characteristic of earthquake-producing fault segments, and the other one had velocity-strengthening friction, the kind that tends to produce stable creeping behavior under tectonic loading. However, these 'stable' samples were found to be much more susceptible to dynamic weakening during rapid earthquake-type motions, due to shear heating."
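The velocity dependence Lapusta describes is commonly modeled with rate-and-state friction laws. The sketch below is an illustration of that general idea, not the study's actual model: the steady-state friction coefficient follows mu0 + (a−b)·ln(v/v0), where a negative (a−b) means friction drops as slip speeds up (velocity weakening, the earthquake-prone case) and a positive (a−b) means it rises (velocity strengthening, the creeping case). All parameter values are assumed for illustration.

```python
import math

def steady_state_friction(v, mu0=0.6, a_minus_b=-0.004, v0=1e-6):
    """Steady-state rate-and-state friction: mu = mu0 + (a-b)*ln(v/v0).

    a-b < 0: velocity weakening (earthquake-producing segments).
    a-b > 0: velocity strengthening (stably creeping segments).
    Parameter values here are illustrative assumptions only.
    """
    return mu0 + a_minus_b * math.log(v / v0)

# A velocity-weakening patch grows weaker as slip accelerates from
# slow creep (~1 micron/s) to seismic speeds (~1 m/s):
slow, fast = 1e-6, 1.0  # slip rates in m/s
print(steady_state_friction(slow) > steady_state_friction(fast))  # True
```

The "dynamic weakening" the experiments revealed is an additional, much stronger drop in resistance at seismic slip rates (for example from shear heating), which is what lets a nominally velocity-strengthening segment join a rupture.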

Lapusta and Noda used their modeling techniques to explore the consequences of having two fault segments with such lab-determined fault-property combinations. They found that the ostensibly stable area would indeed occasionally creep, and often stop seismic events, but not always. From time to time, dynamic rupture would penetrate that area in just the right way to activate dynamic weakening, resulting in massive slip. They believe that this is what happened in the Chi-Chi earthquake; indeed, the quake's largest slip occurred in what was believed to be the "stable" zone.

"We find that the model qualitatively reproduces the behavior of the 2011 magnitude 9.0 Tohoku-Oki earthquake as well, with the largest slip occurring in a place that may have been creeping before the event," says Lapusta. "All of this suggests that the underlying physical model, although based on lab measurements from a different fault, may be qualitatively valid for the area of the great Tohoku-Oki earthquake, giving us a glimpse into the mechanics and physics of that extraordinary event."

If creeping segments can participate in large earthquakes, it would mean that much larger events than seismologists currently anticipate in many areas of the world are possible. That means, Lapusta says, that the seismic hazard in those areas may need to be reevaluated.

For example, a creeping segment separates the southern and northern parts of California's San Andreas Fault. Seismic hazard assessments assume that this segment would stop an earthquake from propagating from one region to the other, limiting the scope of a San Andreas quake. However, the team's findings imply that a much larger event may be possible than is now anticipated—one that might involve both the Los Angeles and San Francisco metropolitan areas.

"Lapusta and Noda's realistic earthquake fault models are critical to our understanding of earthquakes—knowledge that is essential to reducing the potential catastrophic consequences of seismic hazards," says Ares Rosakis, chair of Caltech's division of engineering and applied science. "This work beautifully illustrates the way that fundamental, interdisciplinary research in the mechanics of seismology at Caltech is having a positive impact on society."

Now that they've been proven to qualitatively reproduce the behavior of the Tohoku-Oki quake, the models may be useful for exploring future earthquake scenarios in a given region, "including extreme events," says Lapusta. Such realistic fault models, she adds, may also be used to study how earthquakes may be affected by additional factors such as man-made disturbances resulting from geothermal energy harvesting and CO2 sequestration. "We plan to further develop the modeling to incorporate realistic fault geometries of specific well-instrumented regions, like Southern California and Japan, to better understand their seismic hazard."

"Creeping fault segments can turn from stable to destructive due to dynamic weakening" appears in the January 9 issue of the journal Nature. Funding for this research was provided by the National Science Foundation; the Southern California Earthquake Center; the Gordon and Betty Moore Foundation; and the Ministry of Education, Culture, Sports, Science and Technology in Japan.

Writer: 
Katie Neith

Planets Abound

Caltech-led astronomers estimate that at least 100 billion planets populate the galaxy

PASADENA, Calif.—Look up at the night sky and you'll see stars, sure. But you're also seeing planets—billions and billions of them. At least.

That's the conclusion of a new study by astronomers at the California Institute of Technology (Caltech) that provides yet more evidence that planetary systems are the cosmic norm. The team made their estimate while analyzing planets orbiting a star called Kepler-32—planets that are representative, they say, of the vast majority in the galaxy and thus serve as a perfect case study for understanding how most planets form.

"There's at least 100 billion planets in the galaxy—just our galaxy," says John Johnson, assistant professor of planetary astronomy at Caltech and coauthor of the study, which was recently accepted for publication in the Astrophysical Journal. "That's mind-boggling."

"It's a staggering number, if you think about it," adds Jonathan Swift, a postdoc at Caltech and lead author of the paper. "Basically there's one of these planets per star."

The planetary system in question, which was detected by NASA's Kepler space telescope, contains five planets. The existence of two of those planets had already been confirmed by other astronomers. The Caltech team confirmed the remaining three, then analyzed the five-planet system and compared it to other systems found by the Kepler mission.

The planets orbit a star that is an M dwarf—a type that accounts for about three-quarters of all stars in the Milky Way. The five planets, which are similar in size to Earth and orbit close to their star, are also typical of the class of planets that the telescope has discovered orbiting other M dwarfs, Swift says. Therefore, the majority of planets in the galaxy probably have characteristics comparable to those of the five planets.

While this particular system may not be unique, what does set it apart is its coincidental orientation: the orbits of the planets lie in a plane that's positioned such that Kepler views the system edge-on. Due to this rare orientation, each planet blocks Kepler-32's starlight as it passes between the star and the Kepler telescope.

By analyzing changes in the star's brightness, the astronomers were able to determine the planets' characteristics, such as their sizes and orbital periods. This orientation therefore provides an opportunity to study the system in great detail—and because the planets represent the vast majority of planets that are thought to populate the galaxy, the team says, the system also can help astronomers better understand planet formation in general.

"I usually try not to call things 'Rosetta stones,' but this is as close to a Rosetta stone as anything I've seen," Johnson says. "It's like unlocking a language that we're trying to understand—the language of planet formation."

One of the fundamental questions regarding the origin of planets is how many of them there are. Like the Caltech group, other teams of astronomers have estimated that there is roughly one planet per star, but this is the first time researchers have made such an estimate by studying M-dwarf systems, the most numerous population of planets known.

To do that calculation, the Caltech team determined the probability that an M-dwarf system would provide Kepler-32's edge-on orientation. Combining that probability with the number of planetary systems Kepler is able to detect, the astronomers calculated that there is, on average, one planet for every one of the approximately 100 billion stars in the galaxy. But their analysis only considers planets that are in close orbits around M dwarfs—not the outer planets of an M-dwarf system, or those orbiting other kinds of stars. As a result, they say, their estimate is conservative. In fact, says Swift, a more accurate estimate that includes data from other analyses could lead to an average of two planets per star.
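The geometric core of that calculation can be sketched simply: a randomly oriented planet transits its star with probability of roughly the stellar radius divided by the orbital distance, so each detected transiting system implies many more that are not seen edge-on. The numbers below (a star half the sun's radius, a planet at 0.05 AU) are illustrative assumptions, not the paper's actual inputs.

```python
# Minimal sketch of a transit-probability occurrence correction.
# All specific values are illustrative assumptions.

R_SUN_AU = 0.00465  # solar radius expressed in astronomical units

def transit_probability(r_star_au, a_au):
    """Chance that a randomly oriented planet at semi-major axis a_au
    passes in front of a star of radius r_star_au as seen from Earth."""
    return r_star_au / a_au

r_star = 0.5 * R_SUN_AU          # Kepler-32 is about half the sun's radius
p = transit_probability(r_star, 0.05)  # assumed close-in orbit

# Each detected edge-on system stands in for ~1/p undetected ones.
detected = 1
implied = detected / p
print(f"transit probability: {p:.4f}")   # roughly a 5% chance here
print(f"systems implied per detection: {implied:.1f}")
```

Scaling corrections like this across the full Kepler detection statistics is what turns a handful of observed transiting M-dwarf systems into a galaxy-wide occurrence estimate.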

M-dwarf systems like Kepler-32's are quite different from our own solar system. For one, M dwarfs are cooler and much smaller than the sun. Kepler-32, for example, has half the mass of the sun and half its radius. The radii of its five planets range from 0.8 to 2.7 times that of Earth, and those planets orbit extremely close to their star. The whole system fits within just over a tenth of an astronomical unit (the average distance between Earth and the sun)—a distance that is about a third of the radius of Mercury's orbit around the sun. The fact that M-dwarf systems vastly outnumber other kinds of systems carries a profound implication, according to Johnson, which is that our solar system is extremely rare. "It's just a weirdo," he says.

The fact that the planets in M-dwarf systems are so close to their stars doesn't necessarily mean that they're fiery, hellish worlds unsuitable for life, the astronomers say. Indeed, because M dwarfs are small and cool, their temperate zone—also known as the "habitable zone," the region where liquid water might exist—also lies further inward. Even though only the outermost of Kepler-32's five planets lies in its temperate zone, many other M-dwarf systems have more planets that sit right in their temperate zones.

As for how the Kepler-32 system formed, no one knows yet. But the team says its analysis places constraints on possible mechanisms. For example, the results suggest that the planets all formed farther away from the star than they are now, and migrated inward over time.

Like all planets, the ones around Kepler-32 formed from a proto-planetary disk—a disk of dust and gas that clumped up into planets around the star. The astronomers estimated that the mass of the disk within the region of the five planets was about as much as that of three Jupiters. But other studies of proto-planetary disks have shown that three Jupiter masses can't be squeezed into such a tiny area so close to a star, suggesting to the Caltech team that the planets around Kepler-32 initially formed farther out.

Another line of evidence relates to the fact that M dwarfs shine brighter and hotter when they are young, when planets would be forming. Kepler-32 would have been too hot for dust—a key planet-building ingredient—to even exist in such close proximity to the star. Previously, other astronomers had determined that the third and fourth planets from the star are not very dense, meaning that they are likely made of volatile compounds such as carbon dioxide, methane, or other ices and gases, the Caltech team says. However, those volatile compounds could not have existed in the hotter zones close to the star.

Finally, the Caltech astronomers discovered that three of the planets have orbits that are related to one another in a very specific way. One planet's orbital period is twice as long as another's, and a third planet's period is three times as long as that second planet's. Planets don't fall into this kind of arrangement immediately upon forming, Johnson says. Instead, the planets must have started their orbits farther away from the star before moving inward over time and settling into their current configuration.

"You look in detail at the architecture of this very special planetary system, and you're forced into saying these planets formed farther out and moved in," Johnson explains.
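Near-integer period ratios like these are straightforward to check numerically. This sketch uses hypothetical periods in the 1 : 2 : 6 pattern the article describes; it is an illustration of the ratio test, not the paper's analysis.

```python
from fractions import Fraction

def near_resonance(p_inner, p_outer, max_den=5, tol=0.02):
    """If p_outer/p_inner is within tol of a small-integer ratio,
    return that ratio as (numerator, denominator); otherwise None."""
    ratio = p_outer / p_inner
    frac = Fraction(ratio).limit_denominator(max_den)
    if abs(float(frac) - ratio) / ratio < tol:
        return (frac.numerator, frac.denominator)
    return None

# Hypothetical orbital periods (days) in a 1 : 2 : 6 pattern.
periods = [2.0, 4.0, 12.0]
print(near_resonance(periods[0], periods[1]))  # (2, 1): twice as long
print(near_resonance(periods[1], periods[2]))  # (3, 1): three times longer again
```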

The implications of a galaxy chock full of planets are far-reaching, the researchers say. "It's really fundamental from an origins standpoint," says Swift, who notes that because M dwarfs shine mainly in infrared light, the stars are invisible to the naked eye. "Kepler has enabled us to look up at the sky and know that there are more planets out there than stars we can see."

In addition to Swift and Johnson, the other authors on the Astrophysical Journal paper are Caltech graduate students Timothy Morton and Benjamin Montet; Caltech postdoc Philip Muirhead; former Caltech postdoc Justin Crepp of the University of Notre Dame; and Caltech alumnus Daniel Fabrycky (BS '03) of the University of Chicago. The title of the paper is, "Characterizing the Cool KOIs IV: Kepler-32 as a prototype for the formation of compact planetary systems throughout the galaxy." In addition to using Kepler, the astronomers made observations at the W. M. Keck Observatory and with the Robo-AO system at Palomar Observatory. Support for all of the telescopes was provided by the W. M. Keck Foundation, NASA, Caltech, the Inter-University Centre for Astronomy and Astrophysics, the National Science Foundation, the Mt. Cuba Astronomical Foundation, and Samuel Oschin.

Writer: 
Marcus Woo

Unlocking New Talents in Nature

Caltech protein engineers create new biocatalysts

PASADENA, Calif.—Protein engineers at the California Institute of Technology (Caltech) have tapped into a hidden talent of one of nature's most versatile catalysts. The enzyme cytochrome P450 is nature's premier oxidation catalyst—a protein that typically promotes reactions that add oxygen atoms to other chemicals. Now the Caltech researchers have engineered new versions of the enzyme, unlocking its ability to drive a completely different and synthetically useful reaction that does not take place in nature. 

The new biocatalysts can be used to make natural products—such as hormones, pheromones, and insecticides—as well as pharmaceutical drugs, like antibiotics, in a "greener" way.

"Using the power of protein engineering and evolution, we can convince enzymes to take what they do poorly and do it really well," says Frances Arnold, the Dick and Barbara Dickinson Professor of Chemical Engineering, Bioengineering and Biochemistry at Caltech and principal investigator on a paper about the enzymes that appears online in Science. "Here, we've asked a natural enzyme to catalyze a reaction that had been devised by chemists but that nature could never do."

Arnold's lab has been working for years with a bacterial cytochrome P450. In nature, enzymes in this family insert oxygen into a variety of molecules that contain either a carbon-carbon double bond or a carbon-hydrogen single bond. Most of these insertions require the formation of a highly reactive intermediate called an oxene.

Arnold and her colleagues Pedro Coelho and Eric Brustad noted that this reaction has a lot in common with another reaction that synthetic chemists came up with to create products that incorporate a cyclopropane—a chemical group containing three carbon atoms arranged in a triangle. Cyclopropanes are a necessary part of many natural-product intermediates and pharmaceuticals, but nature forms them through a complicated series of steps that no chemist would want to replicate.

"Nature has a limited chemical repertoire," Brustad says. "But as chemists, we can create conditions and use reagents and substrates that are not available to the biological world."

The cyclopropanation reaction that the synthetic chemists came up with inserts carbon using intermediates called carbenes, which have an electronic structure similar to oxenes. This reaction provides a direct route to the formation of diverse cyclopropane-containing products that would not be accessible by natural pathways. However, even this reaction is not a perfect solution because some of the solvents needed to run the reaction are toxic, and it is typically driven by catalysts based on expensive transition metals, such as copper and rhodium. Furthermore, tweaking these catalysts to predictably make specific products remains a significant challenge—one the researchers hoped nature could overcome with evolution's help.

Given the similarities between the two reaction systems—cytochrome P450's natural oxidation reactions and the synthetic chemists' cyclopropanation reaction—Arnold and her colleagues argued that it might be possible to convince the bacterial cytochrome P450 to create cyclopropane-bearing compounds through this more direct route. Their experiments showed that the natural enzyme (cytochrome P450) could in fact catalyze the reaction, but only very poorly; it generated a low yield of products, didn't make the specific mix of products desired, and catalyzed the reaction only a few times. In comparison, transition-metal catalysts can be used hundreds of times.

That's where protein engineering came in. Over the years, Arnold's lab has created thousands of cytochrome P450 variants by mutating the enzyme's natural sequence of amino acids, using a process called directed evolution. The researchers tested variants from their collections to see how well they catalyzed the cyclopropane-forming reaction. A handful ended up being hits, driving the reaction hundreds of times. 

Being able to catalyze a reaction is a crucial first step, but for a chemical process to be truly useful it has to generate high yields of specific products. Many chemical compounds exist in more than one form, so although the chemical formulas of various products may be identical, they might, for example, be mirror images of each other or have slightly different bonding structures, leading to dissimilar behavior. Therefore, being able to control what forms are produced and in what ratio—a quality called selectivity—is especially important.

Controlling selectivity is difficult. It is something that chemists struggle to do, while nature excels at it. That was another reason Arnold and her team wanted to investigate cytochrome P450's ability to catalyze the reaction.

"We should be able to marry the impressive repertoire of catalysts that chemists have invented with the power of nature to do highly selective chemistry under green conditions," Arnold says.

So the researchers further "evolved" enzyme variants that had worked well in the cyclopropanation reaction, to come up with a spectrum of new enzymes. And those enzymes worked—they were able to drive the reaction many times and produced many of the selectivities a chemist could desire for various substrates.  

Coelho says this work highlights the utility of synthetic chemistry in expanding nature's catalytic potential. "This field is still in its infancy," he says. "There are many more reactions out there waiting to be installed in the biological world."

The paper, "Olefin cyclopropanation via carbene insertion catalyzed by engineered cytochrome P450 enzymes," was also coauthored by Arvind Kannan, now a Churchill Scholar at Cambridge University. Brustad is now an assistant professor at the University of North Carolina at Chapel Hill. The work was supported by a grant from the U.S. Department of Energy and startup funds from UNC Chapel Hill.

Writer: 
Kimm Fesenmaier
