Visualizing Biological Networks in 4D

A unique microscope invented at Caltech captures the motion of DNA structures in space and time

PASADENA, Calif.—Every great structure, from the Empire State Building to the Golden Gate Bridge, depends on specific mechanical properties to remain strong and reliable. Rigidity—a material's stiffness—is of particular importance for maintaining the robust functionality of everything from colossal edifices to the tiniest of nanoscale structures. In biological nanostructures, like DNA networks, it has been difficult to measure this stiffness, which is essential to their properties and functions. But scientists at the California Institute of Technology (Caltech) have recently developed techniques for visualizing the behavior of biological nanostructures in both space and time, allowing them to directly measure stiffness and map its variation throughout the network.

The new method is outlined in the February 4 early edition of the Proceedings of the National Academy of Sciences (PNAS).

"This type of visualization is taking us into domains of the biological sciences that we did not explore before," says Nobel Laureate Ahmed Zewail, the Linus Pauling Professor of Chemistry and professor of physics at Caltech, who coauthored the paper with Ulrich Lorenz, a postdoctoral scholar in Zewail's lab. "We are providing the methodology to find out—directly—the stiffness of a biological network that has nanoscale properties."

Knowing the mechanical properties of DNA structures is crucial to building sturdy biological networks, among other applications. According to Zewail, this type of visualization of biomechanics in space and time should be applicable to the study of other biological nanomaterials, including the abnormal protein assemblies that underlie diseases like Alzheimer's and Parkinson's.

Zewail and Lorenz were able to see, for the first time, the motion of DNA nanostructures in both space and time using the four-dimensional (4D) electron microscope developed at Caltech's Physical Biology Center for Ultrafast Science and Technology. The center is directed by Zewail, who created it in 2005 to advance understanding of the fundamental physics of chemical and biological behavior.

"In nature, the behavior of matter is determined by its structure—the arrangements of its atoms in the three dimensions of space—and by how the structure changes with time, the fourth dimension," explains Zewail. "If you watch a horse gallop in slow motion, you can follow the time of the gallops, and you can see in detail what, for example, each leg is doing over time. When we get to the nanometer scale, that is a different story—we need a spatial resolution roughly a billion times finer than what it takes to watch the horse in order to visualize what is happening."

Zewail was awarded the 1999 Nobel Prize in Chemistry for his development of femtochemistry, which uses ultrashort laser flashes to observe fundamental chemical reactions occurring at the timescale of the femtosecond (one millionth of a billionth of a second). Although femtochemistry can capture atoms and molecules in motion, giving the time dimension, it cannot concurrently show the dimensions of space, and thus the structure of the material. This is because it utilizes laser light with wavelengths that far exceed the dimensions of a nanostructure, making it impossible to resolve and image nanoscale details in tiny physical structures such as DNA.

To overcome this major hurdle, the 4D electron microscope employs a stream of individual electrons that scatter off objects to produce an image. The electrons are accelerated to wavelengths of picometers (trillionths of a meter), roughly a thousand times smaller than a typical nanostructure, providing the spatial resolution needed to visualize such structures, with a time resolution of femtoseconds or longer.
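As a back-of-the-envelope check on those picometer wavelengths, the relativistic de Broglie relation gives an electron's wavelength at a given accelerating voltage. The 200 kV figure below is an assumed, typical value for a transmission electron microscope, not a number taken from the paper:

```python
import math

# Relativistic de Broglie wavelength of an electron accelerated through V volts:
#   lambda = h / sqrt(2*m*e*V * (1 + e*V / (2*m*c^2)))
h = 6.62607015e-34    # Planck constant, J*s
m = 9.1093837015e-31  # electron rest mass, kg
e = 1.602176634e-19   # elementary charge, C
c = 2.99792458e8      # speed of light, m/s

V = 200e3  # assumed accelerating voltage (200 kV, typical for a TEM)
lam = h / math.sqrt(2 * m * e * V * (1 + e * V / (2 * m * c**2)))
print(lam)  # ~2.5e-12 m, i.e. about 2.5 picometers
```

Even at this modest voltage the wavelength comes out hundreds of times smaller than a nanometer, which is why electrons, unlike visible laser light, can resolve nanoscale structure.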

The experiments reported in PNAS began with a structure created by stretching DNA over a hole in a thin carbon film. Using the electrons in the microscope, the researchers cut several DNA filaments away from the carbon film, leaving a free-standing, three-dimensional structure under the 4D microscope.

Next, the scientists employed laser heat to excite oscillations in the DNA structure, which were imaged using the electron pulses as a function of time—the fourth dimension. By observing the frequency and amplitude of these oscillations, a direct measure of stiffness was made.
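In the simplest harmonic-oscillator picture, a measured oscillation frequency converts directly into an effective stiffness via k = m(2πf)². The effective mass and frequency below are hypothetical round numbers for illustration, not values from the study:

```python
import math

# Simple harmonic oscillator: f = (1 / (2*pi)) * sqrt(k / m)  =>  k = m * (2*pi*f)^2
m_eff = 1e-18  # hypothetical effective mass of the vibrating filament, kg
f = 1.0e7      # hypothetical measured oscillation frequency, Hz (10 MHz)

k = m_eff * (2 * math.pi * f) ** 2  # effective stiffness, N/m
print(k)  # ~4e-3 N/m for these made-up numbers
```

The real analysis must account for the filament's geometry and mode shape, but the principle is the same: stiffer regions of the network ring at higher frequencies.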

"It was surprising that we could do this with a complex network," says Zewail. "And yet by cutting and probing, we could go into a selective area of the network and find out about its behavior and properties."

Using 4D electron microscopy, Zewail's group has begun to visualize protein assemblies called amyloids, which are believed to play a role in many neurodegenerative diseases, and they are continuing their investigation of the biomechanical properties of these networks. He says that this technique has the potential for broad applications not only to biological assemblies, but also in the materials science of nanostructures.

Funding for the research outlined in the PNAS paper, "Biomechanics of DNA structures visualized by 4D electron microscopy," was provided by the National Science Foundation and the Air Force Office of Scientific Research. The Physical Biology Center for Ultrafast Science and Technology at Caltech is supported by the Gordon and Betty Moore Foundation.

Writer:
Katie Neith

Creating New Quantum Building Blocks

Caltech researcher says diamond defects could serve as nodes of a quantum network

PASADENA, Calif.—Scientists have long dreamed of creating a quantum computer—a device rooted in the bizarre phenomena that transpire at the level of the very small, where quantum mechanics rules the scene. Such computers, it is believed, could solve in seconds certain problems that are effectively intractable for conventional machines.

Researchers have tried using various quantum systems, such as atoms or ions, as the basic, transistor-like units in simple quantum computation devices. Now, laying the groundwork for an on-chip optical quantum network, a team of researchers, including Andrei Faraon from the California Institute of Technology (Caltech), has shown that defects in diamond can be used as quantum building blocks that interact with one another via photons, the basic units of light.

The device is simple enough—it involves a tiny ring resonator and a tunnel-like optical waveguide, which both funnel light. Both structures, each only a few hundred nanometers wide, are etched in a diamond membrane and positioned close together atop a glass substrate. Within the resonator lies a nitrogen-vacancy center (NV center)—a defect in the structure of diamond in which a nitrogen atom replaces a carbon atom, and in which a nearby spot usually reserved for another carbon atom is simply empty. Such NV centers are photoluminescent, meaning they absorb and emit photons.

"These NV centers are like the building blocks of the network, and we need to make them interact—like having an electrical current connecting one transistor to another," explains Faraon, lead author on a paper describing the work in the New Journal of Physics. "In this case, photons do that job."

In recent years, diamond has become a heavily researched material for use in quantum photonic devices in part because the diamond lattice is able to protect impurities from excessive interactions. The so-called quietness it affords enables impurities—such as NV centers—to store information unaltered for relatively long periods of time.  

To begin their experiment, the researchers first cool the device below 10 Kelvin (−441.67 degrees Fahrenheit) and then shine green laser light on the NV center, causing it to reach an excited state and then emit red light. As the red light circles within the resonator, it constructively interferes with itself, increasing its intensity. Slowly, the light then leaks into the nearby waveguide, which channels the photons out through gratings at either end, scattering the light out of the plane of the chip.
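The constructive interference described here is the standard ring-resonator condition: a whole number of wavelengths must fit around the loop, m·λ = n_eff·L. The circumference and effective index below are illustrative guesses rather than the device's actual parameters; the 637 nm target is the NV center's characteristic red zero-phonon emission line:

```python
# Ring resonance condition: m * wavelength = n_eff * L, with m a whole number.
n_eff = 2.0      # assumed effective refractive index of the diamond waveguide mode
L = 10e-6        # assumed ring circumference, m (10 micrometers)
target = 637e-9  # NV-center zero-phonon emission wavelength, m

m = round(n_eff * L / target)  # nearest whole number of round-trip wavelengths
resonance = n_eff * L / m      # resonant wavelength closest to the NV emission
print(m, resonance)            # -> 31 round trips, resonance near 645 nm
```

Light at a resonant wavelength builds up intensity with each round trip; tuning the ring so a resonance overlaps the NV emission is what lets the resonator enhance the red light before it leaks into the waveguide.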

The emitted photons have the property of being correlated, or entangled, with the NV center from which they came. This mysterious quality of entanglement, which makes two quantum states inextricably linked in such a way that any information you learn about one provides information about the other, is a necessary ingredient for quantum computation. It enables a large amount of information to be stored and processed by fewer components that take up a small amount of space.

"Right now we only have one nitrogen-vacancy center that's emitting photons, but in the future we envision creating multiple NV centers that emit photons on the same chip," Faraon says. "By measuring these photons we could create entanglement among multiple NV centers on the chip."

And that's important because, in order to make a quantum computer, you would need millions—maybe billions—of these units. "As you can see, we're just working at making one or a few," Faraon says. "But there are other applications down the line that are easier to achieve." For example, a quantum network with a couple hundred units could simulate the behavior of a complex molecule—a task that conventional computers struggle with.

Going forward, Faraon plans to investigate whether other materials can behave similarly to diamond in an optical quantum network.

In addition to Faraon, the authors on the paper, "Quantum photonic devices in single-crystal diamond," are Charles Santori, Zhihong Huang, Kai-Mei Fu, Victor Acosta, David Fattal, and Raymond Beausoleil of Hewlett-Packard Laboratories, in Palo Alto, California. Fu is now an assistant professor at the University of Washington in Seattle, Washington. The work was supported by the Defense Advanced Research Projects Agency and The Regents of the University of California.  

Writer:
Kimm Fesenmaier

Caltech Senior Wins Churchill Scholarship

Caltech senior Andrew Meng has been selected to receive a Churchill Scholarship, which will fund his graduate studies at the University of Cambridge for the next academic year. Meng, a chemistry and physics major, was one of only 14 students nationwide who were chosen to receive the fellowship this year.

Taking full advantage of Caltech's strong tradition of undergraduate research, Meng has worked since his freshman year in the lab of Nate Lewis, the George L. Argyros Professor and professor of chemistry. Over the course of three Summer Undergraduate Research Fellowships (SURFs) and several terms in the lab, Meng has investigated various applications of silicon microwire solar cells. Lewis's group has shown that arrays of these ultrathin wires hold promise as a cost-effective way to construct solar cells that can convert light into electricity with relatively high efficiencies.

Meng, who grew up in Baton Rouge, Louisiana, first studied some of the fundamental limitations of silicon microwires in fuel-forming reactions. In these applications, it is believed that the microwires can harness energy from the sun to drive chemical reactions such as the production of hydrogen and oxygen from splitting water. Meng's work showed that the geometry of the microwires would not limit the fuel-forming reaction as some had expected.

More recently, Meng has turned his attention to using silicon microwires to generate electricity. He is developing an inexpensive electrical contact to silicon microwire chips, using a method that facilitates scale-up and can be applied to flexible solar cells.

"Andrew is one of the best undergraduates that I have had the pleasure of working with in over a decade," says Lewis. "He excels in academics, in leadership, and in research. I believe he is truly worthy of the distinction of receiving a Churchill Fellowship."

As he pursues a Master of Philosophy degree in chemistry at the University of Cambridge over the next year, Meng will work in the group of theoretical chemist Michiel Sprik. He plans to apply computational methods to his studies of fuel-forming reactions using solar-energy materials.

"I'm very grateful for this opportunity to learn a computational perspective, since up until now I've been doing experimental work," Meng says. "I'm very excited, and most importantly, I'd like to thank Caltech and all of my mentors and co-mentors, without whom I would not be in this position today."

According to the Winston Churchill Foundation's website, the Churchill Scholarship program "offers American citizens of exceptional ability and outstanding achievement the opportunity to pursue graduate studies in engineering, mathematics, or the sciences at Cambridge. One of the newer colleges at the University of Cambridge, Churchill College was built as the national and Commonwealth tribute to Sir Winston, who in the years after the Second World War presciently recognized the growing importance of science and technology for prosperity and security. Churchill College focuses on the sciences, engineering, and mathematics." The first Churchill Scholarships were awarded in 1963, and this year's recipients bring the total to 479 Churchill Scholars.

Each year, a select group of universities, including Caltech, is eligible to nominate students for consideration for the scholarship. Meng is the seventh Caltech student to have won the award since the year 2000. A group of Caltech faculty members and researchers work with Lauren Stolper, director of fellowships advising, to identify and nominate candidates. This year, the members of the group were Churchill Scholar alumni John Brady, the Chevron Professor of Chemical Engineering and professor of mechanical engineering; Mitchio Okumura, professor of chemical physics; Alan Cummings, senior research scientist; and Eric Rains, professor of mathematics.

Writer:
Kimm Fesenmaier

Sorting Out Stroking Sensations

Caltech biologists find individual neurons in the skin that react to massage

PASADENA, Calif.—The skin is a human being's largest sensory organ, helping to distinguish between a pleasant contact, like a caress, and a negative sensation, like a pinch or a burn. Previous studies have shown that these sensations are carried to the brain by different types of sensory neurons that have nerve endings in the skin. Only a few of those neuron types have been identified, however, and most of those detect painful stimuli. Now biologists at the California Institute of Technology (Caltech) have identified in mice a specific class of skin sensory neurons that reacts to an apparently pleasurable stimulus.

More specifically, the team, led by David J. Anderson, Seymour Benzer Professor of Biology at Caltech, was able to pinpoint individual neurons that were activated by massage-like stroking of the skin. The team's results are outlined in the January 31 issue of the journal Nature.

"We've known a lot about the neurons that detect things that make us hurt or feel pain, but we've known much less about the identity of the neurons that make us feel good when they are stimulated," says Anderson, who is also an investigator with the Howard Hughes Medical Institute. "Generally it's a lot easier to study things that are painful because animals have evolved to become much more sensitive to things that hurt or are fearful than to things that feel good. Showing a positive influence of something on an animal model is not that easy."

In fact, the researchers had to develop new methods and technologies to get their results. First, Sophia Vrontou, a postdoctoral fellow in Anderson's lab and the lead author of the study, developed a line of genetically modified mice that had tags, or molecular markers, on the neurons that the team wanted to study. Then she placed a molecule in this specific population of neurons that fluoresced, or lit up, when the neurons were activated.

"The next step was to figure out a way of recording those flashes of light in those neurons in an intact mouse while stroking and poking its body," says Anderson. "We took advantage of the fact that these sensory neurons are bipolar in the sense that they send one branch into the skin that detects stimuli, and another branch into the spinal cord to relay the message detected in the skin to the brain."

The team obtained the needed data by placing the mouse under a special microscope with very high magnification and recording the level of fluorescent light in the fibers of neurons in the spinal cord as the animal was stroked, poked, tickled, and pinched. Through a painstaking process of applying stimuli to one tiny area of the animal's body at a time, they were able to confirm that certain neurons lit up only when stroked. A different class of neurons, by contrast, was activated by poking or pinching the skin, but not by stroking.

"Massage-like stroking is a stimulus that, if we were to experience it, would feel good to us, but as scientists we can't just assume that because something feels good to us, it has to also feel good to an animal," says Anderson. "So we then had to design an experiment to show that artificially activating just these neurons—without actually stroking the mouse—felt good to the mouse."

The researchers did this by creating a box that contained left, right, and center rooms connected by little doors. The left and right rooms were different enough that a mouse could distinguish them through smell, sight, and touch. In the left room, the mouse received an injection of a drug that selectively activated the neurons shown to detect massage-like stroking. In the room on the right, the mouse received a control injection of saline. After a few sessions in each outer room, the animal was placed in the center, with the doors open to see which room it preferred. It clearly favored the room where the massage-sensitive neurons were activated. According to Anderson, this was the first time anyone has used this type of conditioned place-preference experiment to show that activating a specific population of neurons in the skin can actually make an animal experience a pleasurable or rewarding state—in effect, to "feel good."

The team's findings are significant for several reasons, he says. First, the methods that they developed give scientists who have discovered a new kind of neuron a way to find out what activates that neuron in the skin.

"Since there are probably dozens of different kinds of neurons that innervate the skin, we hope this will advance the field by making it possible to figure out all of the different kinds of neurons that detect various types of stimuli," explains Anderson. The second reason the results are important, he says, "is that now that we know these neurons detect massage-like stimuli, the results raise new sets of questions about which molecules in those neurons help the animal detect stroking but not poking."

The other benefit of their new methods, Anderson says, is that they will allow researchers to, in principle, trace the circuitry from those neurons up into the brain to ask why and how activating these neurons makes the animal feel good, whereas activating other neurons that are literally right next to them in the skin makes the animal feel bad.

"We are now most interested in how these neurons communicate to the brain through circuits," says Anderson. "In other words, what part of the circuit in the brain is responsible for the good feeling that is apparently produced by activating these neurons? It may seem frivolous to be identifying massage neurons in a mouse, but it could be that some good might come out of this down the road."

Allan M. Wong, a senior research fellow in biology at Caltech, and Kristofer K. Rau and Richard Koerber from the University of Pittsburgh were also coauthors on the Nature paper, "Genetic identification of C fibers that detect massage-like stroking of hairy skin in vivo." Funding for this research was provided by the National Institutes of Health, the Human Frontiers Science Program, and the Helen Hay Whitney Foundation.

Writer:
Katie Neith

TEDxCaltech: Advancing Humanoid Robots

This week we will be highlighting the student speakers who auditioned and were selected to give five-minute talks about their brain-related research at TEDxCaltech: The Brain, a special event that will take place on Friday, January 18, in Beckman Auditorium. 

In the spirit of ideas worth spreading, TED has created a program of local, self-organized events called TEDx. Speakers are asked to give the talk of their lives. Live video coverage of the TEDxCaltech experience will be available during the event at http://tedxcaltech.caltech.edu.

When Matanya Horowitz started his undergraduate work in 2006 at the University of Colorado at Boulder, he knew that he wanted to work in robotics—mostly because he was disappointed that technology had not yet made good on his sci-fi–inspired dreams of humanoid robots.

"The best thing we had at the time was the Roomba, which is a great product, but compared to science fiction it seemed really diminutive," says Horowitz. He therefore decided to major in not just electrical engineering, but also economics, applied math, and computer science. "I thought that the answer to better robots would lie somewhere in the middle of these different subjects, and that maybe each one held a different key," he explains.

Now a doctoral student at Caltech—he earned his master's degree in the same four years as his multiple undergrad degrees—Horowitz is putting his range of academic experience to work in the labs of engineers Joel Burdick and John Doyle to help advance robotics and intelligent systems. As a member of the control and dynamical systems group, he is active in several Defense Advanced Research Projects Agency (DARPA) challenges that seek to develop better control mechanisms for robotic arms, as well as humanoid robots that can perform human-like tasks in dangerous situations, such as disabling bombs or entering nuclear power plants during an emergency.

But beneficial advances in robotics also bring challenges. Inspired as a kid by the robot tales of Isaac Asimov, Horowitz has long been interested in how society might be affected by robots.

"As I began programming just on my own, I saw how easy it was to create something that at least seemed to act with intelligence," he says. "It was interesting to me that we were so close to humanoid robots and that doing these things was so easy. But we also have all these implications we need to think about."

Horowitz's TEDx talk will explore some of the challenges of building and controlling something that needs to interact in the physical world. He says he's thrilled to have the opportunity to speak at TEDx, not just for the chance to talk to a general audience about his work, but also to hopefully inspire others by his enthusiasm for the field.

"Recently, there has been such a monumental shift from what robots were capable of even just five years ago, and people should be really excited about this," says Horowitz. "We've been hearing about robots for 30, 40 years—they've always been 'right around the corner.' But now we can finally point to one and say, 'Here it is, literally coming around a corner.'"

Writer:
Katie Neith

TEDxCaltech: If You Click a Cookie with a Mouse

When offered spinach or a cookie, how do you decide which to eat? Do you go for the healthy choice or the tasty one? To study the science of decision making, researchers in the lab of Caltech neuroeconomist Antonio Rangel analyze what happens inside people's brains as they choose between various kinds of food. The researchers typically use functional magnetic resonance imaging (fMRI) to measure the changes in oxygen flow through the brain; these changes serve as proxies for spikes or dips in brain activity. Recently, however, investigators have started using a new technique that may better tease out how you choose between the spinach or the cookie—a decision that's often made in a fraction of a second.

While fMRI is a powerful method, it can only measure changes in brain activity down to the scale of a second or so. "That's not fast enough because these decisions are made sometimes within half a second," says Caltech senior Joy Lu, who will be talking about her research in Rangel's lab at TEDxCaltech. Instead of using fMRI, Lu—along with postdoctoral scholar Cendri Hutcherson and graduate student Nikki Sullivan—turned to a standard computer mouse.

During the experiments—which are preliminary, as the researchers are still conducting and refining them—volunteers rate 250 kinds of food for healthiness and tastiness. The choices range from spinach and cookies to broccoli and chips. Then, the volunteers are given a choice between two of those items, represented by pictures on a computer screen. When they decide which option they want, they click with their mouse. But while they mull over their choices, the paths of their mouse cursor are being tracked—the idea being that the cursor paths may reveal how the volunteers arrive at their final decisions.

For example, if the subject initially feels obligated to be healthy, the cursor may hover over the spinach a moment before finally settling on the cookie. Or, if the person is immediately drawn to the sweet treat before realizing that health is a better choice, the cursor may hover over the cookie first.

Lu, Hutcherson, and Sullivan are using computer models to find cursor-path patterns or trends that may offer insight into the factors that influence such decisions. Do the paths differ between those who value health over taste and those who favor taste more?
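A minimal sketch of the kind of trajectory feature such models might extract, assuming cursor x-positions are normalized so the two food images sit at −1 (left) and +1 (right). The function name, threshold, and sample path here are all hypothetical, not the group's actual algorithm:

```python
def first_lean(xs, threshold=0.1):
    """Return 'left' or 'right' for the side the cursor first leans toward."""
    for x in xs:
        if x <= -threshold:
            return "left"
        if x >= threshold:
            return "right"
    return None  # cursor never left the neutral zone

# Hypothetical trajectory: hovers toward the left (healthy) option,
# then commits to the right (tasty) one.
traj = [0.0, -0.05, -0.15, -0.30, -0.10, 0.20, 0.60, 1.00]
print(first_lean(traj))  # -> left, even though the final click lands on the right
```

Comparing where the path first leans with where the click finally lands is one way to put a timestamp on the tug-of-war between health consciousness and craving.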

Although the researchers are still refining their computer algorithms and continuing their experiments, they have some preliminary results. They found that with many people, for example, the cursor first curves toward one choice before ending up at the other. The time it takes for someone's health consciousness to kick in seems to be longer than the time it takes for people to succumb to cravings for something delicious.

After graduation, Lu plans to go to graduate school in marketing, where she'll use not only neuroscience techniques but also field studies to investigate consumer behavior. She might even compare the two methods. "Using neuroscience in marketing is a very new thing," she says. "That's what draws me toward it. We can't answer all the questions we want to answer just using field studies. You have to look at what's going on in a person's mind."

Writer:
Marcus Woo

Research Update: Atomic Motions Help Determine Temperatures Inside Earth

In December 2011, Caltech mineral-physics expert Jennifer Jackson reported that she and a team of researchers had used diamond-anvil cells to compress tiny samples of iron—the main element of the earth's core. By squeezing the samples to reproduce the extreme pressures felt at the core, the team was able to get a closer estimate of the melting point of iron. At the time, the measurements that the researchers made were unprecedented in detail. Now, they have taken that research one step further by adding infrared laser beams to the mix.

The lasers serve as a heat source: sent through the compressed iron samples, they warm them to the point of melting. And because the earth's core consists of a solid inner region surrounded by a liquid outer shell, the melting temperature of iron at high pressure provides an important reference point for the temperature distribution within the earth's core.

"This is the first time that anyone has combined Mössbauer spectroscopy and heating lasers to detect melting in compressed samples," says Jackson, a professor of mineral physics at Caltech and lead author of a recent paper in the journal Earth and Planetary Science Letters that outlined the team's new method. "What we found is that iron, compared to previous studies, melts at higher temperatures than what has been reported in the past."

Earlier research by other teams done at similar compressions—around 80 gigapascals—reported a range of possible melting points that topped out around 2600 Kelvin (K). Jackson's latest study indicates an iron melting point at this pressure of approximately 3025 K, suggesting that the earth's core is likely warmer than previously thought.

Knowing more about the temperature, composition, and behavior of the earth's core is essential to understanding the dynamics of the earth's interior, including the processes responsible for maintaining the earth's magnetic field. While iron makes up roughly 90 percent of the core, the rest is thought to be nickel and light elements—like silicon, sulfur, or oxygen—that are alloyed, or mixed, with the iron.

To develop and perform these experiments, Jackson worked closely with the Inelastic X-ray and Nuclear Resonant Scattering Group at the Advanced Photon Source at Argonne National Laboratory in Illinois. By laser heating the iron sample in a diamond-anvil cell and monitoring the dynamics of the iron atoms via a technique called synchrotron Mössbauer spectroscopy (SMS), the researchers were able to pinpoint a melting temperature for iron at a given pressure. The SMS signal is sensitively related to the dynamical behavior of the atoms, and can therefore detect when a group of atoms is in a molten state.
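The logic of reading a melting point off the SMS signal can be sketched simply: the recoilless absorption that produces the delayed signal collapses when the lattice melts, so the melting temperature is where the count rate drops sharply. All numbers below are made up for illustration (chosen near the reported ~3025 K), not data from the experiment:

```python
# Hypothetical temperature scan: delayed SMS counts collapse at melting.
temps  = [2800, 2900, 2950, 3000, 3025, 3050, 3100]  # K (made-up values)
counts = [980, 960, 950, 930, 450, 60, 40]           # delayed counts (made-up)

# Flag melting at the first temperature where counts fall below half the baseline.
baseline = counts[0]
melt_T = next(t for t, c in zip(temps, counts) if c < 0.5 * baseline)
print(melt_T)  # -> 3025
```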

She and her team have begun experiments on iron alloys at even higher pressures, using their new approach.

"What we're working toward is a very tight constraint on the temperature of the earth's core," says Jackson. "A number of important geophysical quantities, such as the movement and expansion of materials at the base of the mantle, are dictated by the temperature of the earth's core."

"Our approach is a very elegant way to look at melting because it takes advantage of the physical principle of recoilless absorption of X-rays by nuclear resonances—the basis of the Mössbauer effect—for which Rudolf Mössbauer was awarded the Nobel Prize in Physics," says Jackson. "This particular approach to study melting has not been done at high pressures until now."

Jackson's findings not only tell us more about our own planet, but could indicate that other planets with iron-rich cores, like Mercury and Mars, may have warmer internal temperatures as well.

Her paper, "Melting of compressed iron by monitoring atomic dynamics," was published in Earth and Planetary Science Letters on January 8, 2013.

Writer:
Katie Neith

TEDxCaltech: Surmounting the Blood-Brain Barrier

The brain needs its surroundings to be just right. That is, unlike some internal organs, such as the liver, which can process just about anything that comes its way, the brain needs to be protected and to have a chemical environment with the right balance of proteins, sugars, salts, and other metabolites. 

That fact stood out to Caltech MD/PhD candidate and TEDxCaltech speaker Devin Wiley when he was studying medicine at the Keck School of Medicine of USC. "In certain cases, one bacterium detected in the brain can be a medical emergency," he says. "So the microenvironment needs to be highly protected and regulated for the brain to function correctly."

Fortunately, a semipermeable divide, known as the blood-brain barrier, is very good at maintaining such an environment for the brain. This barricade—made up of tightly packed blood-vessel cells—is effective at precisely controlling which molecules get into and out of the brain. Because the blood-brain barrier regulates the molecular traffic into the brain, it presents a significant challenge for anyone wanting to deliver therapeutics to the brain. 

At Caltech, Wiley has been working with his advisor, Mark Davis, the Warren and Katharine Schlinger Professor of Chemical Engineering, to develop a work-around—a way to sneak therapeutics past the barrier and into the brain to potentially treat neurologic diseases such as Alzheimer's and Parkinson's. The scientists' strategy is to deliver large-molecule therapeutics (which are being developed by the Davis lab as well as other research groups) tucked inside nanoparticles that have proteins attached to their surface. These proteins will bind specifically to receptors on the blood-brain barrier, allowing the nanoparticles and their therapeutic cargo to be shuttled across the barrier and released into the brain.

"In essence, this is like a Trojan horse," Wiley explains. "You're tricking the blood-brain barrier into transporting drugs to the brain that normally wouldn't get in."

During his five-minute TEDxCaltech talk on Friday, January 18, Wiley will describe this approach and his efforts to design nanoparticles that can transport and release therapeutics into the brain.

For Wiley, the issue of delivering therapeutics to the brain is more than a fascinating research problem. His grandmother recently passed away from Alzheimer's disease, and his wife's grandmother also suffers from the neurodegenerative disorder.

"This is something that affects a lot of people," Wiley says. "Treatments for cardiovascular diseases, cancer, and infectious diseases are really improving. However, better treatments for brain diseases are not being discovered as quickly. So what are the issues? I want to tell the story of one of them."

Writer: Kimm Fesenmaier

A Cloudy Mystery

A puzzling cloud near the galaxy's center may hold clues to how stars are born

PASADENA, Calif.—It's the mystery of the curiously dense cloud. And astronomers at the California Institute of Technology (Caltech) are on the case.

Near the crowded galactic center, where billowing clouds of gas and dust cloak a supermassive black hole three million times as massive as the sun—a black hole whose gravity is strong enough to grip stars that are whipping around it at thousands of kilometers per second—one particular cloud has baffled astronomers. Indeed, the cloud, dubbed G0.253+0.016, defies the rules of star formation.

In infrared images of the galactic center, the cloud—which is 30 light-years long—appears as a bean-shaped silhouette against a bright backdrop of dust and gas glowing in infrared light. The cloud's darkness means it is dense enough to block light.

According to conventional wisdom, clouds of gas that are this dense should clump up to create pockets of even denser material that collapse due to their own gravity and eventually form stars. One such gaseous region famed for its prodigious star formation is the Orion Nebula. And yet, although the galactic-center cloud is 25 times denser than Orion, only a few stars are being born there—and even then, they are small. In fact, the Caltech astronomers say, its star-formation rate is 45 times lower than what astronomers might expect from such a dense cloud.

"It's a very dense cloud and it doesn't form any massive stars—which is very weird," says Jens Kauffmann, a senior postdoctoral scholar at Caltech.

In a series of new observations, Kauffmann, along with Caltech postdoctoral scholar Thushara Pillai and Qizhou Zhang of the Harvard-Smithsonian Center for Astrophysics, has discovered why: the cloud not only lacks the necessary clumps of denser gas, but is also swirling so fast that it can't settle down to collapse into stars.

The results, which show that star formation may be more complex than previously thought and that the presence of dense gas does not automatically imply a region where such formation occurs, may help astronomers better understand the process.

The team presented their findings—which have been recently accepted for publication in the Astrophysical Journal Letters—at the 221st meeting of the American Astronomical Society in Long Beach, California.

To determine whether the cloud contained clumps of denser gas, called dense cores, the team used the Submillimeter Array (SMA), a collection of eight radio telescopes on top of Mauna Kea in Hawaii. In one possible scenario, the cloud does contain these dense cores, which are roughly 10 times denser than the rest of the cloud, but strong magnetic fields or turbulence in the cloud disturbs them, thus preventing them from turning into full-fledged stars.

However, by observing the dust mixed into the cloud's gas and measuring N2H+—an ion that can only exist in regions of high density and is therefore a marker of very dense gas—the astronomers found hardly any dense cores. "That was very surprising," Pillai says. "We expected to see a lot more dense gas."

Next, the astronomers wanted to see if the cloud is being held together by its own gravity—or if it is swirling so fast that it is on the verge of flying apart. If it is churning too fast, it can't form stars. Using the Combined Array for Research in Millimeter-wave Astronomy (CARMA)—a collection of 23 radio telescopes in eastern California run by a consortium of institutions, of which Caltech is a member—the astronomers measured the velocities of the gas in the cloud and found that the gas is moving up to 10 times faster than gas in similar clouds. This particular cloud, they found, is barely held together by its own gravity; in fact, it may soon fly apart.
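The bound-or-unbound question the astronomers are asking is commonly quantified with the virial parameter, alpha = 5 sigma^2 R / (G M): values near one mean self-gravity balances the internal motions, while values much greater than one mean the cloud is moving too fast to hold together. The sketch below uses that standard diagnostic with hypothetical numbers, not the team's measured values.

```python
import math

# Hypothetical sketch of the bound-or-unbound test described in the text,
# using the standard virial parameter:
#     alpha = 5 * sigma^2 * R / (G * M)
# alpha ~ 1: self-gravity balances internal motions (bound);
# alpha >> 1: gas moves too fast to be bound -- the cloud can fly apart.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
PC = 3.086e16        # parsec, m

def virial_parameter(sigma_km_s, radius_pc, mass_msun):
    """Virial parameter for a cloud of given velocity dispersion, size, mass."""
    sigma = sigma_km_s * 1e3                      # km/s -> m/s
    return 5 * sigma**2 * (radius_pc * PC) / (G * mass_msun * M_SUN)

# Illustrative (not measured) values: the same cloud, once with modest
# internal motions and once with gas moving 10 times faster.
slow = virial_parameter(sigma_km_s=7.5, radius_pc=3.0, mass_msun=2e5)
fast = virial_parameter(sigma_km_s=75.0, radius_pc=3.0, mass_msun=2e5)
print(f"alpha (typical gas speeds)   = {slow:.2f}")
print(f"alpha (10x faster gas)       = {fast:.1f}")
```

Because alpha scales as sigma squared, gas moving 10 times faster pushes the virial parameter up by a factor of 100, which is why even a very dense cloud can sit on the edge of being unbound.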

The CARMA data revealed yet another surprise: the cloud is full of silicon monoxide (SiO), which is only present in clouds where streaming gas collides with and smashes apart dust grains, releasing the molecule. Typically, clouds only contain a smattering of the compound. It is usually observed when gas flowing out from young stars plows back into the cloud from which the stars were born. But the extensive amount of SiO in the galactic-center cloud suggests that it may consist of two colliding clouds, whose impact sends shockwaves throughout the galactic-center cloud. "To see such shocks on such large scales is very surprising," Pillai says.

G0.253+0.016 may eventually be able to make stars, but to do so, the researchers say, it will need to settle down so that it can build dense cores, a process that could take several hundred thousand years. But during that time, the cloud will have traveled a great distance around the galactic center, and it may crash into other clouds or be yanked apart by the gravitational pull of the galactic center. In such a disruptive environment, the cloud may never give birth to stars.

The findings also further muddle another mystery of the galactic center: the presence of young star clusters. The Arches Cluster, for example, contains about 150 bright, massive, young stars, which only live for a few million years. Because that is too short an amount of time for the stars to have formed elsewhere and migrated to the galactic center, they must have formed at their current location. Astronomers thought this occurred in dense clouds like G0.253+0.016. If not there, then where do the clusters come from?

The astronomers' next step is to study similarly dense clouds around the galactic center. The team has just completed a new survey with the SMA and is continuing another with CARMA. This year, they will also use the Atacama Large Millimeter Array (ALMA) in Chile's Atacama Desert—the largest and most advanced millimeter telescope in the world—to continue their research program, which the ALMA proposal committee has rated a top priority for 2013.

The title of the Astrophysical Journal Letters paper is, "The galactic center cloud G0.253+0.016: a massive dense cloud with low star formation potential." This research was supported by the National Science Foundation.

Writer: Marcus Woo

Faulty Behavior

New earthquake fault models show that "stable" zones may contribute to the generation of massive earthquakes

PASADENA, Calif.—In an earthquake, ground motion is the result of waves emitted when the two sides of a fault move—or slip—rapidly past each other, with an average relative speed of about three feet per second. Not all fault segments move so quickly, however—some slip slowly, through a process called creep, and are considered to be "stable," or not capable of hosting rapid earthquake-producing slip.  One common hypothesis suggests that such creeping fault behavior is persistent over time, with currently stable segments acting as barriers to fast-slipping, shake-producing earthquake ruptures. But a new study by researchers at the California Institute of Technology (Caltech) and the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) shows that this might not be true.

"What we have found, based on laboratory data about rock behavior, is that such supposedly stable segments can behave differently when an earthquake rupture penetrates into them. Instead of arresting the rupture as expected, they can actually join in and hence make earthquakes much larger than anticipated," says Nadia Lapusta, professor of mechanical engineering and geophysics at Caltech and coauthor of the study, published January 9 in the journal Nature.

She and her coauthor, Hiroyuki Noda, a scientist at JAMSTEC and previously a postdoctoral scholar at Caltech, hypothesize that this is what occurred in the 2011 magnitude 9.0 Tohoku-Oki earthquake, which was unexpectedly large.

Fault slip, whether fast or slow, results from the interaction between the stresses acting on the fault and friction, or the fault's resistance to slip. Both the local stress and the resistance to slip depend on a number of factors such as the behavior of fluids permeating the rocks in the earth's crust. So, the research team formulated fault models that incorporate laboratory-based knowledge of complex friction laws and fluid behavior, and developed computational procedures that allow the scientists to numerically simulate how those model faults will behave under stress.

"The uniqueness of our approach is that we aim to reproduce the entire range of observed fault behaviors—earthquake nucleation, dynamic rupture, postseismic slip, interseismic deformation, patterns of large earthquakes—within the same physical model; other approaches typically focus only on some of these phenomena," says Lapusta.

In addition to reproducing a range of behaviors in one model, the team also assigned realistic fault properties to the model faults, based on previous laboratory experiments on rock materials from an actual fault zone—the site of the well-studied 1999 magnitude 7.6 Chi-Chi earthquake in Taiwan.

"In that experimental work, rock materials from boreholes cutting through two different parts of the fault were studied, and their properties were found to be conceptually different," says Lapusta. "One of them had so-called velocity-weakening friction properties, characteristic of earthquake-producing fault segments, and the other one had velocity-strengthening friction, the kind that tends to produce stable creeping behavior under tectonic loading. However, these 'stable' samples were found to be much more susceptible to dynamic weakening during rapid earthquake-type motions, due to shear heating."
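The terms "velocity-weakening" and "velocity-strengthening" come from the standard lab-derived rate-and-state friction framework (Dieterich-Ruina), in which the steady-state friction coefficient varies logarithmically with sliding speed. The sketch below shows that distinction with hypothetical parameter values; it is not the authors' full model, which also includes dynamic weakening from shear heating.

```python
import math

# Sketch of rate-and-state friction at steady sliding (hypothetical values).
# At steady sliding velocity V, the friction coefficient is:
#     mu_ss(V) = mu0 + (a - b) * ln(V / V0)
# (a - b) < 0: friction drops as slip speeds up (velocity-weakening,
#              earthquake-capable segments);
# (a - b) > 0: friction rises with speed (velocity-strengthening,
#              stable creeping segments).

def mu_steady_state(V, mu0=0.6, a_minus_b=-0.004, V0=1e-6):
    """Steady-state friction coefficient at sliding velocity V (m/s)."""
    return mu0 + a_minus_b * math.log(V / V0)

creep_speed = 1e-9    # slow tectonic creep, m/s
seismic_speed = 1.0   # roughly the ~3 ft/s coseismic slip rate, m/s

# A velocity-weakening segment gets weaker as slip accelerates...
weak_slow = mu_steady_state(creep_speed, a_minus_b=-0.004)
weak_fast = mu_steady_state(seismic_speed, a_minus_b=-0.004)
# ...while a velocity-strengthening segment resists faster slip.
strong_fast = mu_steady_state(seismic_speed, a_minus_b=+0.004)

print(f"weakening segment, creep -> seismic: {weak_slow:.3f} -> {weak_fast:.3f}")
print(f"strengthening segment at seismic speed: {strong_fast:.3f}")
```

The key point of the experiments described above is that a segment can look velocity-strengthening (stable) at slow speeds yet weaken dramatically once rapid, heat-generating slip invades it, a behavior this simple steady-state form does not capture on its own.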

Lapusta and Noda used their modeling techniques to explore the consequences of having two fault segments with such lab-determined fault-property combinations. They found that the ostensibly stable area would indeed occasionally creep, and often stop seismic events, but not always. From time to time, dynamic rupture would penetrate that area in just the right way to activate dynamic weakening, resulting in massive slip. They believe that this is what happened in the Chi-Chi earthquake; indeed, the quake's largest slip occurred in what was believed to be the "stable" zone.

"We find that the model qualitatively reproduces the behavior of the 2011 magnitude 9.0 Tohoku-Oki earthquake as well, with the largest slip occurring in a place that may have been creeping before the event," says Lapusta. "All of this suggests that the underlying physical model, although based on lab measurements from a different fault, may be qualitatively valid for the area of the great Tohoku-Oki earthquake, giving us a glimpse into the mechanics and physics of that extraordinary event."

If creeping segments can participate in large earthquakes, it would mean that much larger events than seismologists currently anticipate in many areas of the world are possible. That means, Lapusta says, that the seismic hazard in those areas may need to be reevaluated.

For example, a creeping segment separates the southern and northern parts of California's San Andreas Fault. Seismic hazard assessments assume that this segment would stop an earthquake from propagating from one region to the other, limiting the scope of a San Andreas quake. However, the team's findings imply that a much larger event may be possible than is now anticipated—one that might involve both the Los Angeles and San Francisco metropolitan areas.

"Lapusta and Noda's realistic earthquake fault models are critical to our understanding of earthquakes—knowledge that is essential to reducing the potential catastrophic consequences of seismic hazards," says Ares Rosakis, chair of Caltech's division of engineering and applied science. "This work beautifully illustrates the way that fundamental, interdisciplinary research in the mechanics of seismology at Caltech is having a positive impact on society."

Now that they've been proven to qualitatively reproduce the behavior of the Tohoku-Oki quake, the models may be useful for exploring future earthquake scenarios in a given region, "including extreme events," says Lapusta. Such realistic fault models, she adds, may also be used to study how earthquakes may be affected by additional factors such as man-made disturbances resulting from geothermal energy harvesting and CO2 sequestration. "We plan to further develop the modeling to incorporate realistic fault geometries of specific well-instrumented regions, like Southern California and Japan, to better understand their seismic hazard."

"Creeping fault segments can turn from stable to destructive due to dynamic weakening" appears in the January 9 issue of the journal Nature. Funding for this research was provided by the National Science Foundation; the Southern California Earthquake Center; the Gordon and Betty Moore Foundation; and the Ministry of Education, Culture, Sports, Science and Technology in Japan.

Writer: Katie Neith
