Tipping the Balance of Behavior

Humans with autism often show a reduced frequency of social interactions and an increased tendency to engage in repetitive solitary behaviors. Autism has also been linked to dysfunction of the amygdala, a brain structure involved in processing emotions. Now Caltech researchers have discovered antagonistic neuron populations in the mouse amygdala that control whether the animal engages in social behaviors or asocial repetitive self-grooming. This discovery may have implications for understanding neural circuit dysfunctions that underlie autism in humans.

The discovery of this "seesaw circuit" was led by postdoctoral scholar Weizhe Hong in the laboratory of David J. Anderson, the Seymour Benzer Professor of Biology at Caltech and an investigator with the Howard Hughes Medical Institute. The work was published online on September 11 in the journal Cell.

"We know that there is some hierarchy of behaviors, and they interact with each other because the animal can't exhibit both social and asocial behaviors at the same time. In this study, we wanted to figure out how the brain does that," Anderson says.

Anderson and his colleagues discovered two intermingled but distinct populations of neurons in the amygdala, a part of the brain that is involved in innate social behaviors. One population promotes social behaviors, such as mating, fighting, or social grooming, while the other population controls repetitive self-grooming—an asocial behavior.

Interestingly, these two populations are distinguished according to the most fundamental subdivision of neuron subtypes in the brain: the "social neurons" are inhibitory neurons (which release the neurotransmitter GABA, or gamma-aminobutyric acid), while the "self-grooming neurons" are excitatory neurons (which release the neurotransmitter glutamate, an amino acid).

To study the relationship between these two cell types and their associated behaviors, the researchers used a technique called optogenetics. In optogenetics, neurons are genetically altered so that they express light-sensitive proteins from microbial organisms. Then, by shining a light on these modified neurons via a tiny fiber optic cable inserted into the brain, researchers can control the activity of the cells as well as their associated behaviors.

Using this optogenetic approach, Anderson's team was able to selectively switch on the neurons associated with social behaviors and those linked with asocial behaviors.

With the social neurons, the behavior that was elicited depended upon the intensity of the light signal. That is, when high-intensity light was used, the mice became aggressive in the presence of an intruder mouse. When lower-intensity light was used, the mice no longer attacked, although they were still socially engaged with the intruder—either initiating mating behavior or attempting to engage in social grooming.

When the neurons associated with asocial behavior were turned on, the mouse began self-grooming behaviors such as paw licking and face grooming while completely ignoring all intruders. The self-grooming behavior was repetitive and lasted for minutes even after the light was turned off.

The researchers could also use the light-activated neurons to stop the mice from engaging in particular behaviors. For example, if a lone mouse began spontaneously self-grooming, the researchers could halt this behavior through the optogenetic activation of the social neurons. Once the light was turned off and the activation stopped, the mouse would return to its self-grooming behavior.

Surprisingly, these two groups of neurons appear to interfere with each other's function: the activation of social neurons inhibits self-grooming behavior, while the activation of self-grooming neurons inhibits social behavior. Thus these two groups of neurons seem to function like a seesaw, one that controls whether mice interact with others or instead focus on themselves. It was completely unexpected that the two groups of neurons could be distinguished by whether they were excitatory or inhibitory. "If there was ever an experiment that 'carves nature at its joints,'" says Anderson, "this is it."
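The push-pull arrangement Anderson describes resembles a classic mutual-inhibition motif from computational neuroscience. The toy model below is a conceptual sketch only, not the circuit model from the paper; the units, parameters, and drives are all invented for illustration. It shows how two populations that suppress each other settle into a winner-take-all state, like a seesaw.

```python
# Toy mutual-inhibition ("seesaw") model: two rate units, each driven by its
# own input and inhibited by the other's activity. Purely illustrative.

def seesaw(drive_social, drive_groom, w_inhib=2.0, steps=200, dt=0.05):
    """Simulate two mutually inhibitory units; return their final activities."""
    social, groom = 0.1, 0.1
    for _ in range(steps):
        # Each unit relaxes toward its input minus inhibition from the other.
        d_social = -social + max(0.0, drive_social - w_inhib * groom)
        d_groom = -groom + max(0.0, drive_groom - w_inhib * social)
        social += dt * d_social
        groom += dt * d_groom
    return social, groom

social, groom = seesaw(drive_social=1.0, drive_groom=0.4)
print(social > groom)  # → True: the stronger drive wins, the other is silenced
```

Swapping the two drive values tips the seesaw the other way: the grooming unit wins and the social unit is suppressed, mirroring how activating one population inhibits the other behavior.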

This seesaw circuit, Anderson and his colleagues say, may have some relevance to human behavioral disorders such as autism.

"In autism," Anderson says, "there is a decrease in social interactions, and there is often an increase in repetitive, sometimes asocial or self-oriented, behaviors"—a phenomenon known as perseveration. "Here, by stimulating a particular set of neurons, we are both inhibiting social interactions and promoting these perseverative, persistent behaviors."

Studies from other laboratories have shown that mice with disruptions in genes implicated in autism display a similar decrease in social interaction and increase in repetitive self-grooming, Anderson says. The current study helps to provide a needed link between gene activity, brain activity, and social behaviors, "and if you don't understand the circuitry, you are never going to understand how the gene mutation affects the behavior." Going forward, he says, such a complete understanding will be necessary for the development of future therapies.

But could this concept ever actually be used to modify a human behavior?

"All of this is very far away, but if you found the right population of neurons, it might be possible to override the genetic component of a behavioral disorder like autism, by just changing the activity of the circuits—tipping the balance of the seesaw in the other direction," he says.

The work was funded by the Simons Foundation, the National Institutes of Health, and the Howard Hughes Medical Institute. Caltech coauthors on the paper include Hong, the lead author, and graduate student Dong-Wook Kim.


Textbook Theory Behind Volcanoes May Be Wrong

In the typical textbook picture, volcanoes, such as those that are forming the Hawaiian Islands, erupt when magma gushes out as narrow jets from deep inside Earth. But that picture is wrong, according to a new study from researchers at Caltech and the University of Miami in Florida.

New seismology data are now confirming that such narrow jets don't actually exist, says Don Anderson, the Eleanor and John R. McMillian Professor of Geophysics, Emeritus, at Caltech. In fact, he adds, basic physics doesn't support the presence of these jets, called mantle plumes, and the new results corroborate those fundamental ideas.

"Mantle plumes have never had a sound physical or logical basis," Anderson says. "They are akin to Rudyard Kipling's 'Just So Stories' about how giraffes got their long necks."

Anderson and James Natland, a professor emeritus of marine geology and geophysics at the University of Miami, describe their analysis online in the September 8 issue of the Proceedings of the National Academy of Sciences.

According to current mantle-plume theory, Anderson explains, heat from Earth's core somehow generates narrow jets of hot magma that gush through the mantle and to the surface. The jets act as pipes that transfer heat from the core, and how exactly they're created isn't clear, he says. But they have been assumed to exist, originating near where the Earth's core meets the mantle, almost 3,000 kilometers underground—nearly halfway to the planet's center. The jets are theorized to be no more than about 300 kilometers wide, and when they reach the surface, they produce hot spots.  

While the upper mantle is a sort of fluid sludge, Earth's outermost layer is rigid rock, broken up into plates that float on the magma-bearing layers beneath. Magma from the mantle below the plates bursts through a plate to create volcanoes. As the plates drift across the hot spots, a chain of volcanoes forms—such as the island chains of Hawaii and Samoa.

"Much of solid-Earth science for the past 20 years—and large amounts of money—have been spent looking for elusive narrow mantle plumes that wind their way upward through the mantle," Anderson says.

To look for the hypothetical plumes, researchers analyze global seismic activity. Everything from big quakes to tiny tremors sends seismic waves echoing through Earth's interior. The type of material that the waves pass through influences the properties of those waves, such as their speeds. By measuring those waves using hundreds of seismic stations installed on the surface, near places such as Hawaii, Iceland, and Yellowstone National Park, researchers can deduce whether there are narrow mantle plumes or whether volcanoes are simply created from magma that's absorbed in the sponge-like shallower mantle.
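The core of that inference can be illustrated with a little arithmetic. The path length, travel time, and reference speed below are invented for illustration and are not from the study: a seismic wave crossing a known path at a measured travel time yields an average wave speed, and a speed below the reference value hints at anomalously warm, slow material somewhere along the path.

```python
# Toy illustration of the travel-time logic behind seismic imaging.
# A delay relative to a reference velocity model implies slower (e.g., hotter)
# material along the wave's path. All numbers are hypothetical.

def average_velocity(path_length_km, travel_time_s):
    """Average wave speed implied by one source-to-station measurement."""
    return path_length_km / travel_time_s

reference_v = 8.0                             # km/s, assumed background speed
observed_v = average_velocity(1600.0, 205.0)  # a slightly "late" arrival

# Fractional slowdown relative to the reference model.
slowdown = (reference_v - observed_v) / reference_v
print(round(observed_v, 2), round(100 * slowdown, 1))  # → 7.8 2.4
```

A single measurement only says something is slow somewhere on the path; locating and sizing the anomaly requires many crossing paths from hundreds of stations, which is what the denser station coverage mentioned above makes possible.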

No one has been able to detect the predicted narrow plumes, although the evidence has not been conclusive. The jets could have simply been too thin to be seen, Anderson says. Very broad features beneath the surface have been interpreted as plumes or super-plumes, but, still, they're far too wide to be considered narrow jets.

But now, thanks in part to more seismic stations spaced closer together and improved theory, analysis of the planet's seismology is good enough to confirm that there are no narrow mantle plumes, Anderson and Natland say. Instead, data reveal that there are large, slow, upward-moving chunks of mantle a thousand kilometers wide.

In the mantle-plume theory, Anderson explains, the heat that is transferred upward via jets is balanced by the slower downward motion of cooled, broad, uniform chunks of mantle. The behavior is similar to that of a lava lamp, in which blobs of wax are heated from below and then rise before cooling and falling. But a fundamental problem with this picture is that lava lamps require electricity, he says, and that is an outside energy source that an isolated planet like Earth does not have.  

The new measurements suggest that what is really happening is just the opposite: Instead of narrow jets, there are broad upwellings, which are balanced by narrow channels of sinking material called slabs. What is driving this motion is not heat from the core, but cooling at Earth's surface. In fact, Anderson says, the behavior is the regular mantle convection first proposed more than a century ago by Lord Kelvin. When material in the planet's crust cools, it sinks, displacing material deeper in the mantle and forcing it upward.

"What's new is incredibly simple: upwellings in the mantle are thousands of kilometers across," Anderson says. The formation of volcanoes then follows from plate tectonics—the theory of how Earth's plates move and behave. Magma, which is less dense than the surrounding mantle, rises until it reaches the bottom of the plates or fissures that run through them. Stresses in the plates, cracks, and other tectonic forces can squeeze the magma out, much as water is squeezed out of a sponge. That magma then erupts at the surface as volcanoes. The magma comes from within the upper 200 kilometers of the mantle and not thousands of kilometers deep, as the mantle-plume theory suggests.

"This is a simple demonstration that volcanoes are the result of normal broad-scale convection and plate tectonics," Anderson says. He calls this theory "top-down tectonics," based on Kelvin's initial principles of mantle convection. In this picture, the engine behind Earth's interior processes is not heat from the core but cooling at the planet's surface. This cooling and plate tectonics together drive mantle convection, the cooling of the core, and Earth's magnetic field. Volcanoes and cracks in the plate are simply side effects.

The results also have an important consequence for rock compositions—notably the ratios of certain isotopes, Natland says. According to the mantle-plume idea, the measured compositions derive from the mixing of material from reservoirs separated by thousands of kilometers in the upper and lower mantle. But if there are no mantle plumes, then all of that mixing must have happened within the upwellings and nearby mantle in Earth's top 1,000 kilometers.

The paper is titled "Mantle updrafts and mechanisms of oceanic volcanism."


Seeing Protein Synthesis in the Field

Caltech researchers have developed a novel way to visualize proteins generated by microorganisms in their natural environment—including the murky waters of Caltech's lily pond, as in this image created by Professor of Geobiology Victoria Orphan and her colleagues. The method could give scientists insights into how uncultured microbes (organisms that may not easily be grown in the lab) react and adapt to environmental stimuli over space and time.

The visualization technique, dubbed BONCAT (for "bioorthogonal non-canonical amino-acid tagging"), was developed by David Tirrell, Caltech's Ross McCollum–William H. Corcoran Professor and professor of chemistry and chemical engineering. BONCAT uses "non-canonical" amino acids—synthetic molecules that do not normally occur in proteins found in nature and that carry particular chemical tags that can attach (or "click") onto a fluorescent dye. When these artificial amino acids are incubated with environmental samples, like lily-pond water, they are taken up by microorganisms and incorporated into newly formed proteins. Adding the fluorescent dye to the mix allows these proteins to be visualized within the cell.

For example, in the image, the entire microbial community in the pond water is stained blue with a DNA dye; freshwater gammaproteobacteria are labeled with a fluorescently tagged short-chain ribosomal RNA probe, in red; and newly created proteins are dyed green by BONCAT. The cells colored green and orange in the composite image, then, show those bacteria—gammaproteobacteria and other rod-shaped cells—that are actively making proteins.

"You could apply BONCAT to almost any type of sample," Orphan says. "When you have an environmental sample, you don't know which microorganisms are active. So, assume you're interested in looking at organisms that respond to methane. You could take a sample, provide methane, add the synthetic amino acid, and ask which cells over time showed activity—made new proteins—in the presence of methane relative to samples without methane. Then you can start to sort those organisms out, and possibly use this to determine protein turnover times. These questions are not typically tractable with uncultured organisms in the environment." Orphan's lab is also now using BONCAT on samples of deep-sea sediment in which mixed groups of bacteria and archaea catalyze the anaerobic oxidation of methane.

Why sample the Caltech lily pond? Roland Hatzenpichler, a postdoctoral scholar in Orphan's lab, explains: "When I started applying BONCAT on environmental samples, I wanted to try this new approach on samples that are both interesting from a microbiological standpoint, as well as easily accessible. Samples from the lily pond fit those criteria." Hatzenpichler is lead author of a study describing BONCAT that appeared as the cover story of the August issue of the journal Environmental Microbiology.

The work is supported by the Gordon and Betty Moore Foundation Marine Microbiology Initiative.


Programmed to Fold: RNA Origami

Researchers from Aarhus University in Denmark and Caltech have developed a new method for organizing molecules on the nanoscale. Inspired by techniques used for folding DNA origami—first invented by Paul Rothemund, a senior research associate in computation and neural systems in the Division of Engineering and Applied Science at Caltech—the team, which includes Rothemund, has fabricated complicated shapes from DNA's close chemical cousin, RNA.

Unlike DNA origami, whose components are chemically synthesized and then folded in an artificial heating and cooling process, RNA origami are synthesized enzymatically and fold up as they are being synthesized, which takes place under more natural conditions compatible with living cells. These features of RNA origami may allow designer RNA structures to be grown within living cells, where they might be used to organize cellular enzymes into biochemical factories.

"The parts for a DNA origami cannot easily be written into the genome of an organism. An RNA origami, on the other hand, can be represented as a DNA gene, which in cells is transcribed into RNA by a protein machine called RNA polymerase," explains Rothemund.

So far, the researchers have demonstrated their method by designing RNA molecules that fold into rectangles and then further assemble themselves into larger honeycomb patterns. This approach was taken to make the shapes recognizable using an atomic force microscope, but many other shapes should be realizable.

A paper describing the research appears in the August 15 issue of the journal Science.

"What is unique about the method is that the folding recipe is encoded into the molecule itself, through its sequence," explains first author Cody Geary, a postdoctoral scholar at Aarhus University.

In other words, the sequence of the RNAs defines both the final shape and the order in which different parts of the shape fold. The particular RNA sequences that were folded in the experiment were designed using software called NUPACK, created in the laboratory of Caltech professor Niles Pierce. Both the Rothemund and Pierce labs are funded by a National Science Foundation Molecular Programming Project (MPP) Expeditions in Computing grant.
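As a rough illustration of how a sequence can encode its own fold, consider the simplest RNA structural element, a hairpin: it forms when the bases at one end of a strand are reverse-complementary to the bases at the other end. The toy check below is only a sketch of that base-pairing idea; it is not NUPACK's algorithm or the authors' design method, and the sequence is made up.

```python
# Toy base-pairing check: does a short RNA strand encode a hairpin?
# Watson-Crick pairs for RNA (ignoring wobble pairs for simplicity).
PAIRS = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Reverse-complement an RNA sequence."""
    return "".join(PAIRS[base] for base in reversed(seq))

def forms_hairpin(seq, stem_len):
    """True if the first stem_len bases can pair with the last stem_len bases."""
    return seq[:stem_len] == reverse_complement(seq[-stem_len:])

# A 5-base stem (GGGCC...GGCCC) closing a 4-base UUUU loop.
print(forms_hairpin("GGGCCUUUUGGCCC", 5))  # → True
```

Real design tools like NUPACK go much further, searching for sequences whose thermodynamically favored structure matches an entire target shape, but the principle is the same: the fold is written into the order of the bases.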

"Our latest research is an excellent example of how tools developed by one part of the MPP are being used by another," says Rothemund.

"RNA has a richer structural and functional repertoire than DNA, and so I am especially interested in how complex biological motifs with special 3-D geometries or protein-binding regions can be added to the basic architecture of RNA origami," says Geary, who completed his BS in chemistry at Caltech in 2003.

The project began with an extended visit by Geary and corresponding author Ebbe Andersen, also from Aarhus University, to Rothemund's Caltech lab.

"RNA origami is still in its infancy," says Rothemund. "Nevertheless, I believe that RNA origami, because of their potential to be manufactured by cells, and because of the extra functionality possible with RNA, will have at least as big an impact as DNA origami."

Rothemund (BS '94) reported the original method for DNA origami in 2006 in the journal Nature. Since then, the work has been cited over 2,000 times and DNA origami have been made in over 50 labs worldwide for potential applications such as drug delivery vehicles and molecular computing.

"The payoff is that unlike DNA origami, which are expensive and have to be made outside of cells, RNA origami should be able to be grown cheaply in large quantities, simply by growing bacteria with genes for them," he adds. "Genes and bacteria cost essentially nothing to share, and so RNA origami will be easily exchanged between scientists."


Katie Neith

Study of Aerosols Stands to Improve Climate Models

Aerosols, tiny particles in the atmosphere, play a significant role in Earth's climate, scattering and absorbing incoming sunlight and affecting the formation and properties of clouds. Currently, the effect that these aerosols have on clouds represents the largest uncertainty among all influences on climate change.

But now researchers from Caltech and the Jet Propulsion Laboratory have provided a global observational study of the effect that changes in aerosol levels have on low-level marine clouds—the clouds that have the largest impact on the amount of incoming sunlight that Earth reflects back into space. The findings appear in the advance online version of the journal Nature Geoscience.

Changes in aerosol levels have two main effects—they alter the amount of clouds in the atmosphere and they change the internal properties of those clouds. Using measurements from several of NASA's Earth-monitoring satellites from August 2006 through April 2011, the researchers quantified these two effects for the first time, drawing on 7.3 million individual data points.

"If you combine these two effects, you get an aerosol influence almost twice that estimated in the latest report from the Intergovernmental Panel on Climate Change," says John Seinfeld, the Louis E. Nohl Professor and professor of chemical engineering at Caltech. "These results offer unique guidance on how warm cloud processes should be incorporated in climate models with changing aerosol levels."

The lead author of the paper, "Satellite-based estimate of global aerosol-cloud radiative forcing by marine warm clouds," is Yi-Chun Chen (Ph.D. '13), a NASA postdoctoral fellow at JPL. Additional coauthors are Matthew W. Christensen of JPL and Colorado State University and Graeme L. Stephens, director of the Center for Climate Sciences at JPL. The work was supported by funding from NASA and the Office of Naval Research.

Kimm Fesenmaier

Looking Forward to 2020 . . . on Mars

A Q&A With Project Scientist Ken Farley

While the Curiosity rover continues to interrogate Gale Crater on Mars, planning is well under way for its successor—another rover that is currently referred to as Mars 2020. The new robotic explorer, scheduled to launch in 2020, will use much of the same technology (even some of the spare parts Curiosity left behind on Earth) to get to the Red Planet. Once there, it will pursue a new set of scientific objectives including the careful collection and storage (referred to as "caching") of compelling samples that might one day be returned to Earth by a future mission. Today, NASA announced the selection of seven scientific instruments that Mars 2020 will carry with it to Mars.

Ken Farley, Caltech's W.M. Keck Foundation Professor of Geochemistry and chair of the Division of Geological and Planetary Sciences, is serving as project scientist for Mars 2020. We recently sat down with him to talk about the mission and his new role.


Congratulations on being selected project scientist for this exciting mission. For those of us who do not know exactly what a project scientist does, can you give us a little overview of the job?

Sure. Conveniently, NASA has a definition, which says that the project scientist is responsible for the overall scientific success of the mission. That's a pretty concise explanation, but it encompasses a lot. My main duty thus far has been helping to define the science needs for equipment that we are going to send to Mars. So while we haven't actually done any science yet, we have had to make a lot of design decisions that are related to the science.

The easiest place to illustrate this is in the discussion of what is necessary, from the science point of view, in terms of the samples that we will cache. We have to consider things like how much mass we need to bring back, what kind of magnetic fields and temperatures the samples are going to be exposed to, and how much contamination of different chemical constituents we can allow. Every one of those questions drives a design decision in how you build the drilling system and the caching system. And if you get those wrong, there's nothing you can do. So there's a lot of thought that has to be put into that, and I convey a lot of that information to the engineers.

Now that we have a science team, I will be helping to facilitate all of its investigations and helping the members to work as a team. MSL [the Mars Science Laboratory, Curiosity's mission] is demonstrating how you have to operate when you have a complex tool (a rover) and a bunch of sensors, and every day you have to figure out what you're going to do to further science. The team has to pull together, pool all of its information, and come up with a plan, so an important part of my job will be figuring out how to manage the team dynamics to keep everybody moving forward and not fragmenting.


What aspects of the job were particularly appealing to you?

One of the parts of being a division chair that I have really enjoyed is being engaged with something that's bigger than my own research. And there's definitely a lot of that on 2020. It's a huge undertaking. There are not many science projects of this scale to be associated closely with, so this just seemed like a really good opportunity.

The kinds of questions that 2020 is going after—they're really big questions. You could never answer them on your own. The key objective is about life—is there or was there ever life on Mars, and more broadly what does its presence or absence mean about the frequency and evolution of life within the universe? There's no way you could answer these questions on Earth. The simple reason for that is that Mars is covered by rocks that are of the era in which, at least on our planet, we believe life was evolving. There are almost no rocks left of that age on the earth, and the ones that are left have been really badly beaten up. So Mars is a place where you really stand a chance of answering these questions in a way that you probably can't anywhere else.

It's not the kind of science I'm usually associated with, but the mission is trying to address truly profound scientific questions.


As you said, space has not been the focus of your research for most of your career. Can you talk a bit about how a terrestrial geochemist like yourself wound up in this role on a Mars mission?

Several years ago, I participated in a workshop about quantifying martian stratigraphy, which was hosted by the Keck Institute for Space Studies [KISS]. One of the topics that was discussed was geochronology—the dating of rocks and other materials—on other planetary bodies, like Mars. This is important for establishing the history of a planet and is particularly challenging because it requires such exacting measurements. After interacting with some people who are now my JPL collaborators at the workshop, it seemed like we might be able to do something special that would help solve this problem. And we got support from KISS to do a follow-on study.

As I was getting deeper and deeper into thinking about how we could do this on Mars, John Grotzinger (the Fletcher Jones Professor of Geology at Caltech and project scientist for MSL) was conducting the landing-site workshops for MSL. He would say things like, "Oh, it would be really great if we could date this." And we'd agree. Then there was a call for participating scientists on MSL. I had no background whatsoever in this, but I knew there was a mass spectrometer on Curiosity. That's one of the analytical instruments we need to make these dating measurements because it allows us to determine the relative abundances of various isotopes in a sample. Since those isotopes are produced at known rates, their abundances tell us something about the age of the sample. So I wrote a proposal basically saying let's see if we can make Curiosity's mass spectrometer work for this purpose. And it did.
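The dating logic Farley describes can be written down in a few lines: a radioactive parent isotope decays into a daughter isotope at a known rate, so the measured daughter-to-parent ratio fixes the age via t = ln(1 + D/P)/λ, where λ is the decay constant. The sketch below uses the potassium-argon system relevant to Curiosity's measurement, but the ratio plugged in is illustrative, not actual mission data.

```python
# Closed-system radiometric age from a measured daughter/parent isotope ratio.
# t = ln(1 + D/P) / lambda, with lambda = ln(2) / half-life.
import math

def radiometric_age(daughter_parent_ratio, half_life_years):
    """Age in years implied by a daughter/parent ratio and a half-life."""
    decay_const = math.log(2) / half_life_years
    return math.log(1 + daughter_parent_ratio) / decay_const

# Potassium-40 decaying to argon-40; half-life about 1.25 billion years.
# A ratio of 7 means the parent has halved three times (1 -> 1/2 -> 1/4 -> 1/8).
age = radiometric_age(daughter_parent_ratio=7.0, half_life_years=1.25e9)
print(age / 1e9)  # age in billions of years, here three half-lives (~3.75)
```

In practice the mass spectrometer's job is measuring that abundance ratio precisely in a rock sample; the arithmetic that converts it to an age is the easy part.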


What do you think led to your selection as project scientist?

Although I don't have a long track record in studying Mars, this mission is possibly the first step in bringing samples back to Earth. In order to do that, you have to answer a lot of questions related to geochemistry, which is my specialty. The geochemistry community is not ordinarily thinking about rocks coming back from Mars. I happen to have enough crossover between what I know about Mars from the work I just described and my background from working in geochemistry labs, especially those working with the type of very small samples we might get back from Mars, to be a good fit.


Given Curiosity's success on Mars, why is it important and exciting for us to be sending another rover to the Red Planet?

One thing to realize is that the surface of Mars is more or less equivalent in size to the entire continental surface area of the earth, and we've been to just a few points. It's naturally tempting to look at the few places we have been on Mars and draw grand conclusions from them, but you could imagine if you landed in the middle of the Sahara Desert and studied the earth, you would come up with different answers than if you landed in the Amazon, for example. So that's part of it.

But the big thing that distinguishes Mars 2020 is the fact that we are preparing this cache, which is the first step in a process that will hopefully bring samples back to Earth some day. It's very clear that from the science community's point of view, this is a critical motivation for this mission.


How has the experience been working on the mission thus far?

I enjoy it very much. It's extremely different to go from a lab group of two or three people to a project that, at the end of the day, is going to have spent $1.5 billion over the next seven or eight years. It's a completely different scale of operation.

I find it really fascinating to see how everything works. I've spent my entire career among scientists. Suddenly transitioning and working with engineers is interesting because their approach and style is completely different. But they're all extremely good at what they do.

It's a lot of fun to work with these people and to face completely new and unexpected challenges. You never know what new thing is going to pop up.

Kimm Fesenmaier

Biology Made Simpler With "Clear" Tissues

Our knowledge of biology—and of much of science—is limited by our ability to actually see things. Researchers who study developmental problems and disease, in particular, are often limited by their inability to look inside an organism to figure out exactly what went wrong and when.

Now, thanks to techniques developed at Caltech, scientists can see through tissues, organs, and even an entire body. The techniques offer new insight into the cell-by-cell makeup of organisms—and the promise of novel diagnostic medical applications.

"Large volumes of tissue are not optically transparent—you can't see through them," says Viviana Gradinaru (BS '05), an assistant professor of biology at Caltech and the principal investigator whose team has developed the new techniques, which are explained in a paper appearing in the journal Cell. Lipids throughout cells provide structural support, but they also prevent light from passing through the cells. "So, if we need to see individual cells within a large volume of tissue"—within a mouse kidney, for example, or a human tumor biopsy—"we have to slice the tissue very thin, separately image each slice with a microscope, and put all of the images back together with a computer. It's a very time-consuming process and it is error prone, especially if you look to map long axons or sparse cell populations such as stem cells or tumor cells," she says.

The researchers came up with a way to circumvent this long process by making an organism's entire body clear, so that it can be peered through—in 3-D—using standard optical methods such as confocal microscopy.

The new approach builds off a technique known as CLARITY that was previously developed by Gradinaru and her collaborators to create a transparent whole-brain specimen. With the CLARITY method, a rodent brain is infused with a solution of lipid-dissolving detergents and hydrogel—a water-based polymer gel that provides structural support—thus "clearing" the tissue but leaving its three-dimensional architecture intact for study.

The refined technique optimizes the CLARITY concept so that it can be used to clear other organs besides the brain, and even whole organisms. By making clever use of an organism's own network of blood vessels, Gradinaru and her colleagues—including scientific researcher Bin Yang and postdoctoral scholar Jennifer Treweek, coauthors on the paper—can quickly deliver the hydrogel and lipid-dissolving solution throughout the body.

Gradinaru and her colleagues have dubbed this new technique PARS, or perfusion-assisted agent release in situ.

Once an organ or whole body has been made transparent, standard microscopy techniques can be used to easily look through a thick mass of tissue to view single cells that are genetically marked with fluorescent proteins. Even without such genetically introduced fluorescent proteins, however, the PARS technique can be used to deliver stains and dyes to individual cell types of interest. When whole-body clearing is not necessary, the method works just as well on individual organs, using a technique called PACT, short for passive clarity technique.

To find out if stripping the lipids from cells also removes other potential molecules of interest—such as proteins, DNA, and RNA—Gradinaru and her team collaborated with Long Cai, an assistant professor of chemistry at Caltech, and his lab. The two groups found that strands of RNA are indeed still present and can be detected with single-molecule resolution in the cells of the transparent organisms.

The Cell paper focuses on the use of PACT and PARS as research tools for studying disease and development in research organisms. However, Gradinaru and her UCLA collaborator Rajan Kulkarni have already found a diagnostic medical application for the methods. Using the techniques on a biopsy from a human skin tumor, the researchers were able to view the distribution of individual tumor cells within a tissue mass. In the future, Gradinaru says, the methods could be used in the clinic for the rapid detection of cancer cells in biopsy samples.

The ability to make an entire organism transparent while retaining its structural and genetic integrity has broad-ranging applications, Gradinaru says. For example, the neurons of the peripheral nervous system could be mapped throughout a whole body, as could the distribution of viruses, such as HIV, in an animal model.

Gradinaru also leads Caltech's Beckman Institute BIONIC center for optogenetics and tissue clearing and plans to offer training sessions to researchers interested in learning how to use PACT and PARS in their own labs.

"I think these new techniques are very practical for many fields in biology," she says. "When you can just look through an organism for the exact cells or fine axons you want to see—without slicing and realigning individual sections—it frees up the time of the researcher. That means there is more time to answer the big questions, rather than spending time on menial jobs."


Future Electronics May Depend on Lasers, Not Quartz

Nearly all electronics require devices called oscillators that create precise frequencies—frequencies used to keep time in wristwatches or to transmit reliable signals to radios. For nearly 100 years, these oscillators have relied upon quartz crystals to provide a frequency reference, much like a tuning fork is used as a reference to tune a piano. However, future high-end navigation systems, radar systems, and possibly even tomorrow's consumer electronics will require references beyond the performance of quartz.

Now, researchers in the laboratory of Kerry Vahala, the Ted and Ginger Jenkins Professor of Information Science and Technology and Applied Physics at Caltech, have developed a method to stabilize microwave signals in the range of gigahertz, or billions of cycles per second—using a pair of laser beams as the reference, in lieu of a crystal.

Quartz crystals "tune" oscillators by vibrating at relatively low frequencies—those that fall at or below the range of megahertz, or millions of cycles per second, like radio waves. However, quartz crystals are so good at tuning these low frequencies that, years ago, researchers developed a technique called electrical frequency division, which converts higher-frequency microwave signals into lower-frequency signals that can then be stabilized with quartz.

The new technique, which Vahala and his colleagues have dubbed electro-optical frequency division, builds on the method of optical frequency division, developed at the National Institute of Standards and Technology more than a decade ago. "Our new method reverses the architecture used in standard crystal-stabilized microwave oscillators—the 'quartz' reference is replaced by optical signals much higher in frequency than the microwave signal to be stabilized," Vahala says.

Jiang Li—a Kavli Nanoscience Institute postdoctoral scholar at Caltech and one of two lead authors on the paper, along with graduate student Xu Yi—likens the method to a gear chain on a bicycle that translates pedaling motion from a small, fast-moving gear into the motion of a much larger wheel. "Electrical frequency dividers used widely in electronics can work at frequencies no higher than 50 to 100 GHz. Our new architecture is a hybrid electro-optical 'gear chain' that stabilizes a common microwave electrical oscillator with optical references at much higher frequencies in the range of terahertz or trillions of cycles per second," Li says.  
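The payoff of the gear-chain idea can be sketched with back-of-the-envelope arithmetic: dividing a reference down by an integer factor N reduces the reference's phase noise by 20·log10(N) decibels at the output. The specific frequencies below are assumptions chosen for illustration, not values from the paper.

```python
import math

# Illustrative arithmetic only; the frequencies are assumed for the
# example. Dividing a reference at f_ref down to f_out by a factor
# N = f_ref / f_out lowers the reference's phase noise contribution
# by 20*log10(N) decibels at the output frequency.
def division_ratio(f_ref_hz, f_out_hz):
    return f_ref_hz / f_out_hz

def phase_noise_reduction_db(n):
    return 20 * math.log10(n)

# Dividing a 1 THz optical beat note down to a 10 GHz microwave signal:
n = division_ratio(1e12, 10e9)       # N = 100
gain = phase_noise_reduction_db(n)   # 40 dB improvement
```

This is why starting from an optical reference in the terahertz range, rather than a quartz-range reference, is attractive: the larger the division ratio, the quieter the resulting microwave signal.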

The optical reference used by the researchers is a laser that, to the naked eye, looks like a tiny disk. At only 6 mm in diameter, the device is very small, making it particularly useful in compact photonics devices—electronic-like devices powered by photons instead of electrons, says Scott Diddams, physicist and project leader at the National Institute of Standards and Technology and a coauthor on the study.

"There are always tradeoffs between the highest performance, the smallest size, and the best ease of integration. But even in this first demonstration, these optical oscillators have many advantages; they are on par with, and in some cases even better than, what is available with widespread electronic technology," Vahala says.

The new technique is described in a paper that will be published in the journal Science on July 18. Other authors on this paper include Hansuek Lee, who is a visiting associate at Caltech. The work was sponsored by DARPA's ORCHID and PULSE programs; the Caltech Institute for Quantum Information and Matter (IQIM), an NSF Physics Frontiers Center with support from the Gordon and Betty Moore Foundation; and the Caltech Kavli Nanoscience Institute.


Corals Provide Clues for Climate Change Research

Just as growth rings can offer insight into climate changes occurring during the lifespan of a tree, corals have much to tell about changes in the ocean. At Caltech, climate scientists Jess F. Adkins and Nivedita Thiagarajan use manned submersibles, such as Alvin, operated by the Woods Hole Oceanographic Institution, to dive thousands of meters below the surface to collect these specimens—and to shed new light on the connection between variance in carbon dioxide (CO2) levels in the deep ocean and historical glacial cycles.

A paper describing the research appears in the July 3 issue of Nature.

It has long been known that ice sheets wax and wane as the concentration of CO2 decreases and increases in the atmosphere. Adkins and his team believe that the deep ocean—which stores 60 times more inorganic carbon than the atmosphere—must play a vital role in this variance.

To investigate this, the researchers analyzed the calcium carbonate skeletons of corals collected from deep in the North Atlantic Ocean. The corals grew between 11,000 and 18,000 years ago, building those skeletons from CO2 dissolved in the ocean.

"We used a new technique that has been developed at Caltech, called clumped isotope thermometry, to determine what the temperature of the ocean was in the location where the coral grew," says Thiagarajan, the Dreyfus Postdoctoral Scholar in Geochemistry at Caltech and lead author of the paper. "We also used radiocarbon dating and uranium-series dating to estimate the deep-ocean ventilation rate during this time period." 

The researchers found that the deep ocean began warming before the start of a rapid climate change event about 14,600 years ago, during which the last glacial period—the most recent time when ice sheets covered a large portion of Earth—was in the final stages of transitioning to the current interglacial period.

"We found that a warm-water-under-cold-water scenario developed around 800 years before the largest signal of warming in the Greenland ice cores, called the 'Bølling–Allerød,'" explains Adkins. "CO2 had already been rising in the atmosphere by this time, but we see the deep-ocean reorganization brought on by the potential energy release to be the pivot point for the system to switch from a glacial state, where the deep ocean can hold onto CO2, to an interglacial state, where it lets out CO2."

"Studying Earth's climate in the past helps us understand how different parts of the climate system interact with each other," says Thiagarajan. "Figuring out these underlying mechanisms will help us predict how climate will change in the future." 

Additional authors on the Nature paper, "Abrupt pre-Bølling–Allerød warming and circulation changes in the deep ocean," are geochemist John M. Eiler and graduate student Adam V. Subhas from Caltech, and John R. Southon from UC Irvine. 

Katie Neith

Neuroeconomists Confirm Warren Buffett's Wisdom

Brain Research Suggests an Early Warning Signal Tips Off Smart Traders

Investment magnate Warren Buffett has famously suggested that investors should try to "be fearful when others are greedy and be greedy only when others are fearful."

That turns out to be excellent advice, according to the results of a new study by researchers at Caltech and Virginia Tech that looked at the brain activity and behavior of people trading in experimental markets where price bubbles formed. In such markets, where price far outpaces actual value, it appears that wise traders receive an early warning signal from their brains—a warning that makes them feel uncomfortable and urges them to sell, sell, sell.

"Seeing what's going on in people's brains when they are trading suggests that Buffett was right on target," says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at Caltech.  

That is because in their experimental markets, Camerer and his colleagues found two distinct types of activity in the brains of participants—one that made a small fraction of participants nervous and prompted them to sell their experimental shares even as prices were on the rise, and another that was much more common and made traders behave in a greedy way, buying aggressively during the bubble and even after the peak. The lucky few who received the early warning signal got out of the market early, ultimately causing the bubble to burst, and earned the most money. The others displayed what former Federal Reserve chairman Alan Greenspan called "irrational exuberance" and lost their proverbial shirts.

A paper about the experiment and the team's findings appears this week in the journal Proceedings of the National Academy of Sciences. Alec Smith, the lead author on the paper, is a visiting associate at Caltech. Additional coauthors are from the Virginia Tech Carilion Research Institute.

The researchers set up a simple experimental market in which they were able to control the fundamental, or actual, value of a traded risky asset. In each of 16 sessions, about 20 participants were told how an on-screen trading market worked and were given 100 units of experimental currency and six shares of the risky asset. Then, over the course of 50 trading periods, the traders indicated by pressing keyboard buttons whether they wanted to buy, sell, or hold shares at various prices.  

Given the way the experiment was set up, the fundamental price of the risky asset was 14 currency units. Yet in many sessions, the traded price rose well above that—sometimes three to five times as high—creating bubble markets that eventually crashed.

During the experiment, two or three additional subjects per session also participated in the market while having their brains scanned by a functional magnetic resonance imaging (fMRI) machine. In fMRI, blood flow is monitored and used as a proxy for brain activation. If a brain region shows a relatively high level of blood oxygenation during a task, that region is thought to be particularly active.

At the end of the experiment, the researchers first sought to understand the behavioral data—the choices the participants made and the resulting market activity—before analyzing the fMRI scans.

"The first thing we saw was that even in an environment where you don't have squawking heads and all kinds of other information being fed to people, you can get bubbles just through pricing dynamics that occur naturally," says Camerer. This finding is at odds with what some economists have held—that bubbles are rare or are caused by misinformation or hype.

Next, the researchers divided the participants into three categories based on their earnings during their 50 trading periods—low, medium, and high earners. They found that the low earners tended to be momentum buyers who started buying as prices went up and then kept buying even as prices tanked. The middle-of-the-road folks didn't take many risks at all and, as a result, neither made nor lost the most money. And the traders who earned the most bought early and sold when prices were on the rise.
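The earnings-based grouping described above can be sketched in a few lines of code; the trader IDs and earnings below are hypothetical stand-ins, not the study's data.

```python
# Minimal sketch of splitting traders into low/medium/high earners by
# earnings terciles. The earnings values here are hypothetical.
def split_into_terciles(earnings):
    """Return (low, medium, high) lists of trader IDs, poorest first."""
    ranked = sorted(earnings, key=earnings.get)  # IDs sorted by earnings
    n = len(ranked)
    return ranked[:n // 3], ranked[n // 3:2 * n // 3], ranked[2 * n // 3:]

earnings = {"t1": 80, "t2": 120, "t3": 95, "t4": 150, "t5": 60, "t6": 110}
low, medium, high = split_into_terciles(earnings)
# low -> ["t5", "t1"], medium -> ["t3", "t6"], high -> ["t2", "t4"]
```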

"The high-earning traders are the most interesting people to us," Camerer says. "Emotionally, they have to do something really hard: sell into a rising market. We thought that something must be going on in their brains that gives them an early warning signal."

To reveal what was actually occurring in the brains of the subjects—and the nature of that warning signal—Camerer and his colleagues analyzed the fMRI scans. Using these data, the researchers first looked for an area of the brain that was unusually active when the results screen appeared, telling participants their outcome for the previous trading period. It turned out that a region called the nucleus accumbens (NAcc) lit up at that time in all participants, showing more activity when shares were bought or sold. The NAcc is associated with reward processing—it lights up when people are given expected rewards such as money or juice or a smile, for example. So it was not particularly surprising to see that the NAcc was activated when traders found out how their gambles paid off.

What was surprising, though, was that low earners were very sensitive to activity in the NAcc: when they experienced the most activity in the NAcc, they bought a lot of the risky asset. "That is a correlation we can call irrational exuberance," Camerer says. "Exuberance is the brain signal, and the irrational part is buying so many shares. The people who make the most money have low sensitivity to the same brain signal. Even though they're having the same mental reaction, they're not translating it into buying as aggressively."
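One simple way to make the notion of "sensitivity" concrete is as a least-squares slope of shares bought against NAcc activity across trading periods. This is a sketch only, with hypothetical numbers; it is not the paper's actual analysis.

```python
# Sketch: a trader's "NAcc sensitivity" as the least-squares slope of
# shares bought against NAcc activity. All numbers are hypothetical.
def sensitivity(nacc_activity, shares_bought):
    n = len(nacc_activity)
    mean_x = sum(nacc_activity) / n
    mean_y = sum(shares_bought) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(nacc_activity, shares_bought))
    var = sum((x - mean_x) ** 2 for x in nacc_activity)
    return cov / var  # shares bought per unit of NAcc activity

# A high earner translates the same signal into less buying:
low_earner = sensitivity([0.2, 0.5, 0.9], [2, 5, 9])   # steep slope
high_earner = sensitivity([0.2, 0.5, 0.9], [2, 2, 3])  # shallow slope
```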

Returning to the question of the high earners and their early warning signal, the researchers hypothesized that a part of the brain called the insular cortex, or insula, might be serving as that bellwether. The insula was a good candidate because previous studies had linked it to financial uncertainty and risk aversion. It is also known to reflect negative emotions associated with bodily sensations such as being shocked or smelling something disgusting, or even with feelings of social discomfort like those that come with being treated unfairly or being excluded.

Looking at the brain data of the high earners, the researchers found that insula activity did indeed increase shortly before the traders switched from buying to selling. And again, Camerer notes, "The prices were still going up at that time, so they couldn't be making pessimistic predictions just based on the recent price trend. We think this is a real warning signal."

Meanwhile, in the low earners, insula activity actually decreased, perhaps allowing their irrational exuberance to continue unchecked.  

Read Montague, director of the Human Neuroimaging Laboratory at the Virginia Tech Carilion Research Institute and one of the paper's senior authors, emphasizes the importance of group dynamics, or group thinking, in the study. "Individual human brains are indeed powerful alone, but in groups we know they can build bridges, spacecraft, microscopes, and even economic systems," he says. "This is one of the next frontiers in neuroscience—understanding the social mind."

Additional coauthors on the paper, "Irrational exuberance and neural warning signals during endogenous experimental market bubbles," include Terry Lohrenz and Justin King of Virginia Tech Carilion Research Institute in Roanoke, Virginia. Montague is also a professor at the Wellcome Trust Centre for Neuroimaging at University College London. The work was supported by the National Science Foundation, the Betty and Gordon Moore Foundation, and the Lipper Family Foundation.

Kimm Fesenmaier