Getting To Know Super-Earths

"If you have a coin and flip it just once, what does that tell you about the odds of heads versus tails?" asks Heather Knutson, assistant professor of planetary science at Caltech. "It tells you almost nothing. It's the same with planetary systems," she says.

For as long as astronomers have been looking to the skies, we have had just one planetary system—our own—to study in depth. That means we have only gotten to know a handful of possible outcomes of the planet formation process, and we cannot say much about whether the features observed in our solar system are common or rare when compared to planetary systems orbiting other stars.

That is beginning to change. NASA's Kepler spacecraft, which launched on a planet-hunting mission in 2009, searched one small patch of the sky and identified more than 4,000 candidate exoplanets—worlds orbiting stars other than our own sun. It was the first survey to provide a definitive look at the relative frequency of planets as a function of size, allowing astronomers to ask: How common are gas giant planets, like Jupiter, compared to planets that look a lot more like Earth?

Kepler's results suggest that small planets are much more common than big ones. Interestingly, the most common planets are those that are just a bit larger than Earth but smaller than Neptune—the so-called super-Earths.

However, despite their abundance in our local corner of the galaxy, super-Earths have no counterpart in our own solar system. Our current observations tell us something about the sizes and orbits of these newly discovered worlds, but we have very little insight into their compositions.

"We are left with this situation where super-Earths appear to be the most common kind of exoplanet in the galaxy, but we don't know what they're made of," says Knutson.

There are a number of possibilities. A super-Earth could be just that: a bigger version of Earth—mostly rocky, with an atmosphere. Then again, it could be a mini-Neptune, with a large rock-ice core encapsulated in a thick envelope of hydrogen and helium. Or it could be a water world—a rocky core enveloped in a blanket of water and perhaps an atmosphere composed of steam (depending on the temperature of the planet).

"It's really interesting to think about these planets because they could have so many different compositions, and knowing their composition will tell us a lot about how planets form," Knutson says. For example, because planets in this size range acquire most of their mass by pulling in and incorporating solid material, water worlds initially must have formed far away from their parent stars, where temperatures were cold enough for water to freeze. Most of the super-Earths known today orbit very close to their host stars. If water-dominated super-Earths turn out to be common, it would indicate that most of these worlds did not form in their present locations but instead migrated in from more distant orbits.

In addition to thinking about how these worlds formed, Knutson and her students use space-based observatories such as the Hubble and Spitzer Space Telescopes to learn more about them. For example, the researchers analyze the starlight that filters through a planet's atmosphere as the planet passes in front of its star to learn about the composition of that atmosphere. Molecular species present in the atmosphere absorb light at particular wavelengths, so by using Hubble and Spitzer to view the planet at a number of different wavelengths, the researchers can determine which chemical compounds are present.
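
To make the geometry concrete, here is a minimal sketch in Python with assumed, illustrative numbers (a small red-dwarf host and a generic super-Earth, not fitted parameters from any of these studies): during transit the planet blocks a fraction of starlight equal to the square of the planet-to-star radius ratio, and at wavelengths where an atmospheric molecule absorbs, the planet's effective radius grows by a few scale heights, making the transit slightly deeper.

```python
# Transit depth with and without molecular absorption; all numbers are
# illustrative assumptions, not measurements from these studies.
R_SUN, R_EARTH = 6.957e8, 6.371e6       # meters

r_star = 0.21 * R_SUN                   # assumed small red dwarf
r_planet = 2.7 * R_EARTH                # assumed super-Earth radius
scale_height = 2.0e5                    # assumed atmospheric scale height, m

depth_out = (r_planet / r_star) ** 2
depth_in = ((r_planet + 2 * scale_height) / r_star) ** 2  # ~2 scale heights

print(f"depth outside absorption band: {depth_out:.5f}")
print(f"depth inside absorption band:  {depth_in:.5f}")
print(f"feature size: {(depth_in - depth_out) * 1e6:.0f} ppm")
```

A signal of a few hundred parts per million is near the edge of what Hubble and Spitzer can measure, which is part of why only planets orbiting bright, nearby stars are practical targets.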

To date, nearly two dozen planets have been characterized with this technique. These observations have shown that the enormous gas giant exoplanets known as hot Jupiters have water, carbon monoxide, hydrogen, helium—and potentially carbon dioxide and methane—in their atmospheres.

However, right now super-Earths are the hot topic. Unfortunately, although hundreds of super-Earths have been found, only a few are close enough and orbiting bright enough stars for astronomers to study in this way using currently available telescopes.

The first super-Earth that the astronomical community targeted for atmospheric studies was GJ 1214b, in the constellation Ophiuchus. Based on its average density (determined from its mass and radius), it was clear from the start that the planet was not entirely rocky. However, its density could be equally well matched by either a primarily water composition or a Neptune-like composition with a rocky core surrounded by a thick gas envelope. Information about the atmosphere could help astronomers determine which one it was: a mini-Neptune's atmosphere should contain lots of molecular hydrogen, while a water world's atmosphere should be water dominated.
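
The density argument is simple arithmetic. A quick check in Python, using approximate published values for GJ 1214b (about 6.5 Earth masses and 2.7 Earth radii; treat the numbers as illustrative):

```python
import math

M_EARTH, R_EARTH = 5.972e24, 6.371e6    # kg, m

mass = 6.5 * M_EARTH                    # approximate published mass
radius = 2.7 * R_EARTH                  # approximate published radius

density = mass / ((4.0 / 3.0) * math.pi * radius ** 3)
print(f"mean density: {density / 1000:.1f} g/cm^3")   # ~1.8
# Rock is roughly 3-5.5 g/cm^3 and water is 1.0, so a density near 1.8
# rules out an all-rock planet but fits either a water-rich world or a
# rocky core under a puffy hydrogen envelope.
```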

GJ 1214b has been a popular target for the Hubble Space Telescope since its discovery in 2009. Disappointingly, after a first Hubble campaign led by researchers at the Harvard-Smithsonian Center for Astrophysics, the spectrum came back featureless, with no detectable chemical signatures. After a second set of more sensitive observations led by researchers at the University of Chicago returned the same result, it became clear that a high cloud deck must be masking the signature of absorption from the planet's atmosphere.

"It's exciting to know that there are clouds on the planet, but the clouds are getting in the way of what we actually wanted to know, which is what is this super-Earth made of?" explains Knutson.

Now Knutson's team has studied a second super-Earth: HD 97658b, in the constellation Leo. They report their findings in the current issue of The Astrophysical Journal. The researchers used Hubble to measure the decrease in light when the planet passed in front of its parent star over a range of infrared wavelengths in order to detect small changes caused by water vapor in the planet's atmosphere.

However, again the data came back featureless. One explanation is that HD 97658b is also enveloped in clouds. However, Knutson says, it is also possible that the planet has an atmosphere lacking hydrogen. Such an atmosphere would be very compact, making the telltale fingerprints of water vapor and other molecules very small and hard to detect. "Our data are not precise enough to tell whether it's clouds or the absence of hydrogen in the atmosphere that's causing the spectrum to be flat," she says. "This was just a quick first look to give us a rough idea of what the atmosphere looked like. Over the next year, we will use Hubble to observe this planet again in more detail. We hope those observations will provide a clear answer to the current mystery."
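
The link between hydrogen content and compactness is textbook atmospheric physics rather than a result of the paper: an atmosphere's thickness is set by its scale height,

```latex
H = \frac{k_{\mathrm{B}}\,T}{\mu\,m_{\mathrm{u}}\,g}
```

where T is the temperature, g the surface gravity, μ the mean molecular weight, k_B Boltzmann's constant, and m_u the atomic mass unit. At fixed T and g, swapping a hydrogen-dominated atmosphere (μ of roughly 2.3) for a steam atmosphere (μ of roughly 18) shrinks H, and with it the size of the spectral features, by nearly a factor of eight, which is why a hydrogen-poor atmosphere can mimic the flat spectrum of a cloudy one.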

It appears that clouds are going to continue to pose a real challenge in studies of super-Earths, so Knutson and other researchers are working to understand the composition of the clouds around these planets and the conditions under which they form. The hope is that they will get to the point where they can predict which worlds will be shrouded in clouds. "If we can then target planets that we think should be cloud-free, that will help us make optimal use of Hubble's time," she says.

Looking to the future, Knutson says there is only one more known super-Earth that can be targeted for atmospheric studies with current telescopes. But new surveys, such as NASA's extended Kepler K2 mission and the Transiting Exoplanet Survey Satellite (TESS), slated for launch in 2017, should identify a large sample of new targets.

Of course, she says, astronomers would love to study exoplanets the size of Earth, but these worlds are just a bit too small and too difficult to observe with Hubble and Spitzer. NASA's James Webb Space Telescope, which is scheduled for launch in 2018, will provide the first opportunity to study more Earth-like worlds. "Super-Earths are at the edge of what we can study right now," Knutson says. "But super-Earths are a good consolation prize—they're interesting in their own right, and they give us a chance to explore new kinds of worlds with no analog in our own solar system."

Writer: Kimm Fesenmaier

Rock-Dwelling Microbes Remove Methane from Deep Sea

Methane-breathing microbes that inhabit rocky mounds on the seafloor could be preventing large volumes of the potent greenhouse gas from entering the oceans and reaching the atmosphere, according to a new study by Caltech researchers.

The rock-dwelling microbes, which are detailed in the Oct. 14 issue of Nature Communications, represent a previously unrecognized biological sink for methane and as a result could reshape scientists' understanding of where this greenhouse gas is being consumed in subseafloor habitats, says Professor of Geobiology Victoria Orphan, who led the study.

"Methane is a much more powerful greenhouse gas than carbon dioxide, so tracing its flow through the environment is really a priority for climate models and for understanding the carbon cycle," Orphan says.

Orphan's team has been studying methane-breathing marine microorganisms for nearly 20 years. The microbes they focus on survive without oxygen, relying instead on sulfate ions present in seawater for their energy needs. Previous work by Orphan's team helped show that the methane-breathing system is actually made up of two different kinds of microorganisms that work closely with one another. One of the partners, dubbed "ANME" for "ANaerobic MEthanotrophs," belongs to a type of ancient single-celled creatures called the archaea.

Through a mechanism that is still unclear, ANME work closely with bacteria to consume methane using sulfate from seawater. "Without this biological process, much of that methane would enter the water column, and the escape rates into the atmosphere would probably be quite a bit higher," says study first author Jeffrey Marlow, a geobiology graduate student in Orphan's lab.
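
The net chemistry of this partnership, as it is conventionally written in the biogeochemistry literature (a standard reaction, not a finding of this particular study), couples methane oxidation to sulfate reduction:

```latex
\mathrm{CH_4} + \mathrm{SO_4^{2-}} \longrightarrow \mathrm{HCO_3^-} + \mathrm{HS^-} + \mathrm{H_2O}
```

The methane carbon leaves as bicarbonate, which is what ultimately precipitates as the calcium carbonate that builds the mounds described below.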

Until now, however, the activity of ANME and their bacterial partners had been primarily studied in sediments located in cold seeps, areas on the ocean bottom where methane is escaping from subseafloor sources into the water above. The new study marks the first time they have been observed to oxidize methane inside carbonate mounds, huge rocky outcroppings of calcium carbonate that can rise hundreds of feet above the seafloor.

If the microbes are living inside the mounds themselves, then the distribution of methane consumption is significantly different from what was previously thought. "Methane-derived carbonates represent a large volume within many seep systems, and finding active methane-consuming archaea and bacteria in the interior of these carbonate rocks extends the known habitat for methane-consuming microorganisms beyond the relatively thin layer of sediment that may overlay a carbonate mound," Marlow says.

Orphan and her team detected evidence of methane-breathing microbes in carbonate rocks collected from three cold seeps around the world: one at a tectonic plate boundary near Costa Rica; another in the Eel River basin off the coast of northwestern California; and at Hydrate Ridge, off the Oregon coast. The team used manned and robotic submersibles to collect the rock samples from depths ranging from 2,000 feet to nearly half a mile below the surface.

Marlow has vivid memories of being a passenger in the submersible Alvin during one of those rock-retrieval missions. "As you sink down, the water outside your window goes from bright blue surface water to darker turquoise and navy blue and all these shades of blue that you didn't know existed until it gets completely dark," Marlow recalls. "And then you start seeing flashes of light because the vehicle is perturbing the water column and exciting bioluminescent organisms. When you finally get to the seafloor, Alvin's exterior lights turn on, and this crazy alien world is illuminated in front of you."

The carbonate mounds that the subs visited often serve as foundations for coral and sponges, and are home to rockfishes, clams, crabs, and other aquatic life. For their study, the team members gathered rock samples not only from carbonate mounds located within active cold seeps, where methane could be seen escaping from the seafloor into the water, but also from mounds that appeared to be dormant.

Once the carbonate rocks were collected, they were transported back to the surface and rushed into a cold room aboard a research ship. In the cold room, which was maintained at the temperature of the deep sea, the team cracked open the carbonates in order to gather material from their interiors. "We wanted to make sure we weren't just sampling material from the surface of the rocks," Marlow says.

Using a microscope, the team confirmed that ANME and sulfate-reducing bacterial cells were indeed present inside the carbonate rocks, and genetic analysis of their DNA showed that they were related to methanotrophs that had previously been characterized in seafloor sediment. The scientists also used a technique that involved radiolabeled 14C-methane tracer gas to quantify the rates of methane consumption in the carbonate rocks and sediments from both the actively seeping sites and the areas appearing to be inactive. They found that the rock-dwelling methanotrophs consumed methane at a slower rate than their sediment-dwelling cousins.

"The carbonate-based microbes breathed methane at roughly one-third the rate of those gathered from sediments near active seep sites," Marlow says. "However, because there are likely many more microbes living in carbonate mounds than in sediments, their contributions to methane removal from the environment may be more significant."

The rock samples that were harvested near supposedly dormant cold seeps also harbored microbial communities capable of consuming methane. "We were surprised to find that these marine microorganisms are still viable and, if exposed to methane, can continue to oxidize this greenhouse gas long after surface expressions of seepage have vanished," Orphan says.

Along with Orphan and Marlow, additional coauthors on the paper, "Carbonate-hosted methanotrophy represents an unrecognized methane sink in the deep sea," include former Caltech associate research scientist Joshua Steele, now at the Southern California Coastal Water Research Project; Wiebke Ziebis, an associate professor at the University of Southern California; Andrew Thurber, an assistant professor at Oregon State University; and Lisa Levin, a professor at the Scripps Institution of Oceanography. Funding for the study was provided by the National Science Foundation; NASA's Astrobiology Institute; the Gordon and Betty Moore Foundation Marine Microbiology Initiative grant; and the National Research Council of the National Academies. 

Written by Ker Than

NuSTAR Discovers Impossibly Bright Dead Star

X-ray source in the Cigar Galaxy is the first ultraluminous pulsar ever detected

Astronomers working with NASA's Nuclear Spectroscopic Telescope Array (NuSTAR), led by Caltech's Fiona Harrison, have found a pulsating dead star beaming with the energy of about 10 million suns. The object, previously thought to be a black hole because it is so powerful, is in fact a pulsar—the incredibly dense rotating remains of a star.

"This compact little stellar remnant is a real powerhouse. We've never seen anything quite like it," says Harrison, NuSTAR's principal investigator and the Benjamin M. Rosen Professor of Physics at Caltech. "We all thought an object with that much energy had to be a black hole."

Dom Walton, a postdoctoral scholar at Caltech who works with NuSTAR data, says that with its extreme energy, this pulsar takes the top prize in the weirdness category. Pulsars are typically between one and two times the mass of the sun. This new pulsar presumably falls in that same range but shines about 100 times brighter than theory suggests something of its mass should be able to.

"We've never seen a pulsar even close to being this bright," Walton says. "Honestly, we don't know how this happens, and theorists will be chewing on it for a long time." Besides being weird, the finding will help scientists better understand a class of very bright X-ray sources, called ultraluminous X-ray sources (ULXs).

Harrison, Walton, and their colleagues describe NuSTAR's detection of this first ultraluminous pulsar in a paper that appears in the current issue of Nature.

"This was certainly an unexpected discovery," says Harrison. "In fact, we were looking for something else entirely when we found this."

Earlier this year, astronomers in London detected a spectacular, once-in-a-century supernova (dubbed SN2014J) in a relatively nearby galaxy known as Messier 82 (M82), or the Cigar Galaxy, 12 million light-years away. Because of the rarity of that event, telescopes around the world and in space adjusted their gaze to study the aftermath of the explosion in detail.


This animation shows a neutron star—the core of a star that exploded in a massive supernova. This particular neutron star is known as a pulsar because it sends out rotating beams of X-rays that sweep past Earth like lighthouse beacons. (Credit: NASA/JPL-Caltech)

Besides the supernova, M82 harbors a number of other ULXs. When Matteo Bachetti of the Université de Toulouse in France, the lead author of this new paper, took a closer look at these ULXs in NuSTAR's data, he discovered that something in the galaxy was pulsing, or flashing light.

"That was a big surprise," Harrison says. "For decades everybody has thought these ultraluminous X-ray sources had to be black holes. But black holes don't have a way to create this pulsing."

But pulsars do. They are like giant magnets that emit radiation from their magnetic poles. As they rotate, an outside observer with an X-ray telescope, situated at the right angle, would see flashes of powerful light as the beam swept periodically across the observer's field of view, like a lighthouse beacon.

The reason most astronomers had assumed black holes were powering ULXs is that these X-ray sources are so incredibly bright. Black holes can be anywhere from 10 to billions of times the mass of the sun, making their gravitational tug much stronger than that of a pulsar. As matter falls onto a black hole, gravitational energy is converted into heat, which creates X-ray light. The bigger the black hole, the more energy there is to make the object shine.

Surprised to see the flashes coming from M82, the NuSTAR team checked and rechecked the data. The flashes were really there, with a pulse showing up every 1.37 seconds.

The next step was to figure out which X-ray source was producing the flashes. Walton and several other Caltech researchers analyzed the data from NuSTAR and a second NASA X-ray telescope, Chandra, to rule out about 25 different X-ray sources, finally settling on a ULX known as M82X-2 as the source of the flashes.

With the pulsar and its location within M82 identified, there are still many questions left to answer. The pulsar's luminosity is many times higher than the Eddington limit, a basic physics guideline that sets an upper limit on the brightness that an object of a given mass should be able to achieve.
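
The limit follows from balancing the outward push of radiation on infalling ionized gas against gravity's inward pull; for hydrogen accretion it takes the textbook form

```latex
L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T}
    \approx 1.3\times10^{38}\,\left(\frac{M}{M_\odot}\right)\ \mathrm{erg\,s^{-1}}
```

For a neutron star of one to two solar masses that works out to about 2×10^38 erg per second, while an output equal to roughly 10 million suns is a few times 10^40 erg per second, on the order of a hundred times over the limit.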

"This is the most extreme violation of that limit that we've ever seen," says Walton. "We have known that things can go above that by a small amount, but this blows that limit away."

NuSTAR is particularly well-suited to make discoveries like this one. Not only does the space telescope see high-energy X-rays, but it sees them in a unique way. Rather than snapping images the way a cell-phone camera does, by integrating the light so that images blur if you move, NuSTAR detects individual X-ray photons and records the time at which each one arrives. That allows the team to do timing analyses and, in this case, to see that the light from the ULX was coming in pulses.
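
Here is a minimal sketch, on simulated toy data rather than anything from the NuSTAR pipeline, of epoch folding, the classic timing analysis that time-tagged photons make possible: fold arrival times modulo a trial period, and a real pulsation piles photons into particular phase bins.

```python
import numpy as np

# Toy epoch-folding search: a pulsation concentrates folded photon
# phases, inflating a chi-squared statistic against a flat profile.
rng = np.random.default_rng(0)
period = 1.37                                # s, the reported pulse period
t_bg = rng.uniform(0, 2000, 50_000)          # unpulsed background photons
n_cycles = int(2000 / period)
t_pulse = (rng.integers(0, n_cycles, 5_000)  # pulsed photons clustered
           + rng.normal(0.5, 0.05, 5_000)) * period  # near phase 0.5
times = np.concatenate([t_bg, t_pulse])

def folded_chi2(times, trial_period, nbins=16):
    phases = (times % trial_period) / trial_period
    counts, _ = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
    expected = counts.mean()
    return ((counts - expected) ** 2 / expected).sum()

for p in (1.25, 1.37, 1.50):
    print(f"trial period {p:.2f} s -> chi-squared {folded_chi2(times, p):.0f}")
# the statistic spikes at the true 1.37 s period and stays near the
# number of bins for wrong trial periods
```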

Now that the NuSTAR team has shown that this ULX is a pulsar, Harrison points out that many other known ULXs may in fact be pulsars as well. "Everybody had assumed all of these sources were black holes," she says. "Now I think people have to go back to the drawing board and decide whether that's really true. This could just be a very unique, strange object, or it could be that they're not that uncommon. We just don't know. We need more observations to see if other ULXs are pulsing."

Along with Harrison and Walton, additional Caltech authors on the paper, "An Ultraluminous X-ray Source Powered by An Accreting Neutron Star," are postdoctoral scholars Felix Fürst and Shriharsh Tendulkar; research scientists Brian W. Grefenstette and Vikram Rana; and Shri Kulkarni, the John D. and Catherine T. MacArthur Professor of Astronomy and Planetary Science and director of the Caltech Optical Observatories. The work was supported by NASA and made use of data supplied by the UK Swift Science Data Centre at the University of Leicester.

Writer: Kimm Fesenmaier

Swimming Sea-Monkeys Reveal How Zooplankton May Help Drive Ocean Circulation

Brine shrimp, which are sold as pets known as Sea-Monkeys, are tiny—only about half an inch long each. With about 10 small leaf-like fins that flap about, they look as if they could hardly make waves.

But get billions of similarly tiny organisms together and they can move oceans.

It turns out that the collective swimming motion of Sea-Monkeys and other zooplankton—swimming plankton—can generate enough swirling flow to potentially influence the circulation of water in oceans, according to a new study by Caltech researchers.

The effect could be as strong as that of the wind and the tides, the main factors known to drive the up-and-down mixing of oceans, says John Dabiri, professor of aeronautics and bioengineering at Caltech. According to the new analysis by Dabiri and mechanical engineering graduate student Monica Wilhelmus, organisms like brine shrimp, despite their diminutive size, may play a significant role in stirring up nutrients, heat, and salt in the sea—major components of the ocean system.

In 2009, Dabiri's research team studied jellyfish to show that small animals can generate flow in the surrounding water. "Now," Dabiri says, "these new lab experiments show that similar effects can occur in organisms that are much smaller but also more numerous—and therefore potentially more impactful in regions of the ocean important for climate."

The researchers describe their findings in the journal Physics of Fluids.

Brine shrimp (specifically Artemia salina) can be found in toy stores, as part of kits that allow you to raise a colony at home. But in nature, they live in bodies of salty water, such as the Great Salt Lake in Utah. Their behavior is cued by light: at night, they swim toward the surface to munch on photosynthesizing algae while avoiding predators. During the day, they sink back into the dark depths of the water.


A. salina (a species of brine shrimp, commonly known as Sea-Monkeys) begin a vertical migration, stimulated by a vertical blue laser light.

To study this behavior in the laboratory, Dabiri and Wilhelmus use a combination of blue and green lasers to induce the shrimp to migrate upward inside a big tank of water. The green laser at the top of the tank provides a bright target for the shrimp to swim toward while a blue laser rising along the side of the tank lights up a path to guide them upward.

The water in the tank is seeded with tiny, silver-coated hollow glass spheres 13 microns wide (about one-half of one-thousandth of an inch). By tracking the motion of those spheres with a high-speed camera and a red laser that is invisible to the organisms, the researchers can measure how the shrimp's swimming causes the surrounding water to swirl.
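
The tracking step works like standard particle image velocimetry. A toy version in Python (illustrative, not the authors' analysis code): seed a synthetic camera frame with random tracers, shift it to mimic flow between frames, and recover the displacement from the peak of an FFT-based cross-correlation.

```python
import numpy as np

# Toy PIV step: recover the displacement between two tracer-particle
# frames from the peak of their cross-correlation.
rng = np.random.default_rng(1)
frame1 = np.zeros((128, 128))
rows = rng.integers(0, 128, 400)
cols = rng.integers(0, 128, 400)
frame1[rows, cols] = 1.0                   # bright tracer particles

true_shift = (3, -2)                       # the "flow" to be recovered
frame2 = np.roll(frame1, true_shift, axis=(0, 1))

corr = np.fft.ifft2(np.fft.fft2(frame1).conj() * np.fft.fft2(frame2)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
# indices past half the frame wrap around to negative displacements
dy = dy - 128 if dy > 64 else dy
dx = dx - 128 if dx > 64 else dx
print(f"recovered displacement: ({dy}, {dx})")   # prints (3, -2)
```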

Although researchers had proposed the idea that swimming zooplankton can influence ocean circulation, the effect had never been directly observed, Dabiri says. Past studies could only analyze how individual organisms disturb the water surrounding them.

But thanks to this new laser-guided setup, Dabiri and Wilhelmus have been able to determine that the collective motion of the shrimp creates powerful swirls—stronger than would be produced by simply adding up the effects produced by individual organisms.

Adding up the effect of all of the zooplankton in the ocean—assuming they have a similar influence—could inject as much as a trillion watts of power into the oceans to drive global circulation, Dabiri says. In comparison, the winds and tides contribute a combined two trillion watts.

Using this new experimental setup will enable future studies to better untangle the complex relationships between swimming organisms and ocean currents, Dabiri says. "Coaxing Sea-Monkeys to swim when and where you want them to is even more difficult than it sounds," he adds. "But Monica was undeterred over the course of this project and found a creative solution to a very challenging problem."

The title of the Physics of Fluids paper is "Observations of large-scale fluid transport by laser-guided plankton aggregations." The research was supported by the U.S.-Israel Binational Science Foundation, the Office of Naval Research, and the National Science Foundation.

Variability Keeps The Body In Balance

Although the heart beats out a very familiar "lub-dub" pattern that speeds up or slows down as our activity increases or decreases, the pattern itself isn't as regular as you might think. In fact, the amount of time between heartbeats can vary even at a "constant" heart rate—and that variability, doctors have found, is a good thing.

Reduced heart rate variability (HRV) has been found to be predictive of a number of illnesses, such as congestive heart failure and inflammation. For athletes, a drop in HRV has also been linked to fatigue and overtraining. However, the underlying physiological mechanisms that control HRV—and exactly why this variation is important for good health—are still a bit of a mystery.
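
Quantitatively, HRV is computed from the series of intervals between successive beats. A minimal sketch of two standard time-domain statistics (generic definitions, not the specific analysis in this paper):

```python
import numpy as np

# Two standard time-domain HRV statistics on a toy series of RR
# intervals (the times between beats, in milliseconds).
rr = np.array([812, 845, 790, 860, 825, 799, 874, 838, 805, 851], float)

mean_hr = 60000.0 / rr.mean()                 # beats per minute
sdnn = rr.std(ddof=1)                         # overall variability (SDNN)
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))    # beat-to-beat variability (RMSSD)

print(f"mean heart rate: {mean_hr:.1f} bpm")  # ~72 bpm
print(f"SDNN:  {sdnn:.1f} ms")
print(f"RMSSD: {rmssd:.1f} ms")
# A heart rate that looks "constant" at ~72 bpm still hides tens of
# milliseconds of beat-to-beat variation; these numbers collapsing
# toward zero is the warning sign associated with illness or fatigue.
```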

By combining heart rate data from real athletes with a branch of mathematics called control theory, a collaborative team of physicians and Caltech researchers from the Division of Engineering and Applied Science has now devised a way to better understand the relationship between HRV and health—a step that could soon inform better monitoring technologies for athletes and medical professionals.

The work was published in the August 19 print issue of the Proceedings of the National Academy of Sciences.

To run smoothly, complex systems, such as computer networks, cars, and even the human body, rely upon give-and-take connections and relationships among a large number of variables; if one variable must remain stable to maintain a healthy system, another variable must be able to flex to maintain that stability. Because it would be too difficult to map each individual variable, the mathematics and software tools used in control theory allow engineers to summarize the ups and downs in a system and pinpoint the source of a possible problem.

Researchers who study control theory are increasingly discovering that these concepts can also be extremely useful in studies of the human body. In order for a body to work optimally, it must operate in an environment of stability called homeostasis. When the body experiences stress—for example, from exercise or extreme temperatures—it can maintain a stable blood pressure and constant body temperature in part by dialing the heart rate up or down. And HRV plays an important role in maintaining this balance, says study author John Doyle, the Jean-Lou Chameau Professor of Control and Dynamical Systems, Electrical Engineering, and Bioengineering.

"A familiar related problem is in driving," Doyle says. "To get to a destination despite varying weather and traffic conditions, any driver—even a robotic one—will change factors such as acceleration, braking, steering, and wipers. If these factors suddenly became frozen and unchangeable while the car was still moving, it would be a nearly certain predictor that a crash was imminent. Similarly, loss of heart rate variability predicts some kind of malfunction or 'crash,' often before there are any other indications," he says.

To study how HRV helps maintain this version of "cruise control" in the human body, Doyle and his colleagues measured the heart rate, respiration rate, oxygen consumption, and carbon dioxide generation of five healthy young athletes as they completed experimental exercise routines on stationary bicycles.

By combining the data from these experiments with standard models of the physiological control mechanisms in the human body, the researchers were able to determine the essential tradeoffs that are necessary for athletes to produce enough power to maintain an exercise workload while also maintaining the internal homeostasis of their vital signs.

"For example, the heart, lungs, and circulation must deliver sufficient oxygenated blood to the muscles and other organs while not raising blood pressure so much as to damage the brain," Doyle says. "This is done in concert with control of blood vessel dilation in the muscles and brain, and control of breathing. As the physical demands of the exercise change, the muscles must produce fluctuating power outputs, and the heart, blood vessels, and lungs must then respond to keep blood pressure and oxygenation within narrow ranges."

Once these trade-offs were defined, the researchers then used control theory to analyze the exercise data and found that a healthy heart must maintain certain patterns of variability during exercise to keep this complicated system in balance. Loss of this variability is a precursor of the fatigue induced by the stress of exercise. Today, some HRV monitors in the clinic can let a doctor know when variability is high or low, but they provide little in the way of an actionable diagnosis.

Because monitors in hospitals can already provide HRV levels and dozens of other signals and readings, the integration of such mathematical analyses of control theory into HRV monitors could, in the future, provide a way to link a drop in HRV to a more specific and treatable diagnosis. In fact, one of Doyle's students has used an HRV application of control theory to better interpret traditional EKG signals.

Control theory could also be incorporated into the HRV monitors used by athletes to prevent fatigue and injury from overtraining, he says.

"Physicians who work in very data-intensive settings like the operating room or ICU are in urgent need of ways to rapidly and acutely interpret the data deluge," says Marie Csete, MD (PhD, '00), chief scientific officer at the Huntington Medical Research Institutes and a coauthor on the paper. "We hope this work is a first step in a larger research program that helps physicians make better use of data to care for patients."

This study is not the first to apply control theory in medicine. Control theory has already informed the design of a wearable artificial pancreas for type 1 diabetic patients and an automated prototype device that controls the administration of anesthetics during surgery. Nor will it be the last, says Doyle, whose sights are next set on using control theory to understand the progression of cancer.

"We have a new approach, similarly based on control of networks, that organizes and integrates a bunch of new ideas floating around about the role of healthy stroma—non-tumor cells present in tumors—in promoting cancer progression," he says.

"Based on discussions with Dr. Peter Lee at City of Hope [a cancer research and treatment center], we now understand that the non-tumor cells interact with the immune system and with chemotherapeutic drugs to modulate disease progression," Doyle says. "And I'm hoping there's a similar story there, where thinking rigorously about the tradeoffs in development, regeneration, inflammation, wound healing, and cancer will lead to new insights and ultimately new therapies."

Other Caltech coauthors on the study include former graduate students Na Li (PhD '13), now an assistant professor at Harvard, and Somayeh Sojoudi (PhD '12), currently at NYU; and graduate students Chenghao Simon Chien and Jerry Cruz. Other collaborators on the study were Benjamin Recht, a former postdoctoral scholar in Doyle's lab and now an assistant professor at UC Berkeley; Daniel Bahmiller, a clinician training in public health; and David Stone, MD, an expert in ICU medicine from the University of Virginia School of Medicine.

A New Way to Prevent the Spread of Devastating Diseases

For decades, researchers have tried to develop broadly effective vaccines to prevent the spread of illnesses such as HIV, malaria, and tuberculosis. While limited progress has been made along these lines, there are still no licensed vaccines available that can protect most people from these devastating diseases.

So what are immunologists to do when vaccines just aren't working?

At Caltech, Nobel Laureate David Baltimore and his colleagues have approached the problem in a different way. Whereas vaccines introduce substances such as antigens into the body in the hope of eliciting an appropriate immune response—the generation of either antibodies that might block an infection or T cells capable of attacking infected cells—the Caltech team thought: Why not provide the body with step-by-step instructions for producing specific antibodies that have been shown to neutralize a particular disease?

The method they developed—originally to trigger an immune response to HIV—is called vectored immunoprophylaxis, or VIP. The technique was so successful that it has since been applied to a number of other infectious diseases, including influenza, malaria, and hepatitis C.

"It is enormously gratifying to us that this technique can have potentially widespread use for the most difficult diseases that are faced particularly by the less developed world," says Baltimore, president emeritus and the Robert Andrews Millikan Professor of Biology at Caltech.

VIP relies on the prior identification of one or more antibodies that are able to prevent infection in laboratory tests by a wide range of isolated samples of a particular pathogen. Once that has been done, researchers can incorporate the genes that encode those antibodies into an adeno-associated virus (AAV), a small, harmless virus that has been useful in gene-therapy trials. When the AAV is injected into muscle tissue, the genes instruct the muscle tissue to generate the specified antibodies, which can then enter the circulation and protect against infection.

In 2011, the Baltimore group reported in Nature that they had used the technique to deliver antibodies that effectively protected mice from HIV infection. Alejandro Balazs was lead author on that paper and was a postdoctoral scholar in the Baltimore lab at the time.

"We expected that at some dose, the antibodies would fail to protect the mice, but it never did—even when we gave mice 100 times more HIV than would be needed to infect seven out of eight mice," said Balazs, now at the Ragon Institute of MGH, MIT and Harvard. "All of the exposures in this work were significantly larger than a human being would be likely to encounter."

At the time, the researchers noted that the leap from mice to humans is large but said they were encouraged by the high levels of antibodies the mice were able to produce after a single injection and how effectively the mice were protected from HIV infection for months on end. Baltimore's team is now working with a manufacturer to produce the materials needed for human clinical trials that will be conducted by the Vaccine Research Center at the National Institutes of Health.

Moving on from HIV, the Baltimore lab's next goal was protection against influenza A. Although reasonably effective influenza vaccines exist, seasonal flu epidemics still cause an average of more than 20,000 deaths each year in the United States. We are encouraged to get flu shots every fall because the influenza virus is something of a moving target—it evolves to evade immunity. There are also many different strains of influenza A (e.g., H1N1 and H3N2), each incorporating a different combination of the various forms of the proteins hemagglutinin (H) and neuraminidase (N). To chase this target, the vaccine is reformulated each year, but sometimes it fails to prevent the spread of the strains that are prevalent that year.

But about five years ago, researchers began identifying a new class of anti-influenza antibodies that are able to prevent infection by many, many strains of the virus. Instead of binding to the head of the influenza virus, as most flu-fighting antibodies do, these new antibodies target the stalk that holds up the head. And while the head is highly adaptable—meaning that even when mutations occur there, the virus can often remain functional—the stalk must basically remain the same in order for the virus to survive. So these stalk antibodies are very hard for the virus to mutate against.

In 2013, the Baltimore group stitched the genes for two of these new antibodies into an AAV and showed that mice injected with the vector were protected against multiple flu strains, including all H1, H2, and H5 influenza strains tested. This was even true of older mice and those without a properly functioning immune system—a particularly important finding considering that most deaths from the flu occur in the elderly and immunocompromised populations. The group reported its results in the journal Nature Biotechnology.

"We have shown that we can protect mice completely against flu using a kind of antibody that doesn't need to be changed every year," says Baltimore. "It is important to note that this has not been tested in humans, so we do not yet know what concentration of antibody can be produced by VIP in humans. However, if it works as well as it does in mice, VIP may provide a plausible approach to protect even the most vulnerable patients against epidemic and pandemic influenza."

Now that the Baltimore lab has shown VIP to be so effective, other groups from around the country have adopted the Caltech-developed technique to try to ward off malaria, hepatitis C, and tuberculosis.

In August, a team led by researchers at the Johns Hopkins Bloomberg School of Public Health reported in the Proceedings of the National Academy of Sciences (PNAS) that as many as 70 percent of mice that they had injected using the VIP procedure were protected from infection with malaria caused by Plasmodium falciparum, the parasite responsible for the most lethal of the four types of the disease. A subset of mice in the study produced particularly high levels of the disease-fighting antibodies. In those mice, the immunization was 100 percent effective.

"This is also just a first-generation antibody," says Baltimore, who was a coauthor on the PNAS study. "Knowing now that you can get this kind of protection, it's worth trying to get much better antibodies, and I trust that people in the malaria field will do that."

Most recently, a group led by researchers from The Rockefeller University showed that three hepatitis-C-fighting antibodies delivered using VIP were able to protect mice efficiently from the virus. The results were published in the September 17 issue of the journal Science Translational Medicine. The researchers also found that the treatment was able to temporarily clear the virus from mice that had already been infected. Additional work is needed to determine how to prevent the disease from relapsing. Interestingly, though, the work suggests that the antibodies that are effective against hepatitis C, once it has taken root in the liver, may work by protecting uninfected liver cells from infection while allowing already infected cells to be cleared from the body.    

An additional project is currently evaluating the use of VIP for the prevention of tuberculosis—a particular challenge given the lack of proven tuberculosis-neutralizing antibodies.

"When we started this work, we imagined that it might be possible to use VIP to fight other diseases, so it has been very exciting to see other groups adopting the technique for that purpose," Baltimore says. "If we can get positive clinical results in humans with HIV, we think that would really encourage people to think about using VIP for these other diseases."

Baltimore's work is supported by funding from the National Institute of Allergy and Infectious Disease, the Bill and Melinda Gates Foundation, the Caltech-UCLA Joint Center for Translational Medicine, and a Caltech Translational Innovation Partnership Award.

Writer: Kimm Fesenmaier

Sensing Neuronal Activity With Light

For years, neuroscientists have been trying to develop tools that would allow them to clearly view the brain's circuitry in action—from the first moment a neuron fires to the resulting behavior in a whole organism. To get this complete picture, neuroscientists are working to develop a range of new tools to study the brain. Researchers at Caltech have developed one such tool that provides a new way of mapping neural networks in a living organism.

The work—a collaboration between Viviana Gradinaru (BS '05), assistant professor of biology and biological engineering, and Frances Arnold, the Dick and Barbara Dickinson Professor of Chemical Engineering, Bioengineering and Biochemistry—was described in two separate papers published this month.

When a neuron is at rest, channels and pumps in the cell membrane maintain a cell-specific balance of positively and negatively charged ions within and outside of the cell, resulting in a steady membrane voltage called the cell's resting potential. However, if a stimulus is detected—for example, a scent or a sound—ions flood through newly opened channels, causing a change in membrane voltage. This voltage change is often manifested as an action potential—the neuronal impulse that sets circuit activity into motion.
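
As a point of reference, the voltage produced by a single ion gradient is given by the Nernst equation (standard electrophysiology, not specific to these papers):

```latex
E_{\mathrm{ion}} = \frac{RT}{zF}\,
    \ln\!\left(\frac{[\mathrm{ion}]_{\mathrm{out}}}{[\mathrm{ion}]_{\mathrm{in}}}\right)
```

where R is the gas constant, T the temperature, z the ion's charge, and F the Faraday constant. For potassium at body temperature, with typical concentrations of about 5 mM outside and 140 mM inside the cell, this gives roughly -89 mV, close to a neuron's resting potential; when channels open to other ions, the membrane voltage swings away from that value, and it is these swings that a voltage sensor must catch.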

The tool developed by Gradinaru and Arnold detects and serves as a marker of these voltage changes.

"Our overarching goal for this tool was to achieve sensing of neuronal activity with light rather than traditional electrophysiology, but this goal had a few prerequisites," Gradinaru says. "The sensor had to be fast, since action potentials happen in just milliseconds. Also, the sensor had to be very bright so that the signal could be detected with existing microscopy setups. And you need to be able to simultaneously study the multiple neurons that make up a neural network."

The researchers began by optimizing Archaerhodopsin (Arch), a light-sensitive protein from archaea. In nature, opsins like Arch detect sunlight and initiate the microbes' movement toward the light so that they can harvest its energy. However, researchers can also exploit the light-responsive qualities of opsins for a neuroscience method called optogenetics—in which an organism's neurons are genetically modified to express these microbial opsins. Then, by simply shining a light on the modified neurons, the researchers can control the activity of the cells as well as their associated behaviors in the organism.

Gradinaru had previously engineered Arch for better tolerance and performance in mammalian cells as a traditional optogenetic tool used to control an organism's behavior with light. When the modified neurons are exposed to green light, Arch acts as an inhibitor, controlling neuronal activity—and thus the associated behaviors—by preventing the neurons from firing.

However, Gradinaru and Arnold were most interested in another property of Arch: when exposed to red light, the protein acts as a voltage sensor, responding to changes in membrane voltages by producing a flash of light in the presence of an action potential. Although this property could in principle allow Arch to detect the activity of networks of neurons, the light signal marking this neuronal activity was often too dim to see.

To fix this problem, Arnold and her colleagues made the Arch protein brighter using a method called directed evolution—a technique Arnold pioneered in the early 1990s. The researchers introduced mutations into the Arch gene, thus encoding millions of variants of the protein. They transferred the mutated genes into E. coli cells, which produced the mutant proteins encoded by the genes. They then screened thousands of the resulting E. coli colonies for the intensities of their fluorescence. The genes for the brightest versions were isolated and subjected to further rounds of mutagenesis and screening until the bacteria produced proteins that were 20 times brighter than the original Arch protein.
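
The logic of that loop (mutate, express, screen, keep the brightest, repeat) is easy to capture in a toy simulation. A sketch in Python, where "brightness" is a made-up fitness over a bit-string stand-in for the gene, nothing like the real fluorescence assay:

```python
import random

# Toy directed-evolution loop: mutate a gene library, screen for
# "brightness," keep the best variants, and repeat.
random.seed(0)
GENE_LEN, LIBRARY_SIZE, ROUNDS, KEEP = 50, 200, 10, 5

def brightness(gene):
    return sum(gene)            # stand-in fitness: favorable positions

def mutate(gene, rate=0.02):
    return [b ^ 1 if random.random() < rate else b for b in gene]

parents = [[0] * GENE_LEN]      # start from the original protein
for rnd in range(ROUNDS):
    library = [mutate(random.choice(parents)) for _ in range(LIBRARY_SIZE)]
    library.sort(key=brightness, reverse=True)
    parents = library[:KEEP]    # isolate genes of the brightest variants
    print(f"round {rnd + 1}: best brightness = {brightness(parents[0])}")
```

Even with a fixed mutation rate, the best score climbs round after round, because selection keeps only the upward steps, which is the essence of the method.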

A paper describing the process and the bright new protein variants that were created was published in the September 9 issue of the Proceedings of the National Academy of Sciences.

"This experiment demonstrates how rapidly these remarkable bacterial proteins can evolve in response to new demands. But even more exciting is what they can do in neurons, as Viviana discovered," says Arnold.

In a separate study led by Gradinaru's graduate students Nicholas Flytzanis and Claire Bedbrook, who is also advised by Arnold, the researchers genetically incorporated the new, brighter Arch variants into rodent neurons in culture to see which of these versions was most sensitive to voltage changes—and therefore would be the best at detecting action potentials. One variant, Archer1, was not only bright and sensitive enough to mark action potentials in mammalian neurons in real time, it could also be used to identify which neurons were synaptically connected—and communicating with one another—in a circuit.

The work is described in a study published on September 15 in the journal Nature Communications.

"What was interesting is that we would see two cells over here light up, but not this one over there—because the first two are synaptically connected," Gradinaru says. "This tool gave us a way to observe a network where the perturbation of one cell affects another."

However, sensing activity in a living organism and correlating this activity with behavior remained the biggest challenge. To accomplish this goal, Gradinaru's team worked with Paul Sternberg, the Thomas Hunt Morgan Professor of Biology, to test Archer1 as a sensor in a living organism—the tiny nematode worm C. elegans. "There are a few reasons why we used the worms here: they are powerful organisms for quick genetic engineering, and their tissues are nearly transparent, making it easy to see the fluorescent protein in a living animal," she says.

After incorporating Archer1 into neurons that were a part of the worm's olfactory system—a primary source of sensory information for C. elegans—the researchers exposed the worm to an odorant. When the odorant was present, a baseline fluorescent signal was seen, and when the odorant was removed, the researchers could see the circuit of neurons light up, meaning that these particular neurons are repressed in the presence of the stimulus and active in the absence of the stimulus. The experiment was the first time that an Arch variant had been used to observe an active circuit in a living organism.

Gradinaru next hopes to use tools like Archer1 to better understand the complex neuronal networks of mammals, using microbial opsins as sensing and actuating tools in optogenetically modified rodents.

"For the future work it's useful that this tool is bifunctional. Although Archer1 acts as a voltage sensor under red light, with green light, it's an inhibitor," she says. "And so now a long-term goal for our optogenetics experiments is to combine the tools with behavior-controlling properties and the tools with voltage-sensing properties. This would allow us to obtain all-optical access to neuronal circuits. But I think there is still a lot of work ahead."

One goal for the future, Gradinaru says, is to make Archer1 even brighter. Although the protein's fluorescence can be seen through the nearly transparent tissues of the nematode worm, opaque organs such as the mammalian brain are still a challenge. More work, she says, will need to be done before Archer1 could be used to detect voltage changes in the neurons of living, behaving mammals.

And that will require further collaborations with protein engineers and biochemists like Arnold.

"As neuroscientists we often encounter experimental barriers, which open the potential for new methods. We then collaborate to generate tools through chemistry or instrumentation, then we validate them and suggest optimizations, and it just keeps going," she says. "There are a few things that we'd like to be better, and through these many iterations and hard work it can happen."

The work published in both papers was supported with grants from the National Institutes of Health (NIH), including an NIH/National Institute of Neurological Disorders and Stroke New Innovator Award to Gradinaru; Beckman Institute funding for the BIONIC center; grants from the U.S. Army Research Office as well as a Caltech Biology Division Training Grant and startup funds from Caltech's President and Provost, and the Division of Biology and Biological Engineering; and other financial support from the Shurl and Kay Curci Foundation and the Life Sciences Research Foundation.

Slimy Fish and the Origins of Brain Development

Lamprey—slimy, eel-like parasitic fish with tooth-riddled, jawless sucking mouths—are rather disgusting to look at, but thanks to their important position on the vertebrate family tree, they can offer important insights about the evolutionary history of our own brain development, a recent study suggests.

The work appears in a paper in the September 14 advance online issue of the journal Nature.

"Lamprey are one of the most primitive vertebrates alive on Earth today, and by closely studying their genes and developmental characteristics, researchers can learn more about the evolutionary origins of modern vertebrates—like jawed fishes, frogs, and even humans," says paper coauthor Marianne Bronner, the Albert Billings Ruddock Professor of Biology and director of Caltech's unique Zebrafish/Xenopus/Lamprey facility, where the study was done.

The facility is one of the few places in the world where lampreys can be studied in captivity. The parasitic lamprey are an invasive pest in the Great Lakes, but they are difficult to study under controlled conditions: their lifecycle takes up to 10 years, and they spawn for only a few short weeks in the summer before they die.

Each summer, Bronner and her colleagues receive shipments of wild lamprey from Michigan just before the prime of breeding season. When the lamprey arrive, they are placed in tanks where the temperature of the water is adjusted to extend the breeding season from around three weeks to up to two months. In those extra weeks, the lamprey produce tens of thousands of additional eggs and sperm, which, via in vitro fertilization, generate tens of thousands of additional embryos for study. During this time, scientists from all over the world come to Caltech to perform experiments with the developing lamprey embryos.

In the current study, Bronner and her collaborators—who traveled to Caltech from the Stowers Institute for Medical Research in Kansas City, Missouri—studied the origins of the vertebrate hindbrain.

The hindbrain is a part of the central nervous system common to chordates—or organisms that have a nerve cord like our spinal cord. During the development of vertebrates—a subtype of chordates that have backbones—the hindbrain is compartmentalized into eight segments, each of which becomes uniquely patterned to establish networks of neuronal circuits. These segments eventually give rise to adult brain regions like the cerebellum, which is important for motor control, and the medulla oblongata, which is necessary for breathing and other involuntary functions.

However, this segmentation is not present in so-called "invertebrate chordates"—a grouping of chordates that lack a backbone, such as sea squirts and lancelets.

"The interesting thing about lampreys is that they occupy an intermediate evolutionary position between the invertebrate chordates and the jawed vertebrates," says Hugo Parker, a postdoc at Stower's Institute and first author on the study. "By investigating aspects of lamprey embryology, we can get a picture of how vertebrate traits might have evolved."

In the vertebrates, segmental patterning genes called Hox genes help to determine the animal's head-to-tail body plan—and those same Hox genes also control the segmentation of the hindbrain. Although invertebrate chordates also have Hox genes, these animals don't have segmented hindbrains. Because lampreys are centered between these two types of organisms on the evolutionary tree, the researchers wanted to know whether or not Hox genes are involved in patterning of the lamprey hindbrain.

To their surprise, the researchers discovered that the lamprey hindbrain was not only segmented during development but the process also involved Hox genes—just like in its jawed vertebrate cousins.

"When we started, we thought that the situation was different, and the Hox genes were not really integrated into the process of segmentation as they are in jawed vertebrates," Parker says. "But in actually doing this project, we discovered the way that lamprey Hox genes are expressed and regulated is very similar to what we see in jawed vertebrates." This means that hindbrain segmentation—and the role of Hox genes in this segmentation—happened earlier on in evolution than was once thought, he says.

Parker, who has been spending his summers at Caltech studying lampreys since 2008, is next hoping to pinpoint other aspects of the lamprey hindbrain that may be conserved in modern vertebrates—information that will help contribute to a fundamental understanding of vertebrate development. And although those investigations will probably mean following the lamprey for a few more summers at Caltech, Parker says his time in the lamprey facility continually offers a one-of-a-kind experience.

"The lamprey system here is unique in the world—and it's not just the water tanks and how we've learned to maintain the animals. It's the small nucleus of people who have particular skills, people who come in from all over the world to work together, share protocols, and develop the field together," he says. "That's one of the things I've liked ever since I first came here. I really felt like I was a part of something very special.

These results were published in a paper titled "A Hox regulatory network of hindbrain segmentation is conserved to the base of vertebrates." Robb Krumlauf, scientific director at the Stowers Institute and professor at the University of Kansas Medical Center, was also a coauthor on the study. The Zebrafish/Xenopus/Lamprey facility at Caltech is a Beckman Institute facility.

Ceramics Don't Have To Be Brittle

Caltech Materials Scientists Are Creating Materials By Design

Imagine a balloon that could float without using any lighter-than-air gas. Instead, it could simply have all of its air sucked out while maintaining its filled shape. Such a vacuum balloon, which could help ease the world's current shortage of helium, could only be made if a new material existed that was strong enough to sustain the pressure generated by forcing out all that air while still being lightweight and flexible.

Caltech materials scientist Julia Greer and her colleagues are on the path to developing such a material and many others that possess unheard-of combinations of properties. For example, they might create a material that is thermally insulating but also extremely lightweight, or one that is simultaneously strong, lightweight, and nonbreakable—properties that are generally thought to be mutually exclusive.

Greer's team has developed a method for constructing new structural materials by taking advantage of the unusual properties that solids can have at the nanometer scale, where features are measured in billionths of meters. In a paper published in the September 12 issue of the journal Science, the Caltech researchers explain how they used the method to produce a ceramic (the family of materials that includes chalk and brick) that contains about 99.9 percent air yet is incredibly strong, and that can recover its original shape after being compressed by more than 50 percent.

"Ceramics have always been thought to be heavy and brittle," says Greer, a professor of materials science and mechanics in the Division of Engineering and Applied Science at Caltech. "We're showing that in fact, they don't have to be either. This very clearly demonstrates that if you use the concept of the nanoscale to create structures and then use those nanostructures like LEGO to construct larger materials, you can obtain nearly any set of properties you want. You can create materials by design."

The researchers use a direct laser writing method called two-photon lithography to "write" a three-dimensional pattern in a polymer by allowing a laser beam to crosslink and harden the polymer wherever it is focused. The parts of the polymer that were exposed to the laser remain intact while the rest is dissolved away, revealing a three-dimensional scaffold. That structure can then be coated with a thin layer of just about any kind of material—a metal, an alloy, a glass, a semiconductor, etc. Then the researchers use another method to etch out the polymer from within the structure, leaving a hollow architecture.

The applications of this technique are practically limitless, Greer says. Since pretty much any material can be deposited on the scaffolds, the method could be particularly useful for applications in optics, energy efficiency, and biomedicine. For example, it could be used to reproduce complex structures such as bone, producing a scaffold out of biocompatible materials on which cells could proliferate.

In the latest work, Greer and her students used the technique to produce what they call three-dimensional nanolattices that are formed by a repeating nanoscale pattern. After the patterning step, they coated the polymer scaffold with a ceramic called alumina (i.e., aluminum oxide), producing hollow-tube alumina structures with walls ranging in thickness from 5 to 60 nanometers and tubes from 450 to 1,380 nanometers in diameter.

Greer's team next wanted to test the mechanical properties of the various nanolattices they created. Using two different devices for poking and prodding materials on the nanoscale, they squished, stretched, and otherwise tried to deform the samples to see how they held up.

They found that the alumina structures with a wall thickness of 50 nanometers and a tube diameter of about 1 micron shattered when compressed. That was not surprising given that ceramics, especially those that are porous, are brittle. However, compressing lattices with a lower ratio of wall thickness to tube diameter—where the wall thickness was only 10 nanometers—produced a very different result.
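As a rough illustration (the arithmetic below is not from the paper, and the 1-micron tube diameter assumed for the thin-walled case is not stated explicitly in the article), the two regimes differ by a factor of five in wall-to-diameter ratio:

```python
# Illustrative arithmetic only; the 1-micron diameter for the 10 nm case
# is an assumption, not a figure reported in the article.

def wall_to_diameter(t_nm: float, d_nm: float) -> float:
    """Ratio of tube wall thickness to tube diameter."""
    return t_nm / d_nm

def tube_solid_fraction(t_nm: float, d_nm: float) -> float:
    """Fraction of a hollow tube's cross-section that is solid material.

    For outer diameter D and wall thickness t this is
    1 - ((D - 2t) / D)**2, which is roughly 4t/D when t << D.
    """
    return 1.0 - ((d_nm - 2.0 * t_nm) / d_nm) ** 2

for t, d, outcome in [(50, 1000, "shattered"), (10, 1000, "recovered")]:
    print(f"t = {t} nm, D = {d} nm: t/D = {wall_to_diameter(t, d):.3f}, "
          f"solid fraction of tube ~ {tube_solid_fraction(t, d):.2f} ({outcome})")
```

Walls this thin also leave very little solid material in each tube, which is consistent with a lattice that is about 99.9 percent air.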

"You deform it, and all of a sudden, it springs back," Greer says. "In some cases, we were able to deform these samples by as much as 85 percent, and they could still recover."

To understand why, consider that most brittle materials such as ceramics, silicon, and glass shatter because they are filled with flaws—imperfections such as small voids and inclusions. The more perfect the material, the less likely you are to find a weak spot where it will fail. Therefore, the researchers hypothesize, when you reduce these structures down to the point where individual walls are only 10 nanometers thick, both the number of flaws and the size of any flaws are kept to a minimum, making the whole structure much less likely to fail.
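The paragraph above makes this argument qualitatively. A standard way to formalize it (a textbook relation, not one taken from the paper) is Weibull weakest-link statistics, in which the probability that a sample of stressed volume V has failed by the time the applied stress reaches \(\sigma\) is

\[
P_f(\sigma, V) = 1 - \exp\!\left[-\frac{V}{V_0}\left(\frac{\sigma}{\sigma_0}\right)^{m}\right],
\]

where \(\sigma_0\) and \(V_0\) are reference values and m is the Weibull modulus. Shrinking the stressed volume reduces the chance of sampling a critical flaw, so the characteristic strength scales as \(V^{-1/m}\): the statistical version of "smaller is stronger."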

"One of the benefits of using nanolattices is that you significantly improve the quality of the material because you're using such small dimensions," Greer says. "It's basically as close to an ideal material as you can get, and you get the added benefit of needing only a very small amount of material in making them."

The Greer lab is now aggressively pursuing various ways of scaling up the production of these so-called metamaterials.

The lead author on the paper, "Strong, Lightweight and Recoverable Three-Dimensional Ceramic Nanolattices," is Lucas R. Meza, a graduate student in Greer's lab. Satyajit Das, who was a visiting student researcher at Caltech, is also a coauthor. The work was supported by funding from the Defense Advanced Research Projects Agency and the Institute for Collaborative Biotechnologies. Greer is also on the board of directors of the Kavli Nanoscience Institute at Caltech.

Writer: 
Kimm Fesenmaier

Tipping the Balance of Behavior

Humans with autism often show a reduced frequency of social interactions and an increased tendency to engage in repetitive solitary behaviors. Autism has also been linked to dysfunction of the amygdala, a brain structure involved in processing emotions. Now Caltech researchers have discovered antagonistic neuron populations in the mouse amygdala that control whether the animal engages in social behaviors or asocial repetitive self-grooming. This discovery may have implications for understanding neural circuit dysfunctions that underlie autism in humans.

The discovery of this "seesaw" circuit was led by postdoctoral scholar Weizhe Hong in the laboratory of David J. Anderson, the Seymour Benzer Professor of Biology at Caltech and an investigator with the Howard Hughes Medical Institute. The work was published online on September 11 in the journal Cell.

"We know that there is some hierarchy of behaviors, and they interact with each other because the animal can't exhibit both social and asocial behaviors at the same time. In this study, we wanted to figure out how the brain does that," Anderson says.

Anderson and his colleagues discovered two intermingled but distinct populations of neurons in the amygdala, a part of the brain that is involved in innate social behaviors. One population promotes social behaviors, such as mating, fighting, or social grooming, while the other population controls repetitive self-grooming—an asocial behavior.

Interestingly, these two populations are distinguished according to the most fundamental subdivision of neuron subtypes in the brain: the "social neurons" are inhibitory neurons (which release the neurotransmitter GABA, or gamma-aminobutyric acid), while the "self-grooming neurons" are excitatory neurons (which release the neurotransmitter glutamate, an amino acid).

To study the relationship between these two cell types and their associated behaviors, the researchers used a technique called optogenetics. In optogenetics, neurons are genetically altered so that they express light-sensitive proteins from microbial organisms. Then, by shining a light on these modified neurons via a tiny fiber optic cable inserted into the brain, researchers can control the activity of the cells as well as their associated behaviors.

Using this optogenetic approach, Anderson's team was able to selectively switch on the neurons associated with social behaviors and those linked with asocial behaviors.

With the social neurons, the behavior that was elicited depended upon the intensity of the light signal. That is, when high-intensity light was used, the mice became aggressive in the presence of an intruder mouse. When lower-intensity light was used, the mice no longer attacked, although they were still socially engaged with the intruder—either initiating mating behavior or attempting to engage in social grooming.

When the neurons associated with asocial behavior were turned on, the mouse began self-grooming behaviors such as paw licking and face grooming while completely ignoring all intruders. The self-grooming behavior was repetitive and lasted for minutes even after the light was turned off.

The researchers could also use the light-activated neurons to stop the mice from engaging in particular behaviors. For example, if a lone mouse began spontaneously self-grooming, the researchers could halt this behavior through the optogenetic activation of the social neurons. Once the light was turned off and the activation stopped, the mouse would return to its self-grooming behavior.

Surprisingly, these two groups of neurons appear to interfere with each other's function: the activation of social neurons inhibits self-grooming behavior, while the activation of self-grooming neurons inhibits social behavior. Thus these two groups of neurons seem to function like a seesaw, one that controls whether mice interact with others or instead focus on themselves. It was completely unexpected that the two groups of neurons could be distinguished by whether they were excitatory or inhibitory. "If there was ever an experiment that 'carves nature at its joints,'" says Anderson, "this is it."
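As a purely illustrative sketch (a toy model, not the circuit the researchers mapped, with all names and parameters invented for the example), mutual inhibition between two rate-model populations reproduces this kind of winner-take-all behavior:

```python
# Toy "seesaw": two populations that inhibit each other. Whichever
# receives slightly more drive suppresses the other, so the circuit
# settles into one behavior at a time. All rates and weights are
# illustrative assumptions, not measurements from the study.

def simulate(drive_social: float, drive_groom: float,
             w_inhibit: float = 2.0, tau: float = 1.0,
             dt: float = 0.01, steps: int = 2000) -> tuple[float, float]:
    """Integrate firing rates of two mutually inhibitory populations."""
    s = g = 0.0  # activity of the "social" and "self-grooming" populations
    for _ in range(steps):
        # Each population relaxes toward its rectified net input.
        s += dt * (-s + max(0.0, drive_social - w_inhibit * g)) / tau
        g += dt * (-g + max(0.0, drive_groom - w_inhibit * s)) / tau
    return s, g

# A small bias in drive tips the seesaw one way or the other:
print(simulate(drive_social=1.1, drive_groom=1.0))  # social side wins
print(simulate(drive_social=1.0, drive_groom=1.1))  # grooming side wins
```

In a model like this, strengthening the drive to one side both activates that side's behavior and suppresses the other's, mirroring the cross-inhibition the team observed.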

This seesaw circuit, Anderson and his colleagues say, may have some relevance to human behavioral disorders such as autism.

"In autism," Anderson says, "there is a decrease in social interactions, and there is often an increase in repetitive, sometimes asocial or self-oriented, behaviors"—a phenomenon known as perseveration. "Here, by stimulating a particular set of neurons, we are both inhibiting social interactions and promoting these perseverative, persistent behaviors."

Studies from other laboratories have shown that mice with disruptions in genes implicated in autism display a similar decrease in social interaction and increase in repetitive self-grooming behavior, Anderson says. However, the current study helps to provide a needed link between gene activity, brain activity, and social behaviors, "and if you don't understand the circuitry, you are never going to understand how the gene mutation affects the behavior." Going forward, he says, such a complete understanding will be necessary for the development of future therapies.

But could this concept ever actually be used to modify a human behavior?

"All of this is very far away, but if you found the right population of neurons, it might be possible to override the genetic component of a behavioral disorder like autism, by just changing the activity of the circuits—tipping the balance of the see-saw in the other direction," he says.

The work was funded by the Simons Foundation, the National Institutes of Health, and the Howard Hughes Medical Institute. Caltech coauthors on the paper include Hong, who was the lead author, and graduate student Dong-Wook Kim.
