NuSTAR Reveals Radioactive Matter in Supernova Remnant

New details suggest how massive stars explode

Using its X-ray vision to observe what is left of a massive star that exploded long ago, NASA's Nuclear Spectroscopic Telescope Array (NuSTAR) spacecraft has shed new light on an old question: How exactly do stars go out with such a bang? For the first time, NuSTAR has mapped radioactive material from the core of such a supernova explosion. The results suggest that the core of the star actually sloshes around before shock waves rip it apart.

Between August 2012 and June 2013, NuSTAR trained its eyes multiple times on the Cassiopeia A (Cas A) remnant—the leftovers of a star that collapsed and exploded more than 11,000 years ago. With the observatory's sensitivity to high-energy X-rays, it was able to image and then map the distribution in Cas A of radioactive titanium-44, an isotope produced in the core of the exploding star. Members of the NuSTAR team report the observations in the February 20 issue of the journal Nature.

"We are excited about these new results. Probing supernova explosions is one of the things that NuSTAR was specifically designed to do," says Fiona Harrison, the Benjamin M. Rosen Professor of Physics and Astronomy at Caltech and NuSTAR's principal investigator. "NuSTAR is the only spacecraft currently capable of making the measurements that led to these new insights."

Although other powerful X-ray telescopes, such as NASA's Chandra X-ray Observatory and the European Space Agency's XMM-Newton, have imaged the Cas A remnant before, those observatories can only detect material that has been heated by the explosion. NuSTAR's specially coated optics and newly developed detectors allow it to image at higher energies. So what is particularly exciting about the NuSTAR map is that it shows all of the titanium-44, revealing both the heated and unheated material from the heart of the explosion.

"With NuSTAR we have a new forensic tool to investigate the explosion," says Brian Grefenstette, lead author of the paper, also from Caltech. "Previously, it was hard to interpret what was going on in Cas A because the material that we could see only glows in X-rays when it's heated up. Now that we can see the radioactive material, which glows in X-rays no matter what, we are getting a more complete picture of the core of the explosion."

 
Image caption: NuSTAR has provided the first observational evidence in support of a theory that says exploding stars slosh around before detonating. That theory, referred to as mild asymmetries, is shown in a simulation by Christian Ott, professor of theoretical astrophysics at Caltech.

The distribution of titanium-44 that NuSTAR observed suggests that supernova explosions of Cas A's kind are not completely symmetric, nor are they driven by powerful jets, as some had hypothesized. Instead, computer simulations that match the NuSTAR data suggest that stars like Cas A slosh around before exploding and therefore disperse the radioactive material at their cores in a mildly asymmetric way.

"When we try to recreate supernovas with spherical models, the shock wave from the initial collapse of the star's core stalls out," explains Harrison. "Our new results point to strong distortions of a spherical shape as key to the process of reenergizing the blast wave. The exploding star literally sloshes around before detonating."

As revealing as the NuSTAR findings are, they have also created a new mystery for scientists to ponder. Since both the iron and titanium in the remnant originated in the star's core, the researchers had expected to find significant overlap between the titanium-44 map and a previous map based on Chandra's observations of iron in the remnant. Instead, the two did not match up well. So, the researchers say, the case of the Cas A remnant is far from closed.

NuSTAR is a Small Explorer mission led by Caltech and managed by NASA's Jet Propulsion Laboratory (JPL) for NASA's Science Mission Directorate in Washington. Along with Harrison and Grefenstette, additional Caltech coauthors on the paper, "Mapping Cassiopeia A in Radioactive 44Ti: Probing the Explosion's Engine," are Kristin Madsen, Hiromasa Miyasaka, Vikram Rana, and JPL researcher Daniel Stern.

Writer: Kimm Fesenmaier

A Changing View of Bone Marrow Cells

Caltech researchers show that the cells are actively involved in sensing infection

In the battle against infection, immune cells are the body's offense and defense—some cells go on the attack while others block invading pathogens. It has long been known that a population of blood stem cells that resides in the bone marrow generates all of these immune cells. But most scientists have believed that blood stem cells participate in battles against infection in a delayed way, replenishing immune cells on the front line only after they become depleted.

Now, using a novel microfluidic technique, researchers at Caltech have shown that these stem cells might be more actively involved, sensing danger signals directly and quickly producing new immune cells to join the fight.

"It has been most people's belief that the bone marrow has the function of making these cells but that the response to infection is something that happens locally, at the infection site," says David Baltimore, president emeritus and the Robert Andrews Millikan Professor of Biology at Caltech. "We've shown that these bone marrow cells themselves are sensitive to infection-related molecules and that they respond very rapidly. So the bone marrow is actually set up to respond to infection."

The study, led by Jimmy Zhao, a graduate student in the UCLA-Caltech Medical Scientist Training Program, will appear in the April 3 issue of the journal Cell Stem Cell.

In the work, the researchers show that blood stem cells have all the components needed to detect an invasion and to mount an inflammatory response. They show, as others have previously, that these cells have on their surface a type of receptor called a toll-like receptor. The researchers then identify an entire internal response pathway that can translate activation of those receptors by infection-related molecules, or danger signals, into the production of cytokines, signaling molecules that can crank up immune-cell production. Interestingly, they show for the first time that the transcription factor NF-κB, known to be the central organizer of the immune response to infection, is part of that response pathway.

To examine what happens to a blood stem cell once it is activated by a danger signal, the Baltimore lab teamed up with chemists from the lab of James Heath, the Elizabeth W. Gilloon Professor and professor of chemistry at Caltech. They devised a microfluidic chip—fabricated in flexible silicone on a glass slide, complete with input and output ports, control valves, and thousands of tiny wells—that would enable single-cell analysis. At the bottom of each well, they attached DNA molecules in strips and introduced a flow of antibodies—pathogen-targeting proteins of the immune system—that had complementary DNA. They then added the stem cells along with infection-related molecules and incubated the whole sample. Since the antibodies were selected based on their ability to bind to certain cytokines, they specifically captured any of those cytokines released by the cells after activation. When the researchers added a secondary antibody and a dye, the cytokines lit up. "They all light up the same color, but you can tell which is which because you've attached the DNA in an orderly fashion," explains Baltimore. "So you've got both visualization and localization that tells you which molecule was secreted." In this way, they were able to measure, for example, that the cytokine IL-6 was secreted most frequently—by 21.9 percent of the cells tested.
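
To make the readout concrete, here is a minimal sketch—our illustration, not the team's software—of how a position-encoded antibody array can be decoded: each strip index in a well is assumed to map to one cytokine, so even a single-color dye identifies which protein was captured. The strip order, detection threshold, and toy measurements below are invented for the example.

```python
# Sketch of position-encoded readout: strip index -> cytokine identity.
# The strip order and threshold are illustrative assumptions.

STRIP_TO_CYTOKINE = ["IL-6", "TNF-a", "IL-3", "GM-CSF"]  # assumed order

def decode_well(strip_intensities, threshold=0.5):
    """Return the set of cytokines detected in one well."""
    return {
        STRIP_TO_CYTOKINE[i]
        for i, intensity in enumerate(strip_intensities)
        if intensity >= threshold
    }

def secretion_frequency(wells, cytokine):
    """Fraction of single-cell wells in which a cytokine was detected."""
    hits = sum(1 for w in wells if cytokine in decode_well(w))
    return hits / len(wells)

# Toy measurements: IL-6 shows up in 2 of 3 wells.
wells = [[0.9, 0.1, 0.2, 0.0], [0.7, 0.6, 0.0, 0.1], [0.2, 0.1, 0.0, 0.0]]
print(secretion_frequency(wells, "IL-6"))  # 0.666...
```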

"The experimental challenges here were significant—we needed to isolate what are actually quite rare cells, and then measure the levels of a dozen secreted proteins from each of those cells," says Heath. "The end result was sort of like putting on a new pair of glasses—we were able to observe functional properties of these stem cells that were totally unexpected."

The team found that blood stem cells produce a surprising number and variety of cytokines very rapidly. In fact, the stem cells are even more potent generators of cytokines than other previously known cytokine producers of the immune system. Once the cytokines are released, it appears that they are able to bind to their own cytokine receptors or those on other nearby blood stem cells. This stimulates the bound cells to differentiate into the immune cells needed at the site of infection.

"This does now change the view of the potential of bone marrow cells to be involved in inflammatory reactions," says Baltimore.

Heath notes that the collaboration benefited greatly from Caltech's support of interdisciplinary work. "It is a unique and fertile environment," he says, "one that encourages scientists from different disciplines to harness their disparate areas of expertise to solve tough problems like this one."

Additional coauthors on the paper, "Conversion of danger signals into cytokine signals by hematopoietic stem and progenitor cells for regulation of stress-induced hematopoiesis," are Chao Ma, Ryan O'Connell, Arnav Mehta, and Race DiLoreto. The work was supported by grants from the National Institute of Allergy and Infectious Diseases, the National Institutes of Health, a National Research Service Award, the UCLA-Caltech Medical Scientist Training Program, a Rosen Fellowship, a Pathway to Independence Award, and an American Cancer Society Research Grant.

Writer: Kimm Fesenmaier

NOvA Sees First Long-distance Neutrinos

The NOvA experiment, centered at the Department of Energy's Fermi National Accelerator Laboratory (Fermilab) near Chicago, has detected its first neutrinos.

Ryan Patterson, assistant professor of physics at Caltech and principal investigator for the Caltech NOvA team of eight researchers, states, "With these first neutrinos in hand, we celebrate the official start of our physics run. The data we collect with NOvA will provide a brand-new window on how neutrino masses arise and relate to one another, and whether there are new physical laws lurking in the neutrino sector of the standard model of particle physics."

Neutrinos are curious particles that travel at nearly the speed of light, rarely interacting with matter. The NOvA experiment, a collaboration of 208 scientists from 38 institutions, is scheduled to run for six years. It includes the Fermilab accelerator and two detectors, one located near Fermilab and the other some 500 miles away in Ash River, Minnesota, near the Canadian border.


Is Natural Gas a Solution to Mitigating Climate Change?

Methane, a key greenhouse gas, has more than doubled in concentration in Earth's atmosphere since 1750. Its increase is believed to be a leading contributor to climate change. But where is the methane coming from? Research by atmospheric chemist Paul Wennberg of the California Institute of Technology (Caltech) suggests that losses of natural gas—our "cleanest" fossil fuel—into the atmosphere may be a larger source than previously recognized.

Radiation from the sun warms Earth's surface, which then radiates heat back into the atmosphere. Greenhouse gases trap some of this heat. It is this process that makes life on Earth possible for beings such as ourselves, who could not tolerate the lower temperatures Earth would have if not for its "blanket" of greenhouse gases. However, as Goldilocks would tell you, there is "too hot" as well as "too cold," and the precipitous increase in greenhouse gases since the beginning of the Industrial Revolution is driving climate change, altering weather patterns, and raising sea levels. Carbon dioxide is the most prevalent greenhouse gas in Earth's atmosphere, but there are others as well, among them methane.

Those who are concerned about greenhouse gases have a very special enemy to fear in atmospheric methane. Methane has a trifecta of effects on the atmosphere. First, like other greenhouse gases, methane works directly to trap Earth's radiation in the atmosphere. Second, when methane oxidizes in Earth's atmosphere, it is broken into products that are themselves greenhouse gases: carbon dioxide and ozone. Third, the breakdown of methane in the atmosphere produces water vapor, which also functions as a greenhouse gas. Increased humidity, especially in the otherwise arid stratosphere where approximately 10 percent of methane is oxidized, further increases greenhouse-gas-induced climate change.

Fully one-third of the increase in radiative forcing (a measure of how much additional heat the changed atmosphere traps) since 1750 is estimated to be due to the presence and effects of methane. Because of the many potential sources of atmospheric methane, from landfills to wetlands to petroleum processing, it can be difficult to quantify which sources are making the greatest contribution. But according to Paul Wennberg, Caltech's R. Stanton Avery Professor of Atmospheric Chemistry and Environmental Science and Engineering, and his colleagues, it is possible that a significant source of methane, at least in the Los Angeles basin, is fugitive emissions—leaks—from the natural-gas supply line.

"This was a surprise," Wennberg explains of the results of his research on methane in the Los Angeles atmosphere. In an initial study conducted in 2008, Wennberg's team analyzed measurements from the troposphere, the lowest portion of Earth's atmosphere, via an airplane flying less than a mile above the ground over the Los Angeles basin.

In analyzing chemical signatures of the preliminary samples, Wennberg's team made an intriguing discovery: the signatures bore a striking similarity to the chemical profile of natural gas. Methane from fossil-fuel sources is normally accompanied by ethane—the second most common component of natural gas—whereas methane from biogenic sources (such as livestock and wastewater) is not. Indeed, the researchers found that the ratio of methane to ethane in the L.A. air samples was characteristic of the samples of natural gas provided by the Southern California Gas Company, the leading supplier of natural gas to the region.
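
The logic of that ratio test can be written as a simple two-source mixing estimate. The sketch below is our illustration, not the study's analysis code; the pipeline-gas signature and the sample enhancements are placeholder values, not measured numbers.

```python
# Two-endmember source attribution: fossil methane carries ethane,
# biogenic methane essentially does not, so the ethane:methane
# enhancement ratio of an air sample points toward its source.

PIPELINE_ETHANE_TO_METHANE = 0.03   # assumed pipeline-gas signature
BIOGENIC_ETHANE_TO_METHANE = 0.0    # livestock/wastewater emit ~no ethane

def fossil_fraction(delta_ethane, delta_methane,
                    fossil_ratio=PIPELINE_ETHANE_TO_METHANE):
    """Estimate the fraction of excess methane attributable to a fossil
    source, given enhancements (sample minus background) of both gases."""
    observed_ratio = delta_ethane / delta_methane
    return min(observed_ratio / fossil_ratio, 1.0)

# Toy sample: 100 ppb excess methane accompanied by 2.4 ppb excess ethane.
print(fossil_fraction(delta_ethane=2.4, delta_methane=100.0))  # 0.8
```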

Wennberg hesitates to pinpoint natural-gas leaks as the sole source of the L.A. methane, however. "Even though it looks like the methane/ethane could come from fugitive natural-gas emissions, it's certainly not all coming from this source," he says. "We're still drilling for oil in L.A., and that yields natural gas that includes ethane too."

The Southern California Gas Company reports very low losses in the delivery of natural gas (approximately 0.1 percent), and yet atmospheric data suggest that the source of methane from either the natural-gas infrastructure or petroleum production is closer to 2 percent of the total gas delivered to the basin. One possible way to reconcile these vastly different estimates is that significant losses of natural gas may occur after consumer metering in the homes, offices, and industrial plants that purchase natural gas. This loss of fuel is small enough to have no immediate negative impact on household users, but cumulatively it could be a major player in the concentration of methane in the atmosphere.

The findings of Wennberg and his colleagues have led to a more comprehensive study of greenhouse gases in urban settings, the Megacities Carbon Project, based at JPL. The goal of the project, which is focusing initially on ground-based measurements in Los Angeles and Paris, is to quantify greenhouse gases in the megacities of the world. Such cities—places like Hong Kong, Berlin, Jakarta, Johannesburg, Seoul, São Paulo, and Tokyo—are responsible for up to 75 percent of global carbon emissions, despite representing only 3 percent of the world's landmass. Documenting the types and sources of greenhouse gases in megacities will provide valuable baseline measurements that can be used in efforts to reduce greenhouse gas emissions.

If the findings of the Megacities Carbon Project are consistent with Wennberg's study of methane in Los Angeles, natural gas may be less of a panacea in the search for a "green" fuel. Natural gas has a cleaner emissions profile and a higher efficiency than coal (that is, it produces more power per molecule of carbon dioxide), but, as far as climate change goes, methods of extraction and distribution are key. "You have to dig it up, put it in the pipe, and burn it without losing more than a few percent," Wennberg says. "Otherwise, it's not nearly as helpful as you would think."

Wennberg's research was published in an article titled "On the Sources of Methane to the Los Angeles Atmosphere" in Environmental Science & Technology. Data for this study were provided by the Southern California Gas Company, NASA, NOAA, and the California Air Resources Board. The research was funded by NASA, the California Energy Commission's Public Interest Environmental Research program, the California Air Resources Board, and the U.S. Department of Energy.

Writer: Cynthia Eller

Caltech-Developed Method for Delivering HIV-Fighting Antibodies Proven Even More Promising

In 2011, biologists at the California Institute of Technology (Caltech) demonstrated a highly effective method for delivering HIV-fighting antibodies to mice—a treatment that protected the mice from infection by a laboratory strain of HIV delivered intravenously. Now the researchers, led by Nobel Laureate David Baltimore, have shown that the same procedure is just as effective against a strain of HIV found in the real world, even when transmitted across mucosal surfaces.

The findings, which appear in the February 9 advance online publication of the journal Nature Medicine, suggest that the delivery method might be effective in preventing vaginal transmission of HIV between humans.

"The method that we developed has now been validated in the most natural possible setting in a mouse," says Baltimore, president emeritus and the Robert Andrews Millikan Professor of Biology at Caltech. "This procedure is extremely effective against a naturally transmitted strain and by an intravaginal infection route, which is a model of how HIV is transmitted in most of the infections that occur in the world."

The new delivery method—called Vectored ImmunoProphylaxis, or VIP for short—is not exactly a vaccine. Vaccines introduce substances such as antigens into the body to try to get the immune system to mount an appropriate attack—to generate antibodies that can block an infection or T cells that can attack infected cells. In the case of VIP, a small, harmless virus is injected and delivers genes to the muscle tissue, instructing it to generate specific antibodies.  

The researchers emphasize that the work was done in mice and that the leap from mice to humans is large. The team is now working with the Vaccine Research Center at the National Institutes of Health to begin clinical evaluation.

The study, "Vectored immunoprophylaxis protects humanized mice from mucosal HIV transmission," was supported by the UCLA Center for AIDS Research, the National Institutes of Health, and the Caltech-UCLA Joint Center for Translational Medicine. Caltech biology researchers Alejandro B. Balazs, Yong Ouyang, Christin H. Hong, Joyce Chen, and Steven M. Nguyen also contributed to the study, as well as Dinesh S. Rao of the David Geffen School of Medicine at UCLA and Dong Sung An of the UCLA AIDS Institute.

Writer: Kimm Fesenmaier

Pinpointing the Brain’s Arbitrator

Caltech researchers ID a brain mechanism that weighs decisions

We tend to be creatures of habit. In fact, the human brain has a learning system that is devoted to guiding us through routine, or habitual, behaviors. At the same time, the brain has a separate goal-directed system for the actions we undertake only after careful consideration of the consequences. We switch between the two systems as needed. But how does the brain know which system to give control to at any given moment? Enter The Arbitrator.

Researchers at the California Institute of Technology (Caltech) have, for the first time, pinpointed areas of the brain—the inferior lateral prefrontal cortex and frontopolar cortex—that seem to serve as this "arbitrator" between the two decision-making systems, weighing the reliability of the predictions each makes and then allocating control accordingly. The results appear in the current issue of the journal Neuron.

According to John O'Doherty, the study's principal investigator and director of the Caltech Brain Imaging Center, understanding where the arbitrator is located and how it works could eventually lead to better treatments for brain disorders, such as drug addiction, and psychiatric disorders, such as obsessive-compulsive disorder. These disorders, which involve repetitive behaviors, may be driven in part by malfunctions in the degree to which behavior is controlled by the habitual system versus the goal-directed system.

"Now that we have worked out where the arbitrator is located, if we can find a way of altering activity in this area, we might be able to push an individual back toward goal-directed control and away from habitual control," says O'Doherty, who is also a professor of psychology at Caltech. "We're a long way from developing an actual treatment based on this for disorders that involve over-egging of the habit system, but this finding has opened up a highly promising avenue for further research."

In the study, participants played a decision-making game on a computer while connected to a functional magnetic resonance imaging (fMRI) scanner that monitored their brain activity. Participants were instructed to try to make optimal choices in order to gather coins of a certain color, which were redeemable for money.

During a pre-training period, the subjects familiarized themselves with the game—moving through a series of on-screen rooms, each of which held different numbers of red, yellow, or blue coins. During the actual game, the participants were told which coins would be redeemable each round and given a choice to navigate right or left at two stages, knowing that they would collect only the coins in their final room. Sometimes all of the coins were redeemable, making the task more habitual than goal-directed. By altering the probability of getting from one room to another, the researchers were able to further test the extent of participants' habitual and goal-directed behavior while monitoring corresponding changes in their brain activity.

With the results from those tests in hand, the researchers were able to compare the fMRI data and choices made by the subjects against several computational models they constructed to account for behavior. The model that most accurately matched the experimental data involved the two brain systems making separate predictions about which action to take in a given situation. Receiving signals from those systems, the arbitrator kept track of the reliability of the predictions by measuring the difference between the predicted and actual outcomes for each system. It then used those reliability estimates to determine how much control each system should exert over the individual's behavior. In this model, the arbitrator ensures that the system making the most reliable predictions at any moment exerts the greatest degree of control over behavior.
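The description above amounts to a small algorithm, sketched below under our own simplifying assumptions (this is an illustration, not the authors' published model): each system's reliability is tracked from its recent prediction errors, and control is allocated in proportion to those reliabilities. All constants and the 0-to-1 scaling of predictions are illustrative.

```python
# Reliability-weighted arbitration between two learning systems.
# Predictions and outcomes are assumed to lie in [0, 1].

class Arbitrator:
    def __init__(self, decay=0.9):
        self.decay = decay
        self.reliability = {"goal_directed": 0.5, "habitual": 0.5}

    def update(self, system, predicted, actual):
        """Reliability rises when a system's prediction error is small."""
        error = abs(predicted - actual)
        r = self.reliability[system]
        self.reliability[system] = self.decay * r + (1 - self.decay) * (1 - error)

    def control_weight(self):
        """Fraction of behavioral control given to the goal-directed system."""
        g = self.reliability["goal_directed"]
        h = self.reliability["habitual"]
        return g / (g + h)

arb = Arbitrator()
arb.update("habitual", predicted=1.0, actual=0.2)       # large error
arb.update("goal_directed", predicted=0.9, actual=1.0)  # small error
print(arb.control_weight())  # > 0.5: the goal-directed system dominates
```
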

"What we're showing is the existence of higher-level control in the human brain," says Sang Wan Lee, lead author of the new study and a postdoctoral scholar in neuroscience at Caltech. "The arbitrator is basically making decisions about decisions."

In line with previous findings from the O'Doherty lab and elsewhere, the researchers saw in the brain scans that an area known as the posterior putamen was active at times when the model predicted that the habitual system should be calculating prediction values. Going a step further, they examined the connectivity between the posterior putamen and the arbitrator. What they found might explain how the arbitrator sets the weight for the two learning systems: the connection between the arbitrator area and the posterior putamen changed according to whether the goal-directed or habitual system was deemed to be more reliable. However, no such connection effects were found between the arbitrator and brain regions involved in goal-directed learning.  This suggests that the arbitrator may work mainly by modulating the activity of the habitual system.

"One intriguing possibility arising from these findings, which we will need to test in future work, is that being in a habitual mode of behavior may be the default state," says O'Doherty. "So when the arbitrator determines you need to be more goal-directed in your behavior, it accomplishes this by inhibiting the activity of the habitual system, almost like pressing the breaks on your car when you are in drive."

The paper in Neuron is titled "Neural computations underlying arbitration between model-based and model-free learning." In addition to O'Doherty and Lee, Shinsuke Shimojo, the Gertrude Baltimore Professor of Experimental Psychology at Caltech, is also a coauthor. The work was completed with funding from the National Institutes of Health, the Gordon and Betty Moore Foundation, the Japan Science and Technology Agency, and the Caltech-Tamagawa Global COE Program. 

Writer: Kimm Fesenmaier

A Detailed Look at HIV in Action

Researchers gain a better understanding of the virus through electron microscopy

The human intestinal tract, or gut, is best known for its role in digestion. But this collection of organs also plays a prominent role in the immune system. In fact, it is one of the first parts of the body that is attacked in the early stages of an HIV infection. Knowing how the virus infects cells and accumulates in this area is critical to developing new therapies for the over 33 million people worldwide living with HIV. Researchers at the California Institute of Technology (Caltech) are the first to have utilized high-resolution electron microscopy to look at HIV infection within the actual tissue of an infected organism, providing perhaps the most detailed characterization yet of HIV infection in the gut.

The team's findings are described in the January 30 issue of PLOS Pathogens.

"Looking at a real infection within real tissue is a big advance," says Mark Ladinsky, an electron microscope scientist at Caltech and lead author of the paper. "With something like HIV, it's usually very difficult and dangerous to do because the virus is an infectious agent. We used an animal model implanted with human tissue so we can study the actual virus under, essentially, its normal circumstances."

Ladinsky worked with Pamela Bjorkman, Max Delbrück Professor of Biology at Caltech, to take three-dimensional images of normal cells along with HIV-infected tissues from the gut of a mouse model engineered to have a human immune system. The team used a technique called electron tomography, in which a tissue sample is embedded in plastic and placed under a high-powered microscope. Then the sample is tilted incrementally through a course of 120 degrees, and pictures are taken of it at one-degree intervals. All of the images are then very carefully aligned with one another and, through a process called back projection, turned into a 3-D reconstruction that allows different places within the volume to be viewed one pixel at a time.
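
For readers who want to see the tilt-series idea in code, the toy example below mimics it in two dimensions, using scikit-image's radon and iradon transforms as stand-ins for the microscope and the alignment/back-projection software. It is a conceptual sketch, not the lab's reconstruction pipeline.

```python
# Project a 2-D test image over a 120-degree range at 1-degree steps,
# then reconstruct it by (filtered) back projection.

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.25)      # small 2-D "specimen"
angles = np.arange(-60.0, 60.0, 1.0)              # 120-degree tilt series

sinogram = radon(image, theta=angles)             # one projection per tilt
reconstruction = iradon(sinogram, theta=angles)   # back projection

# The limited tilt range leaves a "missing wedge" of information, which is
# why real tomograms are blurrier along one axis.
print(image.shape, reconstruction.shape)
```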

"Most prior electron microscopy studies of HIV have focused on the virus itself or on infection of laboratory-grown cell cultures," says Bjorkman, who is also an investigator with the Howard Hughes Medical Institute. "Ours is the first major electron microscopy study to look at HIV interacting with other cells in the actual gut tissue of an infected animal model."

By procuring such detailed images, Ladinsky and Bjorkman were able to confirm several observations of HIV made in prior, in vitro studies, including the structural details and behavior of the virus as it buds off of infected cells and moves into the surrounding tissue. The team also described several novel observations, including the existence of "pools" of HIV in between cells; evidence that HIV can infect new cells both by direct contact and by free virus released into the same tissue; and the finding that pools of HIV can be found deep in the gut.

"The study suggests that an infected cell releases newly formed viruses in a semisynchronous wave pattern," explains Ladinsky. "It doesn't look like one virus buds off and then another in a random way. Rather, it appears that groups of virus bud off from a given cell within a certain time frame and then, a little while later, another group does the same, and then another, and so on."

The team came to this conclusion by identifying single infected cells using electron microscopy. Then they looked for HIV particles at different distances from the original cell and saw that the groups of particles were more mature as their distance from the infected cell increased.

"This finding showed that indeed these cells were producing waves of virus rather than individual ones, which was a neat observation," says Ladinsky.

In addition to producing waves of virus, infected cells are also thought to spread HIV through direct contact with their neighbors. Bjorkman and Ladinsky were able to visualize this phenomenon, known as a virological synapse, using electron microscopy.

"We were able to see one cell producing a viral bud that is contacting the cell next to it, suggesting that it's about to infect directly," Ladinsky says. "The space between those two cells represents the virological synapse."

Finally, the team found pools of HIV accumulating between cells where there was no indication of a virological synapse. This suggested that a virological synapse, which may be protected from some of the body's immune defenses, is not the only way in which HIV can infect new cells. The finding of HIV transfer via pools of free virus offers hope that treatment with protein-based drugs, such as antibodies, could be an effective means of augmenting or replacing current treatment regimens that use small-molecule antiretroviral drugs.

"We saw these pools of virus in places where we had not initially expected to see them, down deep in the intestine," he explains. "Most of the immune cells in the gut are found higher up, so finding large amounts of the virus in the crypt regions was surprising."

The team will continue their efforts to look at HIV and related viruses under natural conditions using additional animal models, and potentially people.

"The end goal is to look at a native infection in human tissue to get a real picture of how it's working inside the body, and hopefully make a positive difference in fighting this epidemic," says Bjorkman.

Additional authors on the PLOS Pathogens paper, "Electron Tomography of HIV-1 Infection in Gut-Associated Lymphoid Tissue," are Collin Kieffer, a postdoctoral scholar in biology at Caltech; Gregory Olson and Douglas S. Kwon from the Ragon Institute of Massachusetts General Hospital (MGH), MIT, and Harvard; and Maud Deruaz, Vladimir Vrbanac, and Andrew M. Tager from MGH and Harvard Medical School. The work was supported by the Center for the Structural Biology of Cellular Host Elements in Egress, Trafficking and Assembly of HIV (CHEETAH).

 

Writer: Katie Neith

Worry on the Brain

Caltech researchers pinpoint neural circuitry that promotes stress-induced anxiety

According to the National Institute of Mental Health, over 18 percent of American adults suffer from anxiety disorders, characterized as excessive worry or tension that often leads to other physical symptoms. Previous studies of anxiety in the brain have focused on the amygdala, an area known to play a role in fear. But a team of researchers led by biologists at the California Institute of Technology (Caltech) had a hunch that understanding a different brain area, the lateral septum (LS), could provide more clues into how the brain processes anxiety. Their instincts paid off—using mouse models, the team has found a neural circuit that connects the LS with other brain structures in a manner that directly influences anxiety.

"Our study has identified a new neural circuit that plays a causal role in promoting anxiety states," says David Anderson, the Seymour Benzer Professor of Biology at Caltech, and corresponding author of the study. "Part of the reason we lack more effective and specific drugs for anxiety is that we don't know enough about how the brain processes anxiety. This study opens up a new line of investigation into the brain circuitry that controls anxiety."

The team's findings are described in the January 30 issue of the journal Cell.

Led by Todd Anthony, a senior research fellow at Caltech, the researchers decided to investigate the so-called septohippocampal axis because previous studies had implicated this circuit in anxiety, and had also shown that neurons in a structure located within this axis—the LS—lit up, or were activated, when anxious behavior was induced by stress in mouse models. But does the fact that the LS is active in response to stressors mean that this structure promotes anxiety, or does it mean that this structure acts to limit anxiety responses following stress? The prevailing view in the field was that the nerve pathways that connect the LS with different brain regions function as a brake on anxiety, to dampen a response to stressors. But the team's experiments showed that the exact opposite was true in their system.

In the new study, the team used optogenetics—a technique that uses light to control neural activity—to artificially activate a set of specific, genetically identified neurons in the LS of mice. During this activation, the mice became more anxious. Moreover, the researchers found that even a brief, transient activation of those neurons could produce a state of anxiety lasting for at least half an hour. This indicates that not only are these cells involved in the initial activation of an anxious state, but also that an anxious state persists even after the neurons are no longer being activated.

"The counterintuitive feature of these neurons is that even though activating them causes more anxiety, the neurons are actually inhibitory neurons, meaning that we would expect them to shut off other neurons in the brain," says Anderson, who is also an investigator with the Howard Hughes Medical Institute (HHMI).

So, if these neurons are shutting off other neurons in the brain, then how can they increase anxiety? The team hypothesized that the process might involve a double-inhibitory mechanism: two negatives make a positive. When they took a closer look at exactly where the LS neurons were making connections in the brain, they saw that they were inhibiting other neurons in a nearby area called the hypothalamus. Importantly, most of those hypothalamic neurons were, themselves, inhibitory neurons. Moreover, those hypothalamic inhibitory neurons, in turn, connected with a third brain structure called the paraventricular nucleus, or PVN. The PVN is well known to control the release of hormones like cortisol in response to stress and has been implicated in anxiety.

This anatomical circuit seemed to provide a potential double-inhibitory pathway through which activation of the inhibitory LS neurons could lead to an increase in stress and anxiety. The team reasoned that if this hypothesis were true, then artificial activation of LS neurons would be expected to cause an increase in stress hormone levels, as if the animal were stressed. Indeed, optogenetic activation of the LS neurons increased the level of circulating stress hormones, consistent with the idea that the PVN was being activated. Moreover, inhibition of LS projections to the hypothalamus actually reduced the rise in cortisol when the animals were exposed to stress. Together these results strongly supported the double-negative hypothesis.
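
The double-negative logic is easy to state precisely: signs multiply along the pathway, so inhibiting an inhibitor yields net excitation. A two-line sketch of our own, purely for illustration:

```python
# Connection signs along the pathway: -1 denotes an inhibitory projection.
LS_TO_HYPOTHALAMUS = -1   # LS inhibits hypothalamic neurons
HYPOTHALAMUS_TO_PVN = -1  # those neurons are themselves inhibitory

net_effect_on_pvn = LS_TO_HYPOTHALAMUS * HYPOTHALAMUS_TO_PVN  # (-1)*(-1) = +1
print("Activating LS", "excites" if net_effect_on_pvn > 0 else "inhibits", "PVN")
# -> Activating LS excites PVN, consistent with elevated stress hormones.
```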

"The most surprising part of these findings is that the outputs from the LS, which were believed primarily to act as a brake on anxiety, actually increase anxiety," says Anderson.

Knowing the sign—positive or negative—of the effect of these cells on anxiety, he says, is a critical first step to understanding what kind of drug one might want to develop to manipulate these cells or their molecular constituents. If the cells had been found to inhibit anxiety, as originally thought, then one would want to find drugs that activate these LS neurons, to reduce anxiety. However, since the group found that these neurons instead promote anxiety, then to reduce anxiety a drug would have to inhibit these neurons.

"We are still probably a decade away from translating this very basic research into any kind of therapy for humans, but we hope that the information that this type of study yields about the brain will put the field and medicine in a much better position to develop new, rational therapies for psychiatric disorders," says Anderson. "There have been very few new psychiatric drugs developed in the last 40 to 50 years, and that's because we know so little about the brain circuitry that controls the emotions that go wrong in a psychiatric disorder like depression or anxiety."

The team will continue to map out this area of the brain in greater detail to understand more about its role in controlling stress-induced anxiety.

"There is no shortage of new questions that have been raised by these findings," Anderson says. "It may seem like all that we've done here is dissect a tiny little piece of brain circuitry, but it's a foothold onto a very big mountain. You have to start climbing someplace."

Additional authors on the Cell paper, "Control of Stress-Induced Persistent Anxiety by an Extra-Amygdala Septohypothalamic Circuit," are Walter Lerchner from the National Institutes of Health (NIH), Nick Dee and Amy Bernard from the Allen Institute for Brain Science, and Nathaniel Heintz from The Rockefeller University and HHMI. The work was supported by NIH, HHMI, and the Beckman Institute at Caltech.

Writer: Katie Neith

From Rivers to Landslides: Charting the Slopes of Sediment Transport

In the Earth Surface Dynamics Lab at the California Institute of Technology (Caltech), the behavior of rivers is modeled with artificial rivers—flumes—through which water can be pumped at varying rates over a variety of carefully graded sediments while drag force and acceleration are measured. The largest flume is a 12-meter tilting version that can model many river conditions; another models the languid process of a nearly flat river bed forming a delta as it reaches a pool. Additional flumes are constructed in the lab on an as-needed basis, as in a recent study testing sediment transport in very steep channels.

One such newly constructed flume demonstrates that the slope of streambeds has dramatic and unexpected effects on sediment transport. Logic would suggest that steeper streambeds should allow for easy sediment transport since, as the angle of the slope increases, gravity should assist with moving water and sediment downstream. But experimental data from the flume lab show that gravity does not facilitate sediment transport in the expected manner. Furthermore, in very steep streambeds with a 22-degree or higher slope, sediment motion begins not with grains skipping and bouncing along the bottom of the streambed, but rather with a complete bed failure in which all the sediment is abruptly sent hurtling downstream as a debris flow.

"Most previous work was done on low-gradient channels with a gentle slope," says Michael P. Lamb, assistant professor of geology at Caltech. "These are the rivers, like the Mississippi, where people live and pilot boats, and where we worry about flooding. Low-gradient channels have been studied by civil engineers for hundreds of years." Much less attention has been paid to steeper mountain channels, in part because they are more difficult to study. "Counterintuitively, in steep channels sediment rarely moves, and when it does it is extremely dangerous to measure since it typically includes boulders and large cobbles," explains Lamb.

And so Lamb, along with Caltech graduate student Jeff Prancevic and staff scientist Brian Fuller, set out to model the behavior of steep channels on an artificial watercourse—a flume—that they created for just this purpose. They intentionally removed key variables that occur in nature, such as unevenness in grain size and in the streambed itself (in steep channels there are often varying slopes with waterfalls and pools), so that they could concentrate solely on the effect of bed slope on sediment transport. They created a uniform layer of gravel on the bed of the flume and then began running water down it in increasing quantities, measuring how much water was required to initiate sediment motion. Gradually they tilted the flume to steeper angles, continuing to observe when and how sediment moved as water was added to the system.

Based on studies of sediment motion in low-gradient channels, geologists have long assumed that there is a linear relation between a watercourse's slope and the stress placed by water and gravity on the streambed. That is, as the angle of the streambed increases, the quantity of water required to move sediment should decrease in a simple 1-to-1 ratio. Lamb and Prancevic's flume experiments did indeed show that steeper slopes require less water to move sediment than flatter streambeds. But contrary to earlier predictions, one cannot simply raise the slope by, say, 2 percent while decreasing the water depth by 2 percent and see the same pattern of sediment transport. Instead, as the flume tilted upward in these experiments, a proportionately greater amount of water was needed to initiate sediment motion. By the time the flume was tilted to a slope of 20 degrees, five times the depth of water as previously predicted was needed to move the gravel downstream.
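
The "simple 1-to-1" expectation can be written down directly. In the conventional picture, grains move once the bed shear stress rho*g*H*sin(theta) exceeds a fixed critical value, so the critical flow depth H falls off as 1/sin(theta): steeper beds should need proportionately less water. The sketch below encodes that classic assumption with illustrative parameter values (grain size, critical Shields number); it is our paraphrase of the standard model, not the study's code.

```python
# Naive (constant critical Shields stress) prediction of the flow depth
# needed to move gravel, as a function of bed slope.

import math

RHO_W, RHO_S = 1000.0, 2650.0   # water, sediment density (kg/m^3)
G, D = 9.81, 0.01               # gravity (m/s^2), grain diameter (m)
TAU_STAR_C = 0.045              # assumed constant critical Shields number

def naive_critical_depth(slope_deg):
    """Flow depth at which bed shear stress first moves grains, under the
    classic linear assumption the experiments tested."""
    tau_c = TAU_STAR_C * (RHO_S - RHO_W) * G * D
    return tau_c / (RHO_W * G * math.sin(math.radians(slope_deg)))

for slope in (1, 10, 20):
    print(f"{slope:>2} deg: H_crit ~ {100 * naive_critical_depth(slope):.1f} cm")
# Per the experiments, at a 20-degree slope roughly 5x this naive depth
# was actually required before the gravel moved.
```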

At one level, this experimental data squares with field observations. "If you go out to the Mississippi," says Lamb, "sand is moving almost all the time along the bed of the river. But in mountain channels, the sediment that makes up the bed of the river very rarely moves except during extreme flood events. This sediment is inherently more stable, which is the opposite of what you might expect." The explanation for why this is the case seems to lie with the uneven terrain and shallow waters common to streams in steep mountain terrain.

Experiments with the tilting flume also allowed Lamb and Prancevic to simulate important transitions in sediment transport: from no motion at all, to normal fluvial conditions in which sediment rolls along the streambed, to bed failure, in which the entire sediment bed gives way in a debris flow, stripping the channel down to bedrock. The researchers found that with lower slopes, as the water discharge was increased, individual grains of sediment began to break free and tumble along the flume bed; this pattern is common to the sediment-movement processes of low-gradient riverbeds. As the slope increased, the sediment became more stable, requiring proportionately more water to begin sediment transport. Eventually, the slope reached a transition zone where regular river processes were completely absent. In these steeply sloped flumes, the first sediment motion that occurred represented a complete bed failure, in which all of the grains slid down the channel en masse. "This suggests that there's a certain slope, around 22 degrees in our experiments, where sediment is the most stable, but these channel slopes are also potentially the most dangerous because here the sediment bed can fail catastrophically in rare, large-magnitude flood events," Lamb explains.

Researchers previously believed that debris flows in mountain terrain primarily derived from rainfall-triggered landslides flowing into watercourses from surrounding hillsides. However, the flume-lab experiments suggest that a debris flow can occur in a steep river channel in the absence of such a landslide, simply as a result of increased water discharge over the streambed.

"Understanding when and how sediment first moves at different channel slopes can be used to predict the occurrence of debris flows which affect people and infrastructure," Lamb says. There are other, wide-ranging implications. For example, some fish, like salmon, build their nests only in gravel of a certain size, he notes, and so, "as rivers are increasingly being restored for fish habitat, it is important to know what slopes and flow depths will preserve a particular size of gravel on the riverbed." In addition, he adds, "a better understanding of sediment transport can be used to reconstruct environments of Earth's past or on other planets, such as Mars, through observations of previously moved sediment, now preserved in deposits."

The paper, "Incipient sediment motion across the river to debris-flow transition," appears in the journal Geology. Funding was provided by the National Science Foundation, the Terrestrial Hazard Observation and Reporting Center at Caltech, and the Keck Institute for Space Studies.

Writer: Cynthia Eller

Galaxies on FIRE: Star Feedback Results in Less Massive Galaxies

For decades, astrophysicists have encountered a puzzling contradiction: although many galaxy-formation models—simulations of how matter is distributed in our universe—predict that the majority of the "normal" matter should end up in stars at the centers of galaxies, in actuality these stars account for less than 10 percent of that matter. A new set of simulations offers insight into this mismatch between the models and reality: the energy released by individual stars within galaxies can have a substantial effect on where matter is located in the universe.

The Feedback in Realistic Environments, or FIRE, project is the culmination of a multiyear, multiuniversity effort that—for the first time—simulates the evolution of galaxies from shortly after the Big Bang through today. The first simulation to factor in the realistic effects of stars on their galaxies, FIRE results suggest that the radiation from stars is powerful enough to push matter out of galaxies. And this push is enough to account for the "missing" galactic mass in previous calculations, says Philip Hopkins, assistant professor of theoretical astrophysics at the California Institute of Technology (Caltech) and lead author of a paper resulting from the project.

"People have guessed for a long time that the 'missing physics' in these models was what we call feedback from stars," Hopkins says. "When stars form, they should have a dramatic impact on the galaxies in which they arise, through the radiation they emit, the winds they blow off of their surfaces, and their explosions as supernovae. Previously, it has not been possible to directly follow any of these processes within a galaxy, so the earlier models simply estimated—indirectly—the impact of these effects."

By incorporating the data of individual stars into whole-galaxy models, Hopkins and his colleagues can look at the actual effects of star feedback—how radiation from stars "pushes" on galactic matter—in each of the galaxies they study. With new and improved computer codes, Hopkins and his colleagues can now focus their model on specific galaxies, using what are called zoom-in simulations. "Zoom-in simulations allow you to 'cut out' and study just the region of the universe—a few million light-years across, for example—around what's going to become the galaxy you care about," he says. "It would be crazy expensive to run simulations of the entire universe—about 50 billion light-years across—all at once, so you just pick one galaxy at a time, and you concentrate all of your resolution there."
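
The arithmetic behind that cost argument is stark. Using the article's round figures, a zoom-in region a few million light-years across occupies a vanishingly small fraction of the universe's volume, which is roughly the factor by which the computation shrinks at fixed resolution:

```python
# Volume ratio behind the zoom-in strategy (round numbers from the text).
zoom_size_ly = 5e6        # "a few million light-years across"
universe_size_ly = 5e10   # "about 50 billion light-years across"

volume_fraction = (zoom_size_ly / universe_size_ly) ** 3
print(f"Zoom region is {volume_fraction:.0e} of the universe's volume")  # 1e-12
```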

A zoomed-in view of evolving stars within galaxies allows the researchers to see the radiation from stars and supernovae explosions blowing large amounts of material out of those galaxies. When they calculate the amount of matter lost from the galaxies during these events, that feedback from stars in the simulation accurately accounts for the low masses that have been actually observed in real galaxies. "The big thing that we are able to explain is that real galaxies are much less massive than they would be if these feedback processes weren't operating," he says. "So if you care about the structure of a galaxy, you really need to care about star formation and supernovae—and the effect of their feedback on the galaxy."

But once stars push this matter out of the galaxy, where does it go?

That's a good question, Hopkins says—and one that the researchers hope to answer by combining their simulations with new observations in the coming months.

"Stars and supernovae seem to produce these galactic superwinds that blow material out into what we call the circum- and intergalactic medium—the space around and between galaxies. It's really timely for us because there are a lot of new observations of the gas in this intergalactic medium right now, many of them coming from Caltech," Hopkins says. "For example, people have recently found that there are more heavy elements floating around a couple hundred thousand light-years away from a galaxy than are actually inside the galaxy itself. You can track the lost matter by finding these heavy elements; we know they are only made in the fusion in stars, so they had to be inside a galaxy at some point. This fits in with our picture and we can now actually start to map out where this stuff is going."

Although the FIRE simulations can accurately account for the low mass of small- to average-size galaxies, the physics included, as in previous models, can't explain all of the missing mass in very large galaxies—like those larger than our Milky Way. Hopkins and his colleagues have hypothesized that black holes at the centers of these large galaxies might release enough energy to push out the rest of the matter not blown out by stars. "The next step for the simulations is accounting for the energy from black holes that we've mostly ignored for now," he says.

The information provided by the FIRE simulations shows that feedback from stars can alter the growth and history of galaxies in a much more dramatic way than anyone had previously anticipated, Hopkins says. "We've just begun to explore these new surprises, but we hope that these new tools will enable us to study a whole host of open questions in the field."

These results were submitted to the Monthly Notices of the Royal Astronomical Society on November 8, 2013 in a paper titled "Galaxies on FIRE (Feedback In Realistic Environments): Stellar Feedback Explains Cosmologically Inefficient Star Formation." In addition to Hopkins, other authors on the paper include Dušan Kereš, UC San Diego; José Oñorbe and James S. Bullock, UC Irvine; Claude-André Faucher-Giguère, Northwestern University; Eliot Quataert, UC Berkeley; and Norman Murray, the Canadian Institute for Theoretical Astrophysics. Hopkins's work was funded by the National Science Foundation and a NASA Einstein Postdoctoral Fellowship, as well as the Gordon and Betty Moore Foundation.

