NuSTAR Discovers Impossibly Bright Dead Star

X-ray source in the Cigar Galaxy is the first ultraluminous pulsar ever detected

Astronomers working with NASA's Nuclear Spectroscopic Telescope Array (NuSTAR), led by Caltech's Fiona Harrison, have found a pulsating dead star beaming with the energy of about 10 million suns. The object, previously thought to be a black hole because it is so powerful, is in fact a pulsar—the incredibly dense rotating remains of a star.

"This compact little stellar remnant is a real powerhouse. We've never seen anything quite like it," says Harrison, NuSTAR's principal investigator and the Benjamin M. Rosen Professor of Physics at Caltech. "We all thought an object with that much energy had to be a black hole."

Dom Walton, a postdoctoral scholar at Caltech who works with NuSTAR data, says that with its extreme energy, this pulsar takes the top prize in the weirdness category. Pulsars are typically between one and two times the mass of the sun. This new pulsar presumably falls in that same range but shines about 100 times brighter than theory suggests something of its mass should be able to.

"We've never seen a pulsar even close to being this bright," Walton says. "Honestly, we don't know how this happens, and theorists will be chewing on it for a long time." Besides being weird, the finding will help scientists better understand a class of very bright X-ray sources, called ultraluminous X-ray sources (ULXs).

Harrison, Walton, and their colleagues describe NuSTAR's detection of this first ultraluminous pulsar in a paper that appears in the current issue of Nature.

"This was certainly an unexpected discovery," says Harrison. "In fact, we were looking for something else entirely when we found this."

Earlier this year, astronomers in London detected a spectacular, once-in-a-century supernova (dubbed SN2014J) in a relatively nearby galaxy known as Messier 82 (M82), or the Cigar Galaxy, 12 million light-years away. Because of the rarity of that event, telescopes around the world and in space adjusted their gaze to study the aftermath of the explosion in detail.


This animation shows a neutron star—the core of a star that exploded in a massive supernova. This particular neutron star is known as a pulsar because it sends out rotating beams of X-rays that sweep past Earth like lighthouse beacons. (Credit: NASA/JPL-Caltech)

Besides the supernova, M82 harbors a number of other ULXs. When Matteo Bachetti of the Université de Toulouse in France, the lead author of this new paper, took a closer look at these ULXs in NuSTAR's data, he discovered that something in the galaxy was pulsing, or flashing light.

"That was a big surprise," Harrison says. "For decades everybody has thought these ultraluminous X-ray sources had to be black holes. But black holes don't have a way to create this pulsing."

But pulsars do. They are like giant magnets that emit radiation from their magnetic poles. As they rotate, an outside observer with an X-ray telescope, situated at the right angle, would see flashes of powerful light as the beam swept periodically across the observer's field of view, like a lighthouse beacon.

The reason most astronomers had assumed black holes were powering ULXs is that these X-ray sources are so incredibly bright. Black holes can be anywhere from 10 to billions of times the mass of the sun, making their gravitational tug much stronger than that of a pulsar. As matter falls onto a black hole, its gravitational energy is converted into heat, which creates X-ray light. The bigger the black hole, the more energy there is to make the object shine.

Surprised to see the flashes coming from M82, the NuSTAR team checked and rechecked the data. The flashes were really there, with a pulse showing up every 1.37 seconds.

The next step was to figure out which X-ray source was producing the flashes. Walton and several other Caltech researchers analyzed the data from NuSTAR and a second NASA X-ray telescope, Chandra, to rule out about 25 different X-ray sources, finally settling on a ULX known as M82X-2 as the source of the flashes.

With the pulsar and its location within M82 identified, many questions remain. The pulsar's brightness is many times higher than the Eddington limit, a basic physics guideline that sets an upper limit on the brightness that an object of a given mass should be able to achieve.
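
For scale, the Eddington luminosity can be estimated from standard textbook physics (the figures below are generic estimates, not values from the Nature paper):

\[
L_{\mathrm{Edd}} \;=\; \frac{4\pi G M m_p c}{\sigma_T} \;\approx\; 1.26\times10^{38}\,\left(\frac{M}{M_\odot}\right)\ \mathrm{erg\ s^{-1}}
\]

For a pulsar of one to two solar masses this works out to roughly 1–3 × 10^38 erg per second, while an object shining with the energy of about 10 million suns radiates on the order of 10^40 erg per second, exceeding the limit by roughly two orders of magnitude.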

"This is the most extreme violation of that limit that we've ever seen," says Walton. "We have known that things can go above that by a small amount, but this blows that limit away."

NuSTAR is particularly well suited to make discoveries like this one. Not only does the space telescope see high-energy X-rays, but it sees them in a unique way. Rather than snapping images the way your cell-phone camera does—by integrating the light such that images blur if you move—NuSTAR detects individual particles of X-ray light and records when each one arrives. That allows the team to do timing analyses and, in this case, to see that the light from the ULX was coming in pulses.
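
To illustrate the kind of timing analysis this makes possible, here is a minimal, self-contained sketch of pulse-period detection by epoch folding of photon arrival times. It is purely illustrative: the simulated photons and the simple chi-square search are stand-ins, not the NuSTAR team's actual pipeline.

```python
# Minimal sketch of pulse-period detection by epoch folding.
# Illustrative only -- not the NuSTAR team's actual analysis pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Simulate photon arrival times (seconds) for a source pulsing every 1.37 s:
# a steady background plus a sinusoidally modulated pulsed component.
true_period = 1.37
duration = 2000.0                      # total exposure, seconds
n_photons = 20000
t = np.sort(rng.uniform(0, duration, n_photons))
phase = (t / true_period) % 1.0
keep = rng.uniform(size=n_photons) < 0.6 + 0.4 * np.cos(2 * np.pi * phase)
arrival_times = t[keep]

def folded_chi2(times, period, n_bins=16):
    """Fold arrival times at a trial period and measure how non-uniform
    the resulting phase histogram is (larger = more strongly pulsed)."""
    phases = (times / period) % 1.0
    counts, _ = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    expected = counts.mean()
    return ((counts - expected) ** 2 / expected).sum()

# Search a grid of trial periods around the expected value.
trial_periods = np.linspace(1.30, 1.45, 3001)
scores = np.array([folded_chi2(arrival_times, p) for p in trial_periods])
best = trial_periods[np.argmax(scores)]
print(f"best-fit period: {best:.4f} s (true value {true_period} s)")
```

A real analysis would also have to correct the arrival times for the spacecraft's motion and other effects that this sketch ignores.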

Now that the NuSTAR team has shown that this ULX is a pulsar, Harrison points out that many other known ULXs may in fact be pulsars as well. "Everybody had assumed all of these sources were black holes," she says. "Now I think people have to go back to the drawing board and decide whether that's really true. This could just be a very unique, strange object, or it could be that they're not that uncommon. We just don't know. We need more observations to see if other ULXs are pulsing."

Along with Harrison and Walton, additional Caltech authors on the paper, "An Ultraluminous X-ray Source Powered by an Accreting Neutron Star," are postdoctoral scholars Felix Fürst and Shriharsh Tendulkar; research scientists Brian W. Grefenstette and Vikram Rana; and Shri Kulkarni, the John D. and Catherine T. MacArthur Professor of Astronomy and Planetary Science and director of the Caltech Optical Observatories. The work was supported by NASA and made use of data supplied by the UK Swift Science Data Centre at the University of Leicester.

Writer: Kimm Fesenmaier

Swimming Sea-Monkeys Reveal How Zooplankton May Help Drive Ocean Circulation

Brine shrimp, which are sold as pets known as Sea-Monkeys, are tiny—only about half an inch long each. With about 10 small leaf-like fins that flap about, they look as if they could hardly make waves.

But get billions of similarly tiny organisms together and they can move oceans.

It turns out that the collective swimming motion of Sea-Monkeys and other zooplankton—swimming plankton—can generate enough swirling flow to potentially influence the circulation of water in oceans, according to a new study by Caltech researchers.

The effect could be as strong as that of the wind and tides, the main factors known to drive the up-and-down mixing of oceans, says John Dabiri, professor of aeronautics and bioengineering at Caltech. According to the new analysis by Dabiri and mechanical engineering graduate student Monica Wilhelmus, organisms like brine shrimp, despite their diminutive size, may play a significant role in stirring up nutrients, heat, and salt in the sea—major components of the ocean system.

In 2009, Dabiri's research team studied jellyfish to show that small animals can generate flow in the surrounding water. "Now," Dabiri says, "these new lab experiments show that similar effects can occur in organisms that are much smaller but also more numerous—and therefore potentially more impactful in regions of the ocean important for climate."

The researchers describe their findings in the journal Physics of Fluids.

Brine shrimp (specifically Artemia salina) can be found in toy stores, as part of kits that allow you to raise a colony at home. But in nature, they live in bodies of salty water, such as the Great Salt Lake in Utah. Their behavior is cued by light: at night, they swim toward the surface to munch on photosynthesizing algae while avoiding predators. During the day, they sink back into the dark depths of the water.


A. salina (a species of brine shrimp, commonly known as Sea-Monkeys) begin a vertical migration, stimulated by a vertical blue laser light.

To study this behavior in the laboratory, Dabiri and Wilhelmus use a combination of blue and green lasers to induce the shrimp to migrate upward inside a big tank of water. The green laser at the top of the tank provides a bright target for the shrimp to swim toward while a blue laser rising along the side of the tank lights up a path to guide them upward.

The tank water is filled with tiny, silver-coated hollow glass spheres 13 microns wide (about one-half of one-thousandth of an inch). By tracking the motion of those spheres with a high-speed camera and a red laser that is invisible to the organisms, the researchers can measure how the shrimp's swimming causes the surrounding water to swirl.
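
As a rough illustration of how such tracer images can be turned into flow measurements, the sketch below cross-correlates two synthetic particle images and reads off the displacement of the correlation peak, the core idea behind particle image velocimetry. It is a toy example under simplifying assumptions, not the researchers' actual processing code.

```python
# Toy sketch: estimate particle displacement between two frames by
# FFT-based cross-correlation (the core idea of particle image velocimetry).
# Illustrative only -- not the researchers' actual processing code.
import numpy as np

rng = np.random.default_rng(0)

# Fake pair of frames: random "particles", second frame shifted by (3, 1) pixels.
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (3, 1), axis=(0, 1))

# Circular cross-correlation of the two frames via FFTs.
corr = np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)

# Map the peak location to a signed displacement.
dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy
dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
print(f"estimated displacement: ({dy}, {dx}) pixels per frame interval")
```

Repeating this over a grid of small interrogation windows yields a velocity field, from which swirling motion like that described below can be quantified.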

Although researchers had proposed the idea that swimming zooplankton can influence ocean circulation, the effect had never been directly observed, Dabiri says. Past studies could only analyze how individual organisms disturb the water surrounding them.

But thanks to this new laser-guided setup, Dabiri and Wilhelmus have been able to determine that the collective motion of the shrimp creates powerful swirls—stronger than would be produced by simply adding up the effects produced by individual organisms.

Adding up the effect of all of the zooplankton in the ocean—assuming they have a similar influence—could inject as much as a trillion watts of power into the oceans to drive global circulation, Dabiri says. In comparison, the winds and tides contribute a combined two trillion watts.

Using this new experimental setup will enable future studies to better untangle the complex relationships between swimming organisms and ocean currents, Dabiri says. "Coaxing Sea-Monkeys to swim when and where you want them to is even more difficult than it sounds," he adds. "But Monica was undeterred over the course of this project and found a creative solution to a very challenging problem."

The title of the Physics of Fluids paper is "Observations of large-scale fluid transport by laser-guided plankton aggregations." The research was supported by the U.S.-Israel Binational Science Foundation, the Office of Naval Research, and the National Science Foundation.


Variability Keeps The Body In Balance

Although the heart beats out a very familiar "lub-dub" pattern that speeds up or slows down as our activity increases or decreases, the pattern itself isn't as regular as you might think. In fact, the amount of time between heartbeats can vary even at a "constant" heart rate—and that variability, doctors have found, is a good thing.

Reduced heart rate variability (HRV) has been found to be predictive of a number of illnesses, such as congestive heart failure and inflammation. For athletes, a drop in HRV has also been linked to fatigue and overtraining. However, the underlying physiological mechanisms that control HRV—and exactly why this variation is important for good health—are still a bit of a mystery.
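
For readers unfamiliar with how HRV is quantified, the short sketch below computes two commonly used metrics, SDNN and RMSSD, from a hypothetical series of beat-to-beat (RR) intervals. The numbers are invented for illustration, and this is not the control-theoretic analysis used in the study.

```python
# Minimal illustration of two common heart-rate-variability metrics,
# computed from a hypothetical series of beat-to-beat (RR) intervals.
# Not the control-theoretic analysis used in the study -- just a way to
# see what "variability at a constant heart rate" means numerically.
import numpy as np

rr_ms = np.array([810, 790, 820, 805, 835, 815, 795, 825, 800, 830], dtype=float)

mean_hr = 60000.0 / rr_ms.mean()               # beats per minute
sdnn = rr_ms.std(ddof=1)                       # SDNN: std. dev. of RR intervals
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))  # RMSSD: successive-difference measure

print(f"mean heart rate: {mean_hr:.1f} bpm")
print(f"SDNN:  {sdnn:.1f} ms")
print(f"RMSSD: {rmssd:.1f} ms")
```

Low values of these metrics at a given average heart rate correspond to the reduced variability discussed above.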

By combining heart rate data from real athletes with a branch of mathematics called control theory, a collaborative team of physicians and Caltech researchers from the Division of Engineering and Applied Science has now devised a way to better understand the relationship between HRV and health—a step that could soon inform better monitoring technologies for athletes and medical professionals.

The work was published in the August 19 print issue of the Proceedings of the National Academy of Sciences.

To run smoothly, complex systems, such as computer networks, cars, and even the human body, rely upon give-and-take connections and relationships among a large number of variables; if one variable must remain stable to maintain a healthy system, another variable must be able to flex to maintain that stability. Because it would be too difficult to map each individual variable, the mathematics and software tools used in control theory allow engineers to summarize the ups and downs in a system and pinpoint the source of a possible problem.

Researchers who study control theory are increasingly discovering that these concepts can also be extremely useful in studies of the human body. In order for a body to work optimally, it must operate in an environment of stability called homeostasis. When the body experiences stress—for example, from exercise or extreme temperatures—it can maintain a stable blood pressure and constant body temperature in part by dialing the heart rate up or down. And HRV plays an important role in maintaining this balance, says study author John Doyle, the Jean-Lou Chameau Professor of Control and Dynamical Systems, Electrical Engineering, and Bioengineering.

"A familiar related problem is in driving," Doyle says. "To get to a destination despite varying weather and traffic conditions, any driver—even a robotic one—will change factors such as acceleration, braking, steering, and wipers. If these factors suddenly became frozen and unchangeable while the car was still moving, it would be a nearly certain predictor that a crash was imminent. Similarly, loss of heart rate variability predicts some kind of malfunction or 'crash,' often before there are any other indications," he says.

To study how HRV helps maintain this version of "cruise control" in the human body, Doyle and his colleagues measured the heart rate, respiration rate, oxygen consumption, and carbon dioxide generation of five healthy young athletes as they completed experimental exercise routines on stationary bicycles.

By combining the data from these experiments with standard models of the physiological control mechanisms in the human body, the researchers were able to determine the essential tradeoffs that are necessary for athletes to produce enough power to maintain an exercise workload while also maintaining the internal homeostasis of their vital signs.

"For example, the heart, lungs, and circulation must deliver sufficient oxygenated blood to the muscles and other organs while not raising blood pressure so much as to damage the brain," Doyle says. "This is done in concert with control of blood vessel dilation in the muscles and brain, and control of breathing. As the physical demands of the exercise change, the muscles must produce fluctuating power outputs, and the heart, blood vessels, and lungs must then respond to keep blood pressure and oxygenation within narrow ranges."

Once these trade-offs were defined, the researchers then used control theory to analyze the exercise data and found that a healthy heart must maintain certain patterns of variability during exercise to keep this complicated system in balance. Loss of this variability is a precursor of fatigue induced by the stress of exercise. Today, some HRV monitors in the clinic can let a doctor know when variability is high or low, but they provide little in the way of an actionable diagnosis.

Because monitors in hospitals can already provide HRV levels and dozens of other signals and readings, the integration of such mathematical analyses of control theory into HRV monitors could, in the future, provide a way to link a drop in HRV to a more specific and treatable diagnosis. In fact, one of Doyle's students has used an HRV application of control theory to better interpret traditional EKG signals.

Control theory could also be incorporated into the HRV monitors used by athletes to prevent fatigue and injury from overtraining, he says.

"Physicians who work in very data-intensive settings like the operating room or ICU are in urgent need of ways to rapidly and acutely interpret the data deluge," says Marie Csete, MD (PhD, '00), chief scientific officer at the Huntington Medical Research Institutes and a coauthor on the paper. "We hope this work is a first step in a larger research program that helps physicians make better use of data to care for patients."

This study is not the first to apply control theory in medicine. Control theory has already informed the design of a wearable artificial pancreas for type 1 diabetic patients and an automated prototype device that controls the administration of anesthetics during surgery. Nor will it be the last, says Doyle, whose sights are next set on using control theory to understand the progression of cancer.

"We have a new approach, similarly based on control of networks, that organizes and integrates a bunch of new ideas floating around about the role of healthy stroma—non-tumor cells present in tumors—in promoting cancer progression," he says.

"Based on discussions with Dr. Peter Lee at City of Hope [a cancer research and treatment center], we now understand that the non-tumor cells interact with the immune system and with chemotherapeutic drugs to modulate disease progression," Doyle says. "And I'm hoping there's a similar story there, where thinking rigorously about the tradeoffs in development, regeneration, inflammation, wound healing, and cancer will lead to new insights and ultimately new therapies."

Other Caltech coauthors on the study include former graduate students Na Li (PhD '13), now an assistant professor at Harvard; Somayeh Sojoudi (PhD '12), currently at NYU; and graduate students Chenghao Simon Chien and Jerry Cruz. Other collaborators on the study were Benjamin Recht, a former postdoctoral scholar in Doyle's lab and now an assistant professor at UC Berkeley; Daniel Bahmiller, a clinician training in public health; and David Stone, MD, an expert in ICU medicine from the University of Virginia School of Medicine.


A New Way to Prevent the Spread of Devastating Diseases

For decades, researchers have tried to develop broadly effective vaccines to prevent the spread of illnesses such as HIV, malaria, and tuberculosis. While limited progress has been made along these lines, there are still no licensed vaccinations available that can protect most people from these devastating diseases.

So what are immunologists to do when vaccines just aren't working?

At Caltech, Nobel Laureate David Baltimore and his colleagues have approached the problem in a different way. Whereas vaccines introduce substances such as antigens into the body hoping to elicit an appropriate immune response—the generation of either antibodies that might block an infection or T cells capable of attacking infected cells—the Caltech team thought: Why not provide the body with step-by-step instructions for producing specific antibodies that have been shown to neutralize a particular disease?

The method they developed—originally to trigger an immune response to HIV—is called vectored immunoprophylaxis, or VIP. The technique was so successful that it has since been applied to a number of other infectious diseases, including influenza, malaria, and hepatitis C.

"It is enormously gratifying to us that this technique can have potentially widespread use for the most difficult diseases that are faced particularly by the less developed world," says Baltimore, president emeritus and the Robert Andrews Millikan Professor of Biology at Caltech.

VIP relies on the prior identification of one or more antibodies that are able to prevent infection in laboratory tests by a wide range of isolated samples of a particular pathogen. Once that has been done, researchers can incorporate the genes that encode those antibodies into an adeno-associated virus (AAV), a small, harmless virus that has been useful in gene-therapy trials. When the AAV is injected into muscle tissue, the genes instruct the muscle tissue to generate the specified antibodies, which can then enter the circulation and protect against infection.

In 2011, the Baltimore group reported in Nature that they had used the technique to deliver antibodies that effectively protected mice from HIV infection. Alejandro Balazs was lead author on that paper and was a postdoctoral scholar in the Baltimore lab at the time.

"We expected that at some dose, the antibodies would fail to protect the mice, but it never did—even when we gave mice 100 times more HIV than would be needed to infect seven out of eight mice," said Balazs, now at the Ragon Institute of MGH, MIT and Harvard. "All of the exposures in this work were significantly larger than a human being would be likely to encounter."

At the time, the researchers noted that the leap from mice to humans is large but said they were encouraged by the high levels of antibodies the mice were able to produce after a single injection and how effectively the mice were protected from HIV infection for months on end. Baltimore's team is now working with a manufacturer to produce the materials needed for human clinical trials that will be conducted by the Vaccine Research Center at the National Institutes of Health.

Moving on from HIV, the Baltimore lab's next goal was protection against influenza A. Although reasonably effective influenza vaccines exist, each year more than 20,000 deaths, on average, are the result of seasonal flu epidemics in the United States. We are encouraged to get flu shots every fall because the influenza virus is something of a moving target—it evolves to evade the immune defenses built up against previous strains. There are also many different strains of influenza A (e.g., H1N1 and H3N2), each incorporating a different combination of the various forms of the proteins hemagglutinin (H) and neuraminidase (N). To chase this target, the vaccine is reformulated each year, but sometimes it fails to prevent the spread of the strains that are prevalent that year.

But about five years ago, researchers began identifying a new class of anti-influenza antibodies that are able to prevent infection by many, many strains of the virus. Instead of binding to the head of the influenza virus, as most flu-fighting antibodies do, these new antibodies target the stalk that holds up the head. And while the head is highly adaptable—meaning that even when mutations occur there, the virus can often remain functional—the stalk must basically remain the same in order for the virus to survive. So these stalk antibodies are very hard for the virus to mutate against.

In 2013, the Baltimore group stitched the genes for two of these new antibodies into an AAV and showed that mice injected with the vector were protected against multiple flu strains, including all H1, H2, and H5 influenza strains tested. This was even true of older mice and those without a properly functioning immune system—a particularly important finding considering that most deaths from the flu occur in the elderly and immunocompromised populations. The group reported its results in the journal Nature Biotechnology.

"We have shown that we can protect mice completely against flu using a kind of antibody that doesn't need to be changed every year," says Baltimore. "It is important to note that this has not been tested in humans, so we do not yet know what concentration of antibody can be produced by VIP in humans. However, if it works as well as it does in mice, VIP may provide a plausible approach to protect even the most vulnerable patients against epidemic and pandemic influenza."

Now that the Baltimore lab has shown VIP to be so effective, other groups from around the country have adopted the Caltech-developed technique to try to ward off malaria, hepatitis C, and tuberculosis.

In August, a team led by researchers at the Johns Hopkins Bloomberg School of Public Health reported in the Proceedings of the National Academy of Sciences (PNAS) that as many as 70 percent of mice that they had injected using the VIP procedure were protected from infection with malaria by Plasmodium falciparum, the parasite that causes the most lethal of the four types of the disease. A subset of mice in the study produced particularly high levels of the disease-fighting antibodies. In those mice, the immunization was 100 percent effective.

"This is also just a first-generation antibody," says Baltimore, who was a coauthor on the PNAS study. "Knowing now that you can get this kind of protection, it's worth trying to get much better antibodies, and I trust that people in the malaria field will do that."

Most recently, a group led by researchers from The Rockefeller University showed that three hepatitis-C-fighting antibodies delivered using VIP were able to protect mice efficiently from the virus. The results were published in the September 17 issue of the journal Science Translational Medicine. The researchers also found that the treatment was able to temporarily clear the virus from mice that had already been infected. Additional work is needed to determine how to prevent the disease from relapsing. Interestingly, though, the work suggests that the antibodies that are effective against hepatitis C, once it has taken root in the liver, may work by protecting uninfected liver cells from infection while allowing already infected cells to be cleared from the body.    

An additional project is currently evaluating the use of VIP for the prevention of tuberculosis—a particular challenge given the lack of proven tuberculosis-neutralizing antibodies.

"When we started this work, we imagined that it might be possible to use VIP to fight other diseases, so it has been very exciting to see other groups adopting the technique for that purpose," Baltimore says. "If we can get positive clinical results in humans with HIV, we think that would really encourage people to think about using VIP for these other diseases."

Baltimore's work is supported by funding from the National Institute of Allergy and Infectious Diseases, the Bill and Melinda Gates Foundation, the Caltech-UCLA Joint Center for Translational Medicine, and a Caltech Translational Innovation Partnership Award.

Writer: Kimm Fesenmaier

Sensing Neuronal Activity With Light

For years, neuroscientists have been trying to develop tools that would allow them to clearly view the brain's circuitry in action—from the first moment a neuron fires to the resulting behavior in a whole organism. To get this complete picture, neuroscientists are working to develop a range of new tools to study the brain. Researchers at Caltech have developed one such tool that provides a new way of mapping neural networks in a living organism.

The work—a collaboration between Viviana Gradinaru (BS '05), assistant professor of biology and biological engineering, and Frances Arnold, the Dick and Barbara Dickinson Professor of Chemical Engineering, Bioengineering and Biochemistry—was described in two separate papers published this month.

When a neuron is at rest, channels and pumps in the cell membrane maintain a cell-specific balance of positively and negatively charged ions within and outside of the cell, resulting in a steady membrane voltage called the cell's resting potential. However, if a stimulus is detected—for example, a scent or a sound—ions flood through newly opened channels, causing a change in membrane voltage. This voltage change is often manifested as an action potential—the neuronal impulse that sets circuit activity into motion.

The tool developed by Gradinaru and Arnold detects and serves as a marker of these voltage changes.

"Our overarching goal for this tool was to achieve sensing of neuronal activity with light rather than traditional electrophysiology, but this goal had a few prerequisites," Gradinaru says. "The sensor had to be fast, since action potentials happen in just milliseconds. Also, the sensor had to be very bright so that the signal could be detected with existing microscopy setups. And you need to be able to simultaneously study the multiple neurons that make up a neural network."

The researchers began by optimizing Archaerhodopsin (Arch), a light-sensitive protein from archaea (single-celled microorganisms). In nature, opsins like Arch detect sunlight and initiate the microbes' movement toward the light so that they can harvest its energy. However, researchers can also exploit the light-responsive qualities of opsins for a neuroscience method called optogenetics—in which an organism's neurons are genetically modified to express these microbial opsins. Then, by simply shining a light on the modified neurons, the researchers can control the activity of the cells as well as their associated behaviors in the organism.

Gradinaru had previously engineered Arch for better tolerance and performance in mammalian cells as a traditional optogenetic tool used to control an organism's behavior with light. When the modified neurons are exposed to green light, Arch acts as an inhibitor, controlling neuronal activity—and thus the associated behaviors—by preventing the neurons from firing.

However, Gradinaru and Arnold were most interested in another property of Arch: when exposed to red light, the protein acts as a voltage sensor, responding to changes in membrane voltages by producing a flash of light in the presence of an action potential. Although this property could in principle allow Arch to detect the activity of networks of neurons, the light signal marking this neuronal activity was often too dim to see.

To fix this problem, Arnold and her colleagues made the Arch protein brighter using a method called directed evolution—a technique Arnold originally pioneered in the early 1990s. The researchers introduced mutations into the Arch gene, thus encoding millions of variants of the protein. They transferred the mutated genes into E. coli cells, which produced the mutant proteins encoded by the genes. They then screened thousands of the resulting E. coli colonies for the intensities of their fluorescence. The genes for the brightest versions were isolated and subjected to further rounds of mutagenesis and screening until the bacteria produced proteins that were 20 times brighter than the original Arch protein.
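
The overall logic of such a directed-evolution screen can be summarized in a few lines of code. The sketch below is purely schematic: the "gene," the "mutagenesis" step, and the "brightness" score are toy stand-ins, not the actual assay used by Arnold's group.

```python
# Schematic of a directed-evolution loop, with the wet-lab steps (mutagenesis,
# expression in E. coli, fluorescence screening) replaced by toy stand-ins.
# Purely illustrative -- not the protocol used in the study.
import random

random.seed(1)
GENE_LENGTH = 50

def brightness(gene):
    # Toy "fluorescence screen": a hypothetical score for a variant.
    return sum(gene)

def mutate(gene, rate=0.02):
    # Toy "error-prone mutagenesis": randomly perturb a few positions.
    return [g + random.choice([-1, 0, 1]) if random.random() < rate else g
            for g in gene]

parent = [0] * GENE_LENGTH
for generation in range(10):
    # Build a library of mutant variants from the current best gene...
    library = [mutate(parent) for _ in range(1000)]
    # ...screen the library and keep the brightest variant for the next round.
    parent = max(library, key=brightness)
    print(f"round {generation + 1}: best brightness score = {brightness(parent)}")
```

The power of the real method comes from the enormous library sizes and the quality of the screen, which the toy fitness function here only gestures at.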

A paper describing the process and the bright new protein variants that were created was published in the September 9 issue of the Proceedings of the National Academy of Sciences.

"This experiment demonstrates how rapidly these remarkable bacterial proteins can evolve in response to new demands. But even more exciting is what they can do in neurons, as Viviana discovered," says Arnold.

In a separate study led by Gradinaru's graduate students Nicholas Flytzanis and Claire Bedbrook, who is also advised by Arnold, the researchers genetically incorporated the new, brighter Arch variants into rodent neurons in culture to see which of these versions was most sensitive to voltage changes—and therefore would be the best at detecting action potentials. One variant, Archer1, was not only bright and sensitive enough to mark action potentials in mammalian neurons in real time, it could also be used to identify which neurons were synaptically connected—and communicating with one another—in a circuit.

The work is described in a study published on September 15 in the journal Nature Communications.

"What was interesting is that we would see two cells over here light up, but not this one over there—because the first two are synaptically connected," Gradinaru says. "This tool gave us a way to observe a network where the perturbation of one cell affects another."

However, sensing activity in a living organism and correlating this activity with behavior remained the biggest challenge. To accomplish this goal, Gradinaru's team worked with Paul Sternberg, the Thomas Hunt Morgan Professor of Biology, to test Archer1 as a sensor in a living organism—the tiny nematode worm C. elegans. "There are a few reasons why we used the worms here: they are powerful organisms for quick genetic engineering and their tissues are nearly transparent, making it easy to see the fluorescent protein in a living animal," she says.

After incorporating Archer1 into neurons that were a part of the worm's olfactory system—a primary source of sensory information for C. elegans—the researchers exposed the worm to an odorant. When the odorant was present, a baseline fluorescent signal was seen, and when the odorant was removed, the researchers could see the circuit of neurons light up, meaning that these particular neurons are repressed in the presence of the stimulus and active in the absence of the stimulus. The experiment was the first time that an Arch variant had been used to observe an active circuit in a living organism.

Gradinaru next hopes to use tools like Archer1 to better understand the complex neuronal networks of mammals, using microbial opsins as sensing and actuating tools in optogenetically modified rodents.

"For the future work it's useful that this tool is bifunctional. Although Archer1 acts as a voltage sensor under red light, with green light, it's an inhibitor," she says. "And so now a long-term goal for our optogenetics experiments is to combine the tools with behavior-controlling properties and the tools with voltage-sensing properties. This would allow us to obtain all-optical access to neuronal circuits. But I think there is still a lot of work ahead."

One goal for the future, Gradinaru says, is to make Archer1 even brighter. Although the protein's fluorescence can be seen through the nearly transparent tissues of the nematode worm, opaque organs such as the mammalian brain are still a challenge. More work, she says, will need to be done before Archer1 could be used to detect voltage changes in the neurons of living, behaving mammals.

And that will require further collaborations with protein engineers and biochemists like Arnold.

"As neuroscientists we often encounter experimental barriers, which open the potential for new methods. We then collaborate to generate tools through chemistry or instrumentation, then we validate them and suggest optimizations, and it just keeps going," she says. "There are a few things that we'd like to be better, and through these many iterations and hard work it can happen."

The work published in both papers was supported with grants from the National Institutes of Health (NIH), including an NIH/National Institute of Neurological Disorders and Stroke New Innovator Award to Gradinaru; Beckman Institute funding for the BIONIC center; grants from the U.S. Army Research Office as well as a Caltech Biology Division Training Grant and startup funds from Caltech's President and Provost, and the Division of Biology and Biological Engineering; and other financial support from the Shurl and Kay Curci Foundation and the Life Sciences Research Foundation.


Slimy Fish and the Origins of Brain Development

Lamprey—slimy, eel-like parasitic fish with tooth-riddled, jawless sucking mouths—are rather disgusting to look at, but thanks to their important position on the vertebrate family tree, they can offer important insights about the evolutionary history of our own brain development, a recent study suggests.

The work appears in a paper in the September 14 advance online issue of the journal Nature.

"Lamprey are one of the most primitive vertebrates alive on Earth today, and by closely studying their genes and developmental characteristics, researchers can learn more about the evolutionary origins of modern vertebrates—like jawed fishes, frogs, and even humans," says paper coauthor Marianne Bronner, the Albert Billings Ruddock Professor of Biology and director of Caltech's unique Zebrafish/Xenopus/Lamprey facility, where the study was done.

The facility is one of the few places in the world where lampreys can be studied in captivity. Although the parasitic lamprey are plentiful enough to be an invasive pest in the Great Lakes, they are difficult to study under controlled conditions: their life cycle takes up to 10 years, and they spawn for only a few short weeks in the summer before they die.

Each summer, Bronner and her colleagues receive shipments of wild lamprey from Michigan just before the prime of breeding season. When the lamprey arrive, they are placed in tanks where the temperature of the water is adjusted to extend the breeding season from around three weeks to up to two months. In those extra weeks, the lamprey produce tens of thousands of additional eggs and sperm, which, via in vitro fertilization, generate tens of thousands of additional embryos for study. During this time, scientists from all over the world come to Caltech to perform experiments with the developing lamprey embryos.

In the current study, Bronner and her collaborators—who traveled to Caltech from the Stowers Institute for Medical Research in Kansas City, Missouri—studied the origins of the vertebrate hindbrain.

The hindbrain is a part of the central nervous system common to chordates—or organisms that have a nerve cord like our spinal cord. During the development of vertebrates—a subtype of chordates that have backbones—the hindbrain is compartmentalized into eight segments, each of which becomes uniquely patterned to establish networks of neuronal circuits. These segments eventually give rise to adult brain regions like the cerebellum, which is important for motor control, and the medulla oblongata, which is necessary for breathing and other involuntary functions.

However, this segmentation is not present in so-called "invertebrate chordates"—a grouping of chordates that lack a backbone, such as sea squirts and lancelets.

"The interesting thing about lampreys is that they occupy an intermediate evolutionary position between the invertebrate chordates and the jawed vertebrates," says Hugo Parker, a postdoc at Stower's Institute and first author on the study. "By investigating aspects of lamprey embryology, we can get a picture of how vertebrate traits might have evolved."

In vertebrates, segmental patterning genes called Hox genes help to determine the animal's head-to-tail body plan—and those same Hox genes also control the segmentation of the hindbrain. Although invertebrate chordates also have Hox genes, these animals don't have segmented hindbrains. Because lampreys sit between these two groups of organisms on the evolutionary tree, the researchers wanted to know whether or not Hox genes are involved in patterning of the lamprey hindbrain.

To their surprise, the researchers discovered that the lamprey hindbrain was not only segmented during development but the process also involved Hox genes—just like in its jawed vertebrate cousins.

"When we started, we thought that the situation was different, and the Hox genes were not really integrated into the process of segmentation as they are in jawed vertebrates," Parker says. "But in actually doing this project, we discovered the way that lamprey Hox genes are expressed and regulated is very similar to what we see in jawed vertebrates." This means that hindbrain segmentation—and the role of Hox genes in this segmentation—happened earlier on in evolution than was once thought, he says.

Parker, who has been spending his summers at Caltech studying lampreys since 2008, is next hoping to pinpoint other aspects of the lamprey hindbrain that may be conserved in modern vertebrates—information that will help contribute to a fundamental understanding of vertebrate development. And although those investigations will probably mean following the lamprey for a few more summers at Caltech, Parker says his time in the lamprey facility continually offers a one-of-a-kind experience.

"The lamprey system here is unique in the world—and it's not just the water tanks and how we've learned to maintain the animals. It's the small nucleus of people who have particular skills, people who come in from all over the world to work together, share protocols, and develop the field together," he says. "That's one of the things I've liked ever since I first came here. I really felt like I was a part of something very special.

These results were published in a paper titled "A Hox regulatory network of hindbrain segmentation is conserved to the base of vertebrates." Robb Krumlauf, scientific director at the Stowers Institute and a professor at the University of Kansas Medical Center, was also a coauthor on the study. The Zebrafish/Xenopus/Lamprey facility at Caltech is a Beckman Institute facility.


Ceramics Don't Have To Be Brittle

Caltech Materials Scientists Are Creating Materials By Design

Imagine a balloon that could float without using any lighter-than-air gas. Instead, it could simply have all of its air sucked out while maintaining its filled shape. Such a vacuum balloon, which could help ease the world's current shortage of helium, could be made only if a new material existed that was strong enough to withstand the crush of the surrounding atmosphere once the air is removed, while still being lightweight and flexible.
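
A quick back-of-the-envelope bound (not from the paper) shows why such a material would have to be extraordinarily light. A thin spherical shell of radius $R$, wall thickness $t$, and density $\rho_s$ floats in air of density $\rho_a$ only if the displaced air outweighs the shell:

\[
\rho_a\,\tfrac{4}{3}\pi R^3 \;>\; \rho_s\,4\pi R^2 t
\quad\Longrightarrow\quad
\frac{t}{R} \;<\; \frac{\rho_a}{3\rho_s}
\]

With $\rho_a \approx 1.2\ \mathrm{kg/m^3}$ and a shell material of roughly $2000\ \mathrm{kg/m^3}$, the wall could be no thicker than about one five-thousandth of the radius, and it would still have to resist atmospheric pressure without buckling.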

Caltech materials scientist Julia Greer and her colleagues are on the path to developing such a material and many others that possess unheard-of combinations of properties. For example, they might create a material that is thermally insulating but also extremely lightweight, or one that is simultaneously strong, lightweight, and nonbreakable—properties that are generally thought to be mutually exclusive.

Greer's team has developed a method for constructing new structural materials by taking advantage of the unusual properties that solids can have at the nanometer scale, where features are measured in billionths of meters. In a paper published in the September 12 issue of the journal Science, the Caltech researchers explain how they used the method to produce a ceramic (the family of materials that includes chalk and brick) that contains about 99.9 percent air yet is incredibly strong, and that can recover its original shape after being compressed by more than 50 percent.

"Ceramics have always been thought to be heavy and brittle," says Greer, a professor of materials science and mechanics in the Division of Engineering and Applied Science at Caltech. "We're showing that in fact, they don't have to be either. This very clearly demonstrates that if you use the concept of the nanoscale to create structures and then use those nanostructures like LEGO to construct larger materials, you can obtain nearly any set of properties you want. You can create materials by design."

The researchers use a direct laser writing method called two-photon lithography to "write" a three-dimensional pattern in a polymer by allowing a laser beam to crosslink and harden the polymer wherever it is focused. The parts of the polymer that were exposed to the laser remain intact while the rest is dissolved away, revealing a three-dimensional scaffold. That structure can then be coated with a thin layer of just about any kind of material—a metal, an alloy, a glass, a semiconductor, etc. Then the researchers use another method to etch out the polymer from within the structure, leaving a hollow architecture.

The applications of this technique are practically limitless, Greer says. Since pretty much any material can be deposited on the scaffolds, the method could be particularly useful for applications in optics, energy efficiency, and biomedicine. For example, it could be used to reproduce complex structures such as bone, producing a scaffold out of biocompatible materials on which cells could proliferate.

In the latest work, Greer and her students used the technique to produce what they call three-dimensional nanolattices that are formed by a repeating nanoscale pattern. After the patterning step, they coated the polymer scaffold with a ceramic called alumina (i.e., aluminum oxide), producing hollow-tube alumina structures with walls ranging in thickness from 5 to 60 nanometers and tubes from 450 to 1,380 nanometers in diameter.

Greer's team next wanted to test the mechanical properties of the various nanolattices they created. Using two different devices for poking and prodding materials on the nanoscale, they squished, stretched, and otherwise tried to deform the samples to see how they held up.

They found that the alumina structures with a wall thickness of 50 nanometers and a tube diameter of about 1 micron shattered when compressed. That was not surprising given that ceramics, especially those that are porous, are brittle. However, compressing lattices with a lower ratio of wall thickness to tube diameter—where the wall thickness was only 10 nanometers—produced a very different result.

"You deform it, and all of a sudden, it springs back," Greer says. "In some cases, we were able to deform these samples by as much as 85 percent, and they could still recover."

To understand why, consider that most brittle materials such as ceramics, silicon, and glass shatter because they are filled with flaws—imperfections such as small voids and inclusions. The more perfect the material, the less likely you are to find a weak spot where it will fail. Therefore, the researchers hypothesize, when you reduce these structures down to the point where individual walls are only 10 nanometers thick, both the number of flaws and the size of any flaws are kept to a minimum, making the whole structure much less likely to fail.

"One of the benefits of using nanolattices is that you significantly improve the quality of the material because you're using such small dimensions," Greer says. "It's basically as close to an ideal material as you can get, and you get the added benefit of needing only a very small amount of material in making them."

The Greer lab is now aggressively pursuing various ways of scaling up the production of these so-called meta-materials.

The lead author on the paper, "Strong, Lightweight and Recoverable Three-Dimensional Ceramic Nanolattices," is Lucas R. Meza, a graduate student in Greer's lab. Satyajit Das, who was a visiting student researcher at Caltech, is also a coauthor. The work was supported by funding from the Defense Advanced Research Projects Agency and the Institute for Collaborative Biotechnologies. Greer is also on the board of directors of the Kavli Nanoscience Institute at Caltech.

Writer: Kimm Fesenmaier

Tipping the Balance of Behavior

Humans with autism often show a reduced frequency of social interactions and an increased tendency to engage in repetitive solitary behaviors. Autism has also been linked to dysfunction of the amygdala, a brain structure involved in processing emotions. Now Caltech researchers have discovered antagonistic neuron populations in the mouse amygdala that control whether the animal engages in social behaviors or asocial repetitive self-grooming. This discovery may have implications for understanding neural circuit dysfunctions that underlie autism in humans.

The discovery of this "seesaw"-like circuit was led by postdoctoral scholar Weizhe Hong in the laboratory of David J. Anderson, the Seymour Benzer Professor of Biology at Caltech and an investigator with the Howard Hughes Medical Institute. The work was published online on September 11 in the journal Cell.

"We know that there is some hierarchy of behaviors, and they interact with each other because the animal can't exhibit both social and asocial behaviors at the same time. In this study, we wanted to figure out how the brain does that," Anderson says.

Anderson and his colleagues discovered two intermingled but distinct populations of neurons in the amygdala, a part of the brain that is involved in innate social behaviors. One population promotes social behaviors, such as mating, fighting, or social grooming, while the other population controls repetitive self-grooming—an asocial behavior.

Interestingly, these two populations are distinguished according to the most fundamental subdivision of neuron subtypes in the brain: the "social neurons" are inhibitory neurons (which release the neurotransmitter GABA, or gamma-aminobutyric acid), while the "self-grooming neurons" are excitatory neurons (which release the neurotransmitter glutamate, an amino acid).

To study the relationship between these two cell types and their associated behaviors, the researchers used a technique called optogenetics. In optogenetics, neurons are genetically altered so that they express light-sensitive proteins from microbial organisms. Then, by shining a light on these modified neurons via a tiny fiber optic cable inserted into the brain, researchers can control the activity of the cells as well as their associated behaviors.

Using this optogenetic approach, Anderson's team was able to selectively switch on the neurons associated with social behaviors and those linked with asocial behaviors.

With the social neurons, the behavior that was elicited depended upon the intensity of the light signal. That is, when high-intensity light was used, the mice became aggressive in the presence of an intruder mouse. When lower-intensity light was used, the mice no longer attacked, although they were still socially engaged with the intruder—either initiating mating behavior or attempting to engage in social grooming.

When the neurons associated with asocial behavior were turned on, the mouse began self-grooming behaviors such as paw licking and face grooming while completely ignoring all intruders. The self-grooming behavior was repetitive and lasted for minutes even after the light was turned off.

The researchers could also use the light-activated neurons to stop the mice from engaging in particular behaviors. For example, if a lone mouse began spontaneously self-grooming, the researchers could halt this behavior through the optogenetic activation of the social neurons. Once the light was turned off and the activation stopped, the mouse would return to its self-grooming behavior.

Surprisingly, these two groups of neurons appear to interfere with each other's function: the activation of social neurons inhibits self-grooming behavior, while the activation of self-grooming neurons inhibits social behavior. Thus these two groups of neurons seem to function like a seesaw, one that controls whether mice interact with others or instead focus on themselves. It was completely unexpected that the two groups of neurons could be distinguished by whether they were excitatory or inhibitory. "If there was ever an experiment that 'carves nature at its joints,'" says Anderson, "this is it."

This seesaw circuit, Anderson and his colleagues say, may have some relevance to human behavioral disorders such as autism.

"In autism," Anderson says, "there is a decrease in social interactions, and there is often an increase in repetitive, sometimes asocial or self-oriented, behaviors"—a phenomenon known as perseveration. "Here, by stimulating a particular set of neurons, we are both inhibiting social interactions and promoting these perseverative, persistent behaviors."

Studies from other laboratories have shown that disruptions in genes implicated in autism show a similar decrease in social interaction and increase in repetitive self-grooming behavior in mice, Anderson says. However, the current study helps to provide a needed link between gene activity, brain activity, and social behaviors, "and if you don't understand the circuitry, you are never going to understand how the gene mutation affects the behavior." Going forward, he says, such a complete understanding will be necessary for the development of future therapies.

But could this concept ever actually be used to modify a human behavior?

"All of this is very far away, but if you found the right population of neurons, it might be possible to override the genetic component of a behavioral disorder like autism, by just changing the activity of the circuits—tipping the balance of the see-saw in the other direction," he says.

The work was funded by the Simons Foundation, the National Institutes of Health, and the Howard Hughes Medical Institute. Caltech coauthors on the paper include Hong, who was the lead author, and graduate student Dong-Wook Kim.


Textbook Theory Behind Volcanoes May Be Wrong

In the typical textbook picture, volcanoes, such as those that are forming the Hawaiian islands, erupt when magma gushes out as narrow jets from deep inside Earth. But that picture is wrong, according to a new study from researchers at Caltech and the University of Miami in Florida.

New seismology data are now confirming that such narrow jets don't actually exist, says Don Anderson, the Eleanor and John R. McMillian Professor of Geophysics, Emeritus, at Caltech. In fact, he adds, basic physics doesn't support the presence of these jets, called mantle plumes, and the new results corroborate those fundamental ideas.

"Mantle plumes have never had a sound physical or logical basis," Anderson says. "They are akin to Rudyard Kipling's 'Just So Stories' about how giraffes got their long necks."

Anderson and James Natland, a professor emeritus of marine geology and geophysics at the University of Miami, describe their analysis online in the September 8 issue of the Proceedings of the National Academy of Sciences.

According to current mantle-plume theory, Anderson explains, heat from Earth's core somehow generates narrow jets of hot magma that gush through the mantle and to the surface. The jets act as pipes that transfer heat from the core, and how exactly they're created isn't clear, he says. But they have been assumed to exist, originating near where the Earth's core meets the mantle, almost 3,000 kilometers underground—nearly halfway to the planet's center. The jets are theorized to be no more than about 300 kilometers wide, and when they reach the surface, they produce hot spots.  

While the top of the mantle is a sort of fluid sludge, the layer above it (the planet's outermost shell) is rigid rock, broken up into plates that float on the magma-bearing layers below. Magma from the mantle beneath the plates bursts through the plate to create volcanoes. As the plates drift across the hot spots, a chain of volcanoes forms—such as the island chains of Hawaii and Samoa.

"Much of solid-Earth science for the past 20 years—and large amounts of money—have been spent looking for elusive narrow mantle plumes that wind their way upward through the mantle," Anderson says.

To look for the hypothetical plumes, researchers analyze global seismic activity. Everything from big quakes to tiny tremors sends seismic waves echoing through Earth's interior. The type of material that the waves pass through influences the properties of those waves, such as their speeds. By measuring those waves using hundreds of seismic stations installed on the surface, near places such as Hawaii, Iceland, and Yellowstone National Park, researchers can deduce whether there are narrow mantle plumes or whether volcanoes are simply created from magma that's absorbed in the sponge-like shallower mantle.

No one has been able to detect the predicted narrow plumes, although the evidence has not been conclusive. The jets could have simply been too thin to be seen, Anderson says. Very broad features beneath the surface have been interpreted as plumes or super-plumes, but, still, they're far too wide to be considered narrow jets.

But now, thanks in part to more seismic stations spaced closer together and improved theory, analysis of the planet's seismology is good enough to confirm that there are no narrow mantle plumes, Anderson and Natland say. Instead, data reveal that there are large, slow, upward-moving chunks of mantle a thousand kilometers wide.

In the mantle-plume theory, Anderson explains, the heat that is transferred upward via jets is balanced by the slower downward motion of cooled, broad, uniform chunks of mantle. The behavior is similar to that of a lava lamp, in which blobs of wax are heated from below and then rise before cooling and falling. But a fundamental problem with this picture is that lava lamps require electricity, he says, and that is an outside energy source that an isolated planet like Earth does not have.  

The new measurements suggest that what is really happening is just the opposite: Instead of narrow jets, there are broad upwellings, which are balanced by narrow channels of sinking material called slabs. What is driving this motion is not heat from the core, but cooling at Earth's surface. In fact, Anderson says, the behavior is the regular mantle convection first proposed more than a century ago by Lord Kelvin. When material in the planet's crust cools, it sinks, displacing material deeper in the mantle and forcing it upward.

"What's new is incredibly simple: upwellings in the mantle are thousands of kilometers across," Anderson says. The formation of volcanoes then follows from plate tectonics—the theory of how Earth's plates move and behave. Magma, which is less dense than the surrounding mantle, rises until it reaches the bottom of the plates or fissures that run through them. Stresses in the plates, cracks, and other tectonic forces can squeeze the magma out, like how water is squeezed out of a sponge. That magma then erupts out of the surface as volcanoes. The magma comes from within the upper 200 kilometers of the mantle and not thousands of kilometers deep, as the mantle-plume theory suggests.

"This is a simple demonstration that volcanoes are the result of normal broad-scale convection and plate tectonics," Anderson says. He calls this theory "top-down tectonics," based on Kelvin's initial principles of mantle convection. In this picture, the engine behind Earth's interior processes is not heat from the core but cooling at the planet's surface. This cooling and plate tectonics drives mantle convection, the cooling of the core, and Earth's magnetic field. Volcanoes and cracks in the plate are simply side effects.

The results also have an important consequence for rock compositions—notably the ratios of certain isotopes, Natland says. According to the mantle-plume idea, the measured compositions derive from the mixing of material from reservoirs separated by thousands of kilometers in the upper and lower mantle. But if there are no mantle plumes, then all of that mixing must have happened within the upwellings and nearby mantle in Earth's top 1,000 kilometers.

The paper is titled "Mantle updrafts and mechanisms of oceanic volcanism."


Seeing Protein Synthesis in the Field

Caltech researchers have developed a novel way to visualize proteins generated by microorganisms in their natural environment—including the murky waters of Caltech's lily pond, as in this image created by Professor of Geobiology Victoria Orphan and her colleagues. The method could give scientists insights into how uncultured microbes (organisms that may not easily be grown in the lab) react and adapt to environmental stimuli over space and time.

The visualization technique, dubbed BONCAT (for "bioorthogonal non-canonical amino-acid tagging"), was developed by David Tirrell, Caltech's Ross McCollum–William H. Corcoran Professor and professor of chemistry and chemical engineering. BONCAT uses "non-canonical" amino acids—synthetic molecules that do not normally occur in proteins found in nature and that carry particular chemical tags that can attach (or "click") onto a fluorescent dye. When these artificial amino acids are incubated with environmental samples, like lily-pond water, they are taken up by microorganisms and incorporated into newly formed proteins. Adding the fluorescent dye to the mix allows these proteins to be visualized within the cell.

For example, in the image, the entire microbial community in the pond water is stained blue with a DNA dye; freshwater gammaproteobacteria are labeled with a fluorescently tagged short-chain ribosomal RNA probe, in red; and newly created proteins are dyed green by BONCAT. The cells colored green and orange in the composite image, then, show those bacteria—gammaproteobacteria and other rod-shaped cells—that are actively making proteins.

"You could apply BONCAT to almost any type of sample," Orphan says. "When you have an environmental sample, you don't know which microorganisms are active. So, assume you're interested in looking at organisms that respond to methane. You could take a sample, provide methane, add the synthetic amino acid, and ask which cells over time showed activity—made new proteins—in the presence of methane relative to samples without methane. Then you can start to sort those organisms out, and possibly use this to determine protein turnover times. These questions are not typically tractable with uncultured organisms in the environment." Orphan's lab is also now using BONCAT on samples of deep-sea sediment in which mixed groups of bacteria and archaea catalyze the anaerobic oxidation of methane.

Why sample the Caltech lily pond? Roland Hatzenpichler, a postdoctoral scholar in Orphan's lab, explains: "When I started applying BONCAT on environmental samples, I wanted to try this new approach on samples that are both interesting from a microbiological standpoint, as well as easily accessible. Samples from the lily pond fit those criteria." Hatzenpichler is lead author of a study describing BONCAT that appeared as the cover story of the August issue of the journal Environmental Microbiology.

The work is supported by the Gordon and Betty Moore Foundation Marine Microbiology Initiative.
