Caltech Rocket Experiment Finds Surprising Cosmic Light

Using an experiment carried into space on a NASA suborbital rocket, astronomers at Caltech and their colleagues have detected a diffuse cosmic glow that appears to represent more light than that produced by known galaxies in the universe.

The researchers, including Caltech Professor of Physics Jamie Bock and Caltech Senior Postdoctoral Fellow Michael Zemcov, say that the best explanation is that the cosmic light—described in a paper published November 7 in the journal Science—originates from stars that were stripped away from their parent galaxies and flung out into space as those galaxies collided and merged with other galaxies.

The discovery suggests that many such previously undetected stars permeate what had been thought to be dark spaces between galaxies, forming an interconnected sea of stars. "Measuring such large fluctuations surprised us, but we carried out many tests to show the results are reliable," says Zemcov, who led the study.

Although they cannot be seen individually, "the total light produced by these stray stars is about equal to the background light we get from counting up individual galaxies," says Bock, also a senior research scientist at NASA's Jet Propulsion Laboratory (JPL). Bock is the principal investigator of the rocket project, called the Cosmic Infrared Background Experiment, or CIBER, which originated at Caltech and flew on four rocket flights from 2009 through 2013.

In earlier studies, NASA's Spitzer Space Telescope, which sees the universe at longer wavelengths, had observed a splotchy pattern of infrared light called the cosmic infrared background. The splotches are much bigger than individual galaxies. "We are measuring structures that are grand on a cosmic scale," says Zemcov, "and these sizes are associated with galaxies bunching together on a large-scale pattern." Initially some researchers proposed that this light came from the very first galaxies to form and ignite stars after the Big Bang. Others, however, have argued the light originated from stars stripped from galaxies in more recent times.

CIBER was designed to help settle the debate. "CIBER was born as a conversation with Asantha Cooray, a theoretical cosmologist at UC Irvine and at the time a postdoc at Caltech with [former professor] Marc Kamionkowski," Bock explains. "Asantha developed an idea for studying galaxies by measuring their large-scale structure. Galaxies form in dark-matter halos, which are over-dense regions initially seeded in the early universe by inflation. Furthermore, galaxies not only start out in these halos, they tend to cluster together as well. Asantha had the brilliant idea to measure this large-scale structure directly from maps. Experimentally, it is much easier for us to make a map by taking a wide-field picture with a small camera, than going through and measuring faint galaxies one by one with a large telescope." 

Cooray originally developed this approach for the longer infrared wavelengths observed by the European Space Agency's Herschel Space Observatory. "With its 3.5-meter diameter mirror, Herschel is too small to count up all the galaxies that make the infrared background light, so he instead obtained this information from the spatial structure in the map," Bock says. 

"Meanwhile, I had been working on near-infrared rocket experiments, and was interested in new ways to use this unique idea to study the extragalactic background," he says. The extragalactic infrared background represents all of the infrared light from all of the sources in the universe, "and there were some hints we didn't know where it was all coming from."

In other words, if you add up the light produced by individual galaxies, you find that it accounts for less than the total background light. "One could try and measure the total sky brightness directly," Bock says, "but the problem is that the foreground 'Zodiacal light,' due to dust in the solar system reflecting light from the sun, is so bright that it is hard to subtract with enough accuracy to measure the extragalactic background. So we put these two ideas together, applying Asantha's mapping approach to new wavelengths, and decided that the best way to get at the extragalactic background was to measure spatial fluctuations on angular scales around a degree. That led to CIBER."
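To make the mapping idea concrete, here is a minimal sketch of how fluctuations in a sky map can be reduced to a power spectrum as a function of angular scale. This is the generic textbook approach, not the CIBER pipeline, and the map in the example is pure noise with made-up dimensions:

```python
import numpy as np

def angular_power_spectrum(sky_map, pix_deg):
    """Azimuthally averaged fluctuation power of a square sky map."""
    n = sky_map.shape[0]
    fluct = sky_map - sky_map.mean()                       # keep only fluctuations
    power = np.abs(np.fft.fftshift(np.fft.fft2(fluct))) ** 2 / n**2
    freq = np.fft.fftshift(np.fft.fftfreq(n, d=pix_deg))   # cycles per degree
    fx, fy = np.meshgrid(freq, freq)
    f_r = np.hypot(fx, fy).ravel()
    # average the power in annular bins of spatial frequency
    bins = np.linspace(f_r[f_r > 0].min(), f_r.max(), 12)
    idx = np.digitize(f_r, bins)
    spec = np.array([power.ravel()[idx == i].mean() for i in range(1, len(bins))])
    return 0.5 * (bins[:-1] + bins[1:]), spec

# Example: a mock 2-degree-square map, 256 pixels on a side
rng = np.random.default_rng(0)
mock = rng.normal(size=(256, 256))
freqs, spec = angular_power_spectrum(mock, pix_deg=2.0 / 256)
```

A real analysis would first mask stars and known galaxies; an excess of power at frequencies near one cycle per degree would then correspond to the degree-scale fluctuations Bock describes.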

The CIBER experiment consists of four instruments, including two spectrometers to determine the brightness of Zodiacal light and to measure the cosmic infrared background directly. The measurements in the recent publication were made with the other two instruments, wide-field cameras that search for fluctuations in two wavelengths of near-infrared light. Earth's upper atmosphere glows brightly at the CIBER wavelengths, but the measurements can be done in space, above that glow, in the short time that a suborbital rocket spends above the atmosphere before descending back toward the planet.

CIBER flew four missions in all; the paper includes results from the second and third of CIBER's flights, launched in 2010 and 2012 from White Sands Missile Range in New Mexico and recovered afterward by parachute. In these flights, the researchers observed the same part of the sky at different times of year, and swapped the detector arrays as a crosscheck against data artifacts created by the sensors. "This series of flights was quite helpful in developing complete confidence in the results," says Zemcov. "For the final flight, we decided to get more time above the atmosphere and went with a non-recovered flight into the Atlantic Ocean on a four-stage rocket." (The data from the fourth flight will be discussed in a future paper.)

Based on data from these two launches, the researchers found fluctuations, but they first had to go through a careful process to identify and remove local signals: emission from the instrument itself and from the solar system, stars and scattered starlight in the Milky Way, and the light of known galaxies. What is left behind is a splotchy pattern representing fluctuations in the remaining infrared background light. Comparing data from the two rocket launches, they saw the same signal, and the same signal appears when CIBER and Spitzer images of the same region of sky are compared. Finally, the team measured the color of the fluctuations by comparing the CIBER results to Spitzer measurements at longer wavelengths. The result is a spectrum with a very blue color, brightest in the CIBER bands.

"CIBER tells us a couple key facts," Zemcov explains. "The fluctuations seem to be too bright to be coming from the first galaxies. You have to burn a large quantity of hydrogen into helium to get that much light, then you have to hide the evidence, because we don't see enough heavy elements made by stellar nucleosynthesis"—the process, occurring within stars, by which heavier elements are created from the fusion of lighter ones—"which means these elements would have to disappear into black holes." 

"The color is also too blue," he says. "First galaxies should appear redder due to their light being absorbed by hydrogen, and we do not see any evidence for such an absorption feature."

In short, Zemcov says, "although we designed our experiment to search for emission from first stars and galaxies, that explanation doesn't fit our data very well. The best interpretation is that we are seeing light from stars outside of galaxies but in the same dark matter halos. The stars have been stripped from their parent galaxies by gravitational interactions—which we know happens from images of interacting galaxies—and flung out to large distances."

The model, Bock admits, "isn't perfect. In fact, the color still isn't quite blue enough to match the data. But even so, the brightness of the fluctuations implies this signal is important in a cosmological sense, as we are tracing a large amount of cosmic light production." 

Future experiments could test whether stray stars are indeed the source of the infrared cosmic glow, the researchers say. If the stars were tossed out from their parent galaxies, they should still be located in the same vicinity. The CIBER team is working on better measurements using more infrared colors to learn how the stripping of stars happened over cosmic history.

In addition to Bock, Zemcov, and Cooray, other coauthors of the paper, "On the Origin of Near-Infrared Extragalactic Background Light Anisotropy," are Joseph Smidt of Los Alamos National Laboratory; Toshiaki Arai, Toshio Matsumoto, Shuji Matsuura, and Takehiko Wada of the Japan Aerospace Exploration Agency; Yan Gong of UC Irvine; Min Gyu Kim of Seoul National University; Phillip Korngut, a postdoctoral scholar at Caltech; Anson Lam of UCLA; Dae Hee Lee and Uk Won Nam of the Korea Astronomy and Space Science Institute (KASI); Gael Roudier of JPL; and Kohji Tsumura of Tohoku University. The work was supported by NASA, with initial support provided by JPL's Director's Research and Development Fund. Japanese participation in CIBER was supported by the Japan Society for the Promotion of Science and the Ministry of Education, Culture, Sports, Science and Technology. Korean participation in CIBER was supported by KASI. 


Figuring Out How We Get the Nitrogen We Need

Caltech Chemists Image Nitrogenase's Active Site At Work

Nitrogen is an essential component of all living systems, playing important roles in everything from proteins and nucleic acids to vitamins. It is the most abundant element in Earth's atmosphere and is literally all around us, but in its gaseous state, N2, it is inert and useless to most organisms. Something has to convert, or "fix," that nitrogen into a metabolically usable form, such as ammonia. Until about 100 years ago, when an industrial-scale technique called the Haber-Bosch process was developed, bacteria were responsible for nearly all nitrogen fixation on Earth (lightning and volcanoes fix a small amount of nitrogen). Bacteria accomplish this important chemical conversion using an enzyme called nitrogenase.

"For decades, we have been trying to understand how nitrogenase can interact with this inert gas and carry out this transformation," says Doug Rees, Caltech's Roscoe Gilkey Dickinson Professor of Chemistry and an investigator with the Howard Hughes Medical Institute (HHMI). To fix nitrogen in the laboratory, the Haber-Bosch process requires extremely high temperatures and pressures, yet bacteria are able to complete the conversion under physiological conditions. "We'd love to understand how they do this," he says. "It's a great chemical mystery."

But cracking that mystery has proven extremely difficult using standard chemical techniques. We know that the enzyme is made up of two proteins, the molybdenum iron (MoFe-) protein and the iron (Fe-) protein, which are both required for nitrogen fixation. We also know that the MoFe-protein contains two metal centers and that one of those is the FeMo-cofactor (also known as "the cofactor") at the active site, where the nitrogen binds and the chemical transformation takes place.

In 1992, Rees and his graduate student, Jongsun Kim (PhD '93), were the first to determine the structure of the MoFe-protein using X-ray crystallography.

"I think that there was a feeling that once you solved the structure, you'd understand how it worked," Rees says. "What we can say 22 years later is that was certainly not the case."

The dream would be to have atmospheric nitrogen bind to the FeMo-cofactor and to stop time so that chemists could sneak a peek at the chemical structure of the protein at that intermediate point. Because it is not possible to freeze time, and because the reaction proceeds too quickly to study by standard crystallographic methods, researchers have come up with an alternative. Chemists have been trying to get carbon monoxide, an inhibitor that halts the enzyme's activity but also closely mimics the structure and electronic makeup of N2, to bind to the cofactor and to then crystallize the product quickly enough that the structure can be analyzed using X-ray crystallography.

Unfortunately, the cofactor has stubbornly refused to cooperate. "We've demonstrated more times than we'd like that the form of this protein as isolated doesn't bind substrates," explains Rees. "Usually if you want to know how something binds to a protein, you just add it to your protein and study the crystal structure with X-ray crystallography. But we just couldn't get anything bound to this cofactor."

But in order for the cofactor to exist in a form that would bind to a substrate or an inhibitor, several other conditions must be met—for example, the Fe-protein has to be there. In addition, ATP—a molecule that provides energy for many life processes—must be present, along with yet another enzyme system that regenerates the ATP consumed in the reaction and a source of electrons. So although the aim in crystallography is typically to isolate a purified protein, the chemists had to muddy their samples by adding all these other needed components.

After joining Rees's group as a postdoctoral scholar in 2012, Thomas Spatzal spent months working on this problem, tweaking the method he used for trying to get the carbon monoxide to bind to the cofactor and for crystallizing the product. He adjusted parameters such as the protein concentrations, the temperature under which the samples were prepared, and the amount of time he allowed for the crystals to form. Every week, he sent a new set of crystals, frozen with liquid nitrogen, to be analyzed on an X-ray beamline at the Stanford Synchrotron Radiation Lightsource (SSRL) constructed as part of Caltech's Molecular Observatory with support from the Gordon and Betty Moore Foundation. And every week he worked up the data that came back and looked to see if any of the carbon monoxide bound to the active site.

"People have been seeing the resting state of the active site, where nothing was bound, for years," Spatzal says. "It's always the same thing. It never looks any different."

But on a recent Friday morning, Spatzal processed the latest batch of data, and lo and behold, he finally saw what he had been looking for.

"There was a moment where I looked at it and said, 'Hold on. Something looks different there,'" says Spatzal. "I wondered, 'Am I crazy?' You just don't expect it at first."

What he saw was a first—a crystal structure revealing carbon monoxide bound to the FeMo-cofactor. Spatzal, Rees, and their colleagues describe that structure and their methodology in the September 26 issue of the journal Science.

Spatzal figured out a way to optimize the crystallization process by using tiny crystal seeds to accelerate the rate of crystal growth and conducting all manipulations in the presence of carbon monoxide, allowing him to grow nice crystals of the MoFe-protein and then to see where the carbon monoxide was bound to the cofactor.

What he found was surprising. The carbon monoxide took the place of one of the sulfur atoms in the cofactor's original structure, bridging two of its iron atoms. Many people had expected that the carbon monoxide would bind differently, so that it would stick out, adding extra density to the structure. But because it displaced the sulfur, the cofactor only took on a slightly different arrangement of atoms.

In addition, Spatzal showed that when the carbon monoxide is removed, the sulfur can reattach, reactivating the cofactor so that it can once again fix nitrogen.

"As astonishing as this structure was—that the carbon monoxide replaced the sulfur—I think it's even more astonishing that Thomas was able to establish that the cofactor could be reactivated," Rees says. "I don't think anyone had imagined that you would get this sort of rearrangement of the cofactor as part of the interaction."

"You could imagine that if you put an inhibitor on a system, it could damage the metal center and inactivate the protein so that it would no longer do its job. The fact that we can get it back into an active state means that it's not permanently damaged, and that has physiological meaning in terms of how nitrogen fixation occurs in nature," says Spatzal.

The researchers note that this result would still be a long way off without the X-ray crystallography resources of Caltech's Molecular Observatory, which has abundant dedicated time on a beamline at SSRL. "We were really fortunate that the Moore Foundation funded this access to the beamline," says Rees. "That was really essential for this project because it took a lot of optimization to work everything out. We were able to keep regularly sending samples and right away get feedback about how things were working. It's an unbelievable resource."

Additional Caltech authors on the paper, "Ligand binding to the FeMo-cofactor: Structures of CO-bound and reactivated nitrogenase," are Kathryn A. Perez, a graduate student, and James Howard, a visiting associate who is also affiliated with the University of Minnesota, where Rees was a postdoc. Oliver Einsle of the Institut für Biochemie at the Albert-Ludwigs-Universität Freiburg, Germany, who was a postdoc with Rees as well as Spatzal's thesis advisor, is also a coauthor on the paper. Spatzal is an associate with HHMI.

This work was supported by grants from the National Institutes of Health, the Deutsche Forschungsgemeinschaft, and the European Research Council N-ABLE project. The Molecular Observatory is supported by the Gordon and Betty Moore Foundation, the Beckman Institute, and the Sanofi-Aventis Bioengineering Research Program at Caltech. Microbiology research at Caltech is supported by the Center for Environmental Microbial Interactions.

Written by Kimm Fesenmaier

Sweeping Air Devices For Greener Planes

The large amount of jet fuel required to fly an airplane from point A to point B can have negative impacts on the environment and—as higher fuel costs contribute to rising ticket prices—a traveler's wallet. With funding from NASA and the Boeing Company, engineers from the Division of Engineering and Applied Science at Caltech and their collaborators from the University of Arizona have developed a device that lets planes fly with much smaller tails, reducing the planes' overall size and weight, thus increasing fuel efficiency.

On October 8, the researchers—including Emilio Graff, research project manager in aerospace at Caltech and a leader on the project—were presented with a NASA Group Achievement Award "for exceptional achievement executing a full-scale wind-tunnel test, proving the flight feasibility of active flow control."

An airplane's tail is a critical part of the control system that steers the plane in flight. Air rushing past the vertical tail is deflected by the rudder, a movable flap at the rear of the tail. By swinging the rudder left or right, a pilot angles that airflow to one side or the other, helping to keep the plane flying straight during a strong crosswind.

During the high speeds of flight, the air flow around the tail is so strong that the rudder can control the plane's path with minimal movement. However, during the lower speeds of takeoff and landing, larger rudder deflections are required to maneuver the plane. And in the case of engine failure in a multiengine airplane, the vertical tail must generate enough force to keep the plane going straight by turning "against" the working engine. Airplane manufacturers deal with this challenge by fitting planes with very large vertical tails that can deflect enough air and generate enough force to control the plane—even at low speeds.

"But this means that the planes have a tail that's too big 99 percent of the time," says Emilio Graff, research project manager in aerospace at Caltech and a leader on the project, "because you only need a tail that big if you lose an engine during takeoff or landing. Imagine if the only way you could have airbags in your car was to tow them in a big trailer behind your car, just in case there was an accident. It ends up sucking up a lot of fuel."

The system—designed by Graff and his colleagues in the laboratory of Mory Gharib, Hans W. Liepmann Professor of Aeronautics and Bioinspired Engineering—would allow airplanes to be designed with smaller tails by helping to increase the tail's steering effect at low speeds. The work was done in collaboration with Israel Wygnanski, a professor at the University of Arizona.

In their new approach, the researchers installed air-blowing devices called sweeping jet actuators under the outer skin of the tail along the tail's vertical length. The sweeping jet actuators deliver a strong, steady burst of sweeping air just along the rudder, equivalent to the amount of airflow that would normally be encountered by the tail and rudder at higher speeds. The engineers hypothesized that with the sweeping jets turned on, a smaller tail and rudder could straighten the path of the airplane, even at low speeds.

Graff says that, using these devices, airplane manufacturers could reduce the size of airplane tails by 20 percent, only needing to activate the sweeping jet actuators during the low speeds of takeoff and landing. "That means that most of the time when you're flying around normally, you're saving gas because you have a smaller, lighter tail. So even if this system itself uses a lot of energy, it's only on in emergencies," he says. "When you take off or land, the air jets will be on—just in case an engine fails. But on a 12-hour flight, if you're only using the system for 30 minutes, you're still saving gas during 11 hours and 30 minutes."

The fuel savings come not only from reduced drag due to the smaller size, but also from weight savings and structural advantages from having a shorter tail, Graff adds.

The researchers first tested this hypothesis in the approximately five-by-six-foot Lucas Wind Tunnel at Caltech, recording the effect of sweeping jet actuators on a small model—only 15 percent of the size of an actual airplane tail. Because the jets of air created by the device move back and forth, "sweeping" the air over the length of the tail rather than blasting a single, linear burst of air, the researchers discovered that they could increase air flow over the entire tail with just six of the sweeping jets. On the small-scale model, these six jets boosted the effectiveness of the rudder by over 20 percent.

Upon seeing the favorable results from this preliminary experiment, and as part of NASA's Environmentally Responsible Aviation program, Graff and his colleagues designed the system to test the effects of sweeping jet actuators on a full-sized airliner tail. However, since such tails are nearly 27 feet tall, the engineers had to move this stage of their experiment off campus, to the National Full-Scale Aerodynamics Complex at Moffett Field, California—home of the world's two largest wind tunnels.

After machining sized-up sweeping jet actuators at Caltech, the multi-institutional team, which also included engineers from Boeing Research and Technology and NASA's Langley Research Center, installed the devices on a refurbished Boeing 757 tail, found at an airplane parts salvage yard. The large wind tunnel allowed the researchers to simulate wind conditions that realistically would be experienced during takeoff and landing. Data from the full-scale test confirmed that sweeping jet actuators could sufficiently increase the air flow around the rudder to steer the plane in the event of an engine failure.

The technique used by sweeping jet actuators—called flow control—is not new; it has previously been used for quick takeoffs and landings in military applications, Graff says. But those existing systems are not energy-efficient, he adds, "and if you need a third engine to power the system, then you may as well use it to fly the plane." The system designed by Graff and his colleagues is small and efficient enough to be powered by an airliner's auxiliary power unit—the engine that powers the cabin's air conditioning and lights at the gate. "We were able to prove that a system like this can work at the scale of a commercial airliner, without having to add an extra engine," Graff says.

For the next phase of the project, collaborators at Boeing will test the sweeping jet actuators on their Boeing ecoDemonstrator 757, a plane used for testing innovations that could improve the environmental performance of their aircraft.

These findings could one day help Boeing and other manufacturers produce "greener" planes. However, Graff notes, there are still kinks to work out—for example, as currently designed, the sweeping jets could be noisy for passengers—and the adoption of any new features on an aircraft can be a lengthy process. But once adopted, the payoffs could be huge—and improving the tail is not the only goal, Graff says.

"This is only the beginning. The tail is a 'low risk' surface; modifying it puts engineers at ease compared to, for example, modifying wings," he says. "But the data shows that similar systems could be applied to wings to increase the cruise speed of airplanes and allow some maneuvers to be achieved without moving parts.

"I would be surprised if this ends up in the next line of airplanes—since the new planes are already probably years into the design stage—but some version of this device could be adopted in the near future," he says. And the researchers estimate that if all commercial airplanes were fitted with this device and used it for one year, the fuel savings would be the equivalent of taking a year's worth of traffic off of Southern California's notoriously crowded 405 freeway—a worthy goal.

The sweeping jet actuator was developed as part of NASA's Environmentally Responsible Aviation (ERA) project, which aims to reduce the impact of aviation on the environment.


Getting To Know Super-Earths

"If you have a coin and flip it just once, what does that tell you about the odds of heads versus tails?" asks Heather Knutson, assistant professor of planetary science at Caltech. "It tells you almost nothing. It's the same with planetary systems," she says.

For as long as astronomers have been looking to the skies, we have had just one planetary system—our own—to study in depth. That means we have only gotten to know a handful of possible outcomes of the planet formation process, and we cannot say much about whether the features observed in our solar system are common or rare when compared to planetary systems orbiting other stars.

That is beginning to change. NASA's Kepler spacecraft, which launched on a planet-hunting mission in 2009, searched one small patch of the sky and identified more than 4,000 candidate exoplanets, worlds orbiting stars other than our own sun. It was the first survey to provide a definitive look at the relative frequency of planets as a function of size; that is, it let astronomers ask how common gas giant planets, like Jupiter, are compared to planets that look a lot more like Earth.

Kepler's results suggest that small planets are much more common than big ones. Interestingly, the most common planets are those that are just a bit larger than Earth but smaller than Neptune—the so-called super-Earths.

However, although super-Earths appear to be common in our local corner of the galaxy, there are no examples of them in our own solar system. Our current observations tell us something about the sizes and orbits of these newly discovered worlds, but we have very little insight into their compositions.

"We are left with this situation where super-Earths appear to be the most common kind of exoplanet in the galaxy, but we don't know what they're made of," says Knutson.

There are a number of possibilities. A super-Earth could be just that: a bigger version of Earth—mostly rocky, with an atmosphere. Then again, it could be a mini-Neptune, with a large rock-ice core encapsulated in a thick envelope of hydrogen and helium. Or it could be a water world—a rocky core enveloped in a blanket of water and perhaps an atmosphere composed of steam (depending on the temperature of the planet).

"It's really interesting to think about these planets because they could have so many different compositions, and knowing their composition will tell us a lot about how planets form," Knutson says. For example, because planets in this size range acquire most of their mass by pulling in and incorporating solid material, water worlds initially must have formed far away from their parent stars, where temperatures were cold enough for water to freeze. Most of the super-Earths known today orbit very close to their host stars. If water-dominated super-Earths turn out to be common, it would indicate that most of these worlds did not form in their present locations but instead migrated in from more distant orbits.

In addition to thinking about what these worlds might be made of, Knutson and her students use space-based observatories like the Hubble and Spitzer Space Telescopes to learn more about the distant planets. For example, the researchers analyze the starlight that filters through a planet's atmosphere as the planet passes in front of its star to learn about the composition of that atmosphere. Molecular species present in the planet's atmosphere absorb light at particular wavelengths, so by using Hubble and Spitzer to view the planet and its atmosphere at a number of different wavelengths, the researchers can determine which chemical compounds are present.
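The size of that signal can be estimated with a standard back-of-the-envelope scaling (textbook reasoning, not a calculation from the team's paper). A transiting planet dims its star by the fraction of the stellar disk it covers, and an absorbing atmosphere adds an annulus a few scale heights thick:

$$\delta \approx \left(\frac{R_p}{R_*}\right)^{2}, \qquad \Delta\delta \approx \frac{2\,N R_p H}{R_*^{2}}, \qquad H = \frac{k_B T}{\mu g},$$

where $H$ is the atmospheric scale height and $N$ is a small number of order a few. Because $H$ shrinks as the mean molecular weight $\mu$ grows, a puffy hydrogen-rich atmosphere leaves a much larger wavelength-dependent imprint than a compact water- or steam-dominated one, which is what makes the measurement diagnostic of composition.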

To date, nearly two dozen planets have been characterized with this technique. These observations have shown that the enormous gas giant exoplanets known as hot Jupiters have water, carbon monoxide, hydrogen, helium—and potentially carbon dioxide and methane—in their atmospheres.

However, right now super-Earths are the hot topic. Unfortunately, although hundreds of super-Earths have been found, only a few are close enough and orbiting bright enough stars for astronomers to study in this way using currently available telescopes.

The first super-Earth that the astronomical community targeted for atmospheric studies was GJ 1214b, in the constellation Ophiuchus. Based on its average density (determined from its mass and radius), it was clear from the start that the planet was not entirely rocky. However, its density could be equally well matched by either a primarily water composition or a Neptune-like composition with a rocky core surrounded by a thick gas envelope. Information about the atmosphere could help astronomers determine which one it was: a mini-Neptune's atmosphere should contain lots of molecular hydrogen, while a water world's atmosphere should be water dominated.
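The density argument itself is simple geometry. Here is a minimal sketch of the calculation; the mass and radius below are approximate published values for GJ 1214b, used only for illustration:

```python
import math

M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

def bulk_density(mass_earths, radius_earths):
    """Mean planet density in g/cm^3, from mass and radius in Earth units."""
    m = mass_earths * M_EARTH
    r = radius_earths * R_EARTH
    rho_si = m / ((4.0 / 3.0) * math.pi * r**3)   # kg/m^3
    return rho_si / 1000.0                        # convert to g/cm^3

# GJ 1214b: roughly 6.6 Earth masses and 2.7 Earth radii
print(bulk_density(6.55, 2.68))   # ~1.9 g/cm^3, versus ~5.5 for rocky Earth
```

A mean density this far below Earth's is what rules out a purely rocky makeup, while remaining consistent with either a water-rich planet or a rocky core under a thick gas envelope.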

GJ 1214b has been a popular target for the Hubble Space Telescope since its discovery in 2009. Disappointingly, after a first Hubble campaign led by researchers at the Harvard-Smithsonian Center for Astrophysics, the spectrum came back featureless—there were no chemical signatures in the atmosphere. After a second set of more sensitive observations led by researchers at the University of Chicago returned the same result, it became clear that a high cloud deck must be masking the signature of absorption from the planet's atmosphere.

"It's exciting to know that there are clouds on the planet, but the clouds are getting in the way of what we actually wanted to know, which is what is this super-Earth made of?" explains Knutson.

Now Knutson's team has studied a second super-Earth: HD 97658b, in the constellation Leo. They report their findings in the current issue of The Astrophysical Journal. The researchers used Hubble to measure the decrease in light when the planet passed in front of its parent star over a range of infrared wavelengths in order to detect small changes caused by water vapor in the planet's atmosphere.

However, again the data came back featureless. One explanation is that HD 97658b is also enveloped in clouds. However, Knutson says, it is also possible that the planet has an atmosphere that is lacking hydrogen. Because such an atmosphere could be very compact, it would make the telltale fingerprints of water vapor and other molecules very small and hard to detect. "Our data are not precise enough to tell whether it's clouds or the absence of hydrogen in the atmosphere that's causing the spectrum to be flat," she says. "This was just a quick first look to give us a rough idea of what the atmosphere looked like. Over the next year, we will use Hubble to observe this planet again in more detail. We hope those observations will provide a clear answer to the current mystery."

It appears that clouds are going to continue to pose a real challenge in studies of super-Earths, so Knutson and other researchers are working to understand the composition of the clouds around these planets and the conditions under which they form. The hope is that they will get to the point where they can predict which worlds will be shrouded in clouds. "If we can then target planets that we think should be cloud-free, that will help us make optimal use of Hubble's time," she says.

Looking to the future, Knutson says there is only one more known super-Earth that can be targeted for atmospheric studies with current telescopes. But new surveys, such as NASA's extended Kepler K2 mission and the Transiting Exoplanet Survey Satellite (TESS), slated for launch in 2017, should identify a large sample of new targets.

Of course, she says, astronomers would love to study exoplanets the size of Earth, but these worlds are just a bit too small and too difficult to observe with Hubble and Spitzer. NASA's James Webb Space Telescope, which is scheduled for launch in 2018, will provide the first opportunity to study more Earth-like worlds. "Super-Earths are at the edge of what we can study right now," Knutson says. "But super-Earths are a good consolation prize—they're interesting in their own right, and they give us a chance to explore new kinds of worlds with no analog in our own solar system."

Written by Kimm Fesenmaier

Rock-Dwelling Microbes Remove Methane from Deep Sea

Methane-breathing microbes that inhabit rocky mounds on the seafloor could be preventing large volumes of the potent greenhouse gas from entering the oceans and reaching the atmosphere, according to a new study by Caltech researchers.

The rock-dwelling microbes, described in the October 14 issue of Nature Communications, represent a previously unrecognized biological sink for methane, and as a result the finding could reshape scientists' understanding of where this greenhouse gas is being consumed in subseafloor habitats, says Professor of Geobiology Victoria Orphan, who led the study.

"Methane is a much more powerful greenhouse gas than carbon dioxide, so tracing its flow through the environment is really a priority for climate models and for understanding the carbon cycle," Orphan says.

Orphan's team has been studying methane-breathing marine microorganisms for nearly 20 years. The microbes they focus on survive without oxygen, relying instead on sulfate ions present in seawater for their energy needs. Previous work by Orphan's team helped show that the methane-breathing system is actually made up of two different kinds of microorganisms that work closely with one another. One of the partners, dubbed "ANME" for "ANaerobic MEthanotrophs," belongs to a type of ancient single-celled creatures called the archaea.

Through a mechanism that is still unclear, ANME work closely with bacteria to consume methane using sulfate from seawater. "Without this biological process, much of that methane would enter the water column, and the escape rates into the atmosphere would probably be quite a bit higher," says study first author Jeffrey Marlow, a geobiology graduate student in Orphan's lab.

Until now, however, the activity of ANME and their bacterial partners had been primarily studied in sediments located in cold seeps, areas on the ocean bottom where methane is escaping from subseafloor sources into the water above. The new study marks the first time they have been observed to oxidize methane inside carbonate mounds, huge rocky outcroppings of calcium carbonate that can rise hundreds of feet above the seafloor.

If the microbes are living inside the mounds themselves, then the distribution of methane consumption is significantly different from what was previously thought. "Methane-derived carbonates represent a large volume within many seep systems, and finding active methane-consuming archaea and bacteria in the interior of these carbonate rocks extends the known habitat for methane-consuming microorganisms beyond the relatively thin layer of sediment that may overlay a carbonate mound," Marlow says.

Orphan and her team detected evidence of methane-breathing microbes in carbonate rocks collected from three cold seeps around the world: one at a tectonic plate boundary near Costa Rica; another in the Eel River basin off the coast of northwestern California; and a third at Hydrate Ridge, off the Oregon coast. The team used manned and robotic submersibles to collect the rock samples from depths ranging from 2,000 feet to nearly half a mile below the surface.

Marlow has vivid memories of being a passenger in the submersible Alvin during one of those rock-retrieval missions. "As you sink down, the water outside your window goes from bright blue surface water to darker turquoise and navy blue and all these shades of blue that you didn't know existed until it gets completely dark," Marlow recalls. "And then you start seeing flashes of light because the vehicle is perturbing the water column and exciting fluorescent organisms. When you finally get to the seafloor, Alvin's exterior lights turn on, and this crazy alien world is illuminated in front of you."

The carbonate mounds that the subs visited often serve as foundations for coral and sponges, and are home to rockfishes, clams, crabs, and other aquatic life. For their study, the team members gathered rock samples not only from carbonate mounds located within active cold seeps, where methane could be seen escaping from the seafloor into the water, but also from mounds that appeared to be dormant.

Once the carbonate rocks were collected, they were transported back to the surface and rushed into a cold room aboard a research ship. In the cold room, which was maintained at the temperature of the deep sea, the team cracked open the carbonates in order to gather material from their interiors. "We wanted to make sure we weren't just sampling material from the surface of the rocks," Marlow says.

Using a microscope, the team confirmed that ANME and sulfate-reducing bacterial cells were indeed present inside the carbonate rocks, and genetic analysis of their DNA showed that they were related to methanotrophs that had previously been characterized in seafloor sediment. The scientists also used a technique that involved radiolabeled 14C-methane tracer gas to quantify the rates of methane consumption in the carbonate rocks and sediments from both the actively seeping sites and the areas appearing to be inactive. They found that the rock-dwelling methanotrophs consumed methane at a slower rate than their sediment-dwelling cousins.

"The carbonate-based microbes breathed methane at roughly one-third the rate of those gathered from sediments near active seep sites," Marlow says. "However, because there are likely many more microbes living in carbonate mounds than in sediments, their contributions to methane removal from the environment may be more significant."

The rock samples that were harvested near supposedly dormant cold seeps also harbored microbial communities capable of consuming methane. "We were surprised to find that these marine microorganisms are still viable and, if exposed to methane, can continue to oxidize this greenhouse gas long after surface expressions of seepage have vanished," Orphan says.

Along with Orphan and Marlow, additional coauthors on the paper, "Carbonate-hosted methanotrophy represents an unrecognized methane sink in the deep sea," include former Caltech associate research scientist Joshua Steele, now at the Southern California Coastal Water Research Project; Wiebke Ziebis, an associate professor at the University of Southern California; Andrew Thurber, an assistant professor at Oregon State University; and Lisa Levin, a professor at the Scripps Institution of Oceanography. Funding for the study was provided by the National Science Foundation; NASA's Astrobiology Institute; the Gordon and Betty Moore Foundation Marine Microbiology Initiative grant; and the National Research Council of the National Academies. 

Written by Ker Than


NuSTAR Discovers Impossibly Bright Dead Star

X-ray source in the Cigar Galaxy is the first ultraluminous pulsar ever detected

Astronomers working with NASA's Nuclear Spectroscopic Telescope Array (NuSTAR), led by Caltech's Fiona Harrison, have found a pulsating dead star beaming with the energy of about 10 million suns. The object, previously thought to be a black hole because it is so powerful, is in fact a pulsar—the incredibly dense rotating remains of a star.

"This compact little stellar remnant is a real powerhouse. We've never seen anything quite like it," says Harrison, NuSTAR's principal investigator and the Benjamin M. Rosen Professor of Physics at Caltech. "We all thought an object with that much energy had to be a black hole."

Dom Walton, a postdoctoral scholar at Caltech who works with NuSTAR data, says that with its extreme energy, this pulsar takes the top prize in the weirdness category. Pulsars are typically between one and two times the mass of the sun. This new pulsar presumably falls in that same range but shines about 100 times brighter than theory suggests something of its mass should be able to.

"We've never seen a pulsar even close to being this bright," Walton says. "Honestly, we don't know how this happens, and theorists will be chewing on it for a long time." Besides being weird, the finding will help scientists better understand a class of very bright X-ray sources, called ultraluminous X-ray sources (ULXs).

Harrison, Walton, and their colleagues describe NuSTAR's detection of this first ultraluminous pulsar in a paper that appears in the current issue of Nature.

"This was certainly an unexpected discovery," says Harrison. "In fact, we were looking for something else entirely when we found this."

Earlier this year, astronomers in London detected a spectacular, once-in-a-century supernova (dubbed SN2014J) in a relatively nearby galaxy known as Messier 82 (M82), or the Cigar Galaxy, 12 million light-years away. Because of the rarity of that event, telescopes around the world and in space adjusted their gaze to study the aftermath of the explosion in detail.


This animation shows a neutron star—the core of a star that exploded in a massive supernova. This particular neutron star is known as a pulsar because it sends out rotating beams of X-rays that sweep past Earth like lighthouse beacons. (Credit: NASA/JPL-Caltech)

Besides the supernova, M82 harbors a number of other ULXs. When Matteo Bachetti of the Université de Toulouse in France, the lead author of this new paper, took a closer look at these ULXs in NuSTAR's data, he discovered that something in the galaxy was pulsing, or flashing light.

"That was a big surprise," Harrison says. "For decades everybody has thought these ultraluminous X-ray sources had to be black holes. But black holes don't have a way to create this pulsing."

But pulsars do. They are like giant magnets that emit radiation from their magnetic poles. As a pulsar rotates, an outside observer with an X-ray telescope, situated at the right angle, sees flashes of powerful light as the beam sweeps periodically across the observer's field of view, like a lighthouse beacon.

The reason most astronomers had assumed black holes were powering ULXs is that these X-ray sources are so incredibly bright. Black holes can be anywhere from 10 to billions of times the mass of the sun, making their gravitational tug much stronger than that of a pulsar. As matter falls onto a black hole, gravitational energy is converted to heat, which creates X-ray light. The bigger the black hole, the more energy there is to make the object shine.

Surprised to see the flashes coming from M82, the NuSTAR team checked and rechecked the data. The flashes were really there, with a pulse showing up every 1.37 seconds.

The next step was to figure out which X-ray source was producing the flashes. Walton and several other Caltech researchers analyzed the data from NuSTAR and a second NASA X-ray telescope, Chandra, to rule out about 25 different X-ray sources, finally settling on a ULX known as M82X-2 as the source of the flashes.

With the pulsar and its location within M82 identified, there are still many questions left to answer. The object's brightness is many times higher than the Eddington limit, a basic physics guideline that sets an upper limit on the brightness that an object of a given mass should be able to achieve.
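For reference, the textbook form of that limit, found by balancing gravity against radiation pressure on infalling ionized hydrogen, is

$$L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T} \approx 1.26\times10^{38}\left(\frac{M}{M_\odot}\right)\,\mathrm{erg\,s^{-1}},$$

where $m_p$ is the proton mass and $\sigma_T$ is the Thomson scattering cross-section. For a pulsar of one to two solar masses this works out to roughly $2\times10^{38}$ erg per second, about a hundred times fainter than the luminosity observed here.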

"This is the most extreme violation of that limit that we've ever seen," says Walton. "We have known that things can go above that by a small amount, but this blows that limit away."

NuSTAR is particularly well-suited to make discoveries like this one. Not only does the space telescope see high-energy X-rays, but it sees them in a unique way. Rather than snapping images the way that your cell-phone camera does—by integrating the light such that images blur if you move—NuSTAR detects individual particles of X-ray light and marks when they are measured. That allows the team to do timing analyses and, in this case, to see that the light from the ULX was coming in pulses.
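As a concrete illustration of what such a timing analysis can look like, here is a minimal epoch-folding sketch. It is a generic textbook method rather than the NuSTAR team's actual pipeline, and every number in the example is invented except the 1.37-second period:

```python
import numpy as np

def folded_profile(arrival_times, period, n_bins=16):
    """Histogram of photon phases when folded at a trial period."""
    phases = np.mod(arrival_times, period) / period
    counts, _ = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    return counts

def chi2_flat(counts):
    """Chi-square of the folded profile against a constant (no-pulse) model."""
    expected = counts.mean()
    return ((counts - expected) ** 2 / expected).sum()

# Example: 5,000 steady background photons plus 2,000 photons
# arriving once per 1.37-second cycle
rng = np.random.default_rng(1)
background = rng.uniform(0.0, 2000.0, 5000)
pulsed = 1.37 * (np.arange(2000) + 0.02 * rng.normal(size=2000))
photons = np.concatenate([background, pulsed])
print(chi2_flat(folded_profile(photons, 1.37)))  # huge value => strong pulsation
print(chi2_flat(folded_profile(photons, 1.00)))  # near n_bins => consistent with flat
```

Folding at the true period piles the pulsed photons into a narrow range of phases, so the profile is wildly inconsistent with a flat model; folding at a wrong period smears them out.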

Now that the NuSTAR team has shown that this ULX is a pulsar, Harrison points out that many other known ULXs may in fact be pulsars as well. "Everybody had assumed all of these sources were black holes," she says. "Now I think people have to go back to the drawing board and decide whether that's really true. This could just be a very unique, strange object, or it could be that they're not that uncommon. We just don't know. We need more observations to see if other ULXs are pulsing."

Along with Harrison and Walton, additional Caltech authors on the paper, "An Ultraluminous X-ray Source Powered by an Accreting Neutron Star," are postdoctoral scholars Felix Fürst and Shriharsh Tendulkar; research scientists Brian W. Grefenstette and Vikram Rana; and Shri Kulkarni, the John D. and Catherine T. MacArthur Professor of Astronomy and Planetary Science and director of the Caltech Optical Observatories. The work was supported by NASA and made use of data supplied by the UK Swift Science Data Centre at the University of Leicester.

Written by Kimm Fesenmaier

Swimming Sea-Monkeys Reveal How Zooplankton May Help Drive Ocean Circulation

Brine shrimp, which are sold as pets known as Sea-Monkeys, are tiny—only about half an inch long each. With about 10 small leaf-like fins that flap about, they look as if they could hardly make waves.

But get billions of similarly tiny organisms together and they can move oceans.

It turns out that the collective swimming motion of Sea-Monkeys and other zooplankton—swimming plankton—can generate enough swirling flow to potentially influence the circulation of water in oceans, according to a new study by Caltech researchers.

The effect could be as strong as those due to the wind and tides, the main factors that are known to drive the up-and-down mixing of oceans, says John Dabiri, professor of aeronautics and bioengineering at Caltech. According to the new analysis by Dabiri and mechanical engineering graduate student Monica Wilhelmus, organisms like brine shrimp, despite their diminutive size, may play a significant role in stirring up nutrients, heat, and salt in the sea—major components of the ocean system.

In 2009, Dabiri's research team studied jellyfish to show that small animals can generate flow in the surrounding water. "Now," Dabiri says, "these new lab experiments show that similar effects can occur in organisms that are much smaller but also more numerous—and therefore potentially more impactful in regions of the ocean important for climate."

The researchers describe their findings in the journal Physics of Fluids.

Brine shrimp (specifically Artemia salina) can be found in toy stores, as part of kits that allow you to raise a colony at home. But in nature, they live in bodies of salty water, such as the Great Salt Lake in Utah. Their behavior is cued by light: at night, they swim toward the surface to munch on photosynthesizing algae while avoiding predators. During the day, they sink back into the dark depths of the water.


A. salina (a species of brine shrimp, commonly known as Sea-Monkeys) begin a vertical migration, stimulated by a vertical blue laser light.

To study this behavior in the laboratory, Dabiri and Wilhelmus use a combination of blue and green lasers to induce the shrimp to migrate upward inside a big tank of water. The green laser at the top of the tank provides a bright target for the shrimp to swim toward while a blue laser rising along the side of the tank lights up a path to guide them upward.

The tank water is filled with tiny, silver-coated hollow glass spheres 13 microns wide (about one-half of one-thousandth of an inch). By tracking the motion of those spheres with a high-speed camera and a red laser that is invisible to the organisms, the researchers can measure how the shrimp's swimming causes the surrounding water to swirl.
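For readers curious how tracking those spheres yields a flow field, here is a minimal sketch of the cross-correlation step at the heart of particle image velocimetry, the generic technique behind this kind of measurement. It is not the specific software used in the Dabiri lab, and the two frames in the example are synthetic:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def window_displacement(frame_a, frame_b):
    """Pixel displacement that best aligns two small image windows."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.real(ifft2(fft2(a).conj() * fft2(b)))   # circular cross-correlation
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n = frame_a.shape[0]
    wrap = lambda s: (s + n // 2) % n - n // 2        # map shifts to [-n/2, n/2)
    return wrap(dx), wrap(dy)

# Example: a particle pattern moved 3 pixels in x and 1 pixel in y
rng = np.random.default_rng(2)
frame1 = rng.random((32, 32))
frame2 = np.roll(frame1, shift=(1, 3), axis=(0, 1))
print(window_displacement(frame1, frame2))   # -> (3, 1)
```

Dividing each camera frame into many such windows, and dividing each window's displacement by the time between frames, turns a movie of drifting tracer particles into a map of water velocities.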

Although researchers had proposed the idea that swimming zooplankton can influence ocean circulation, the effect had never been directly observed, Dabiri says. Past studies could only analyze how individual organisms disturb the water surrounding them.

But thanks to this new laser-guided setup, Dabiri and Wilhelmus have been able to determine that the collective motion of the shrimp creates powerful swirls—stronger than would be produced by simply adding up the effects produced by individual organisms.

Adding up the effect of all of the zooplankton in the ocean—assuming they have a similar influence—could inject as much as a trillion watts of power into the oceans to drive global circulation, Dabiri says. In comparison, the winds and tides contribute a combined two trillion watts.

Using this new experimental setup will enable future studies to better untangle the complex relationships between swimming organisms and ocean currents, Dabiri says. "Coaxing Sea-Monkeys to swim when and where you want them to is even more difficult than it sounds," he adds. "But Monica was undeterred over the course of this project and found a creative solution to a very challenging problem."

The title of the Physics of Fluids paper is "Observations of large-scale fluid transport by laser-guided plankton aggregations." The research was supported by the U.S.-Israel Binational Science Foundation, the Office of Naval Research, and the National Science Foundation.


Variability Keeps The Body In Balance

Although the heart beats out a very familiar "lub-dub" pattern that speeds up or slows down as our activity increases or decreases, the pattern itself isn't as regular as you might think. In fact, the amount of time between heartbeats can vary even at a "constant" heart rate—and that variability, doctors have found, is a good thing.

Reduced heart rate variability (HRV) has been found to be predictive of a number of illnesses, such as congestive heart failure and inflammation. For athletes, a drop in HRV has also been linked to fatigue and overtraining. However, the underlying physiological mechanisms that control HRV—and exactly why this variation is important for good health—are still a bit of a mystery.
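To make "variability" concrete, here is a minimal sketch of two standard HRV statistics computed from beat-to-beat (R-R) intervals. The interval values are invented for illustration, and this is not the analysis performed in the study:

```python
import numpy as np

def sdnn(rr_seconds):
    """Standard deviation of the beat-to-beat intervals (SDNN)."""
    return np.std(rr_seconds, ddof=1)

def rmssd(rr_seconds):
    """Root mean square of successive interval differences (RMSSD)."""
    return np.sqrt(np.mean(np.diff(rr_seconds) ** 2))

# Two interval series with the same average heart rate (~60 beats per minute)
steady   = np.array([1.00, 1.01, 0.99, 1.00, 1.01, 0.99])
variable = np.array([0.90, 1.10, 0.95, 1.05, 0.92, 1.08])
print(sdnn(steady), rmssd(steady))        # low variability
print(sdnn(variable), rmssd(variable))    # higher variability, same mean rate
```

Both series describe a heart beating about once per second; only the second shows the beat-to-beat flexibility that doctors have found to be a marker of good health.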

By combining heart rate data from real athletes with a branch of mathematics called control theory, a collaborative team of physicians and Caltech researchers from the Division of Engineering and Applied Science has now devised a way to better understand the relationship between HRV and health—a step that could soon inform better monitoring technologies for athletes and medical professionals.

The work was published in the August 19 print issue of the Proceedings of the National Academy of Sciences.

To run smoothly, complex systems, such as computer networks, cars, and even the human body, rely upon give-and-take connections and relationships among a large number of variables; if one variable must remain stable to maintain a healthy system, another variable must be able to flex to maintain that stability. Because it would be too difficult to map each individual variable, the mathematics and software tools used in control theory allow engineers to summarize the ups and downs in a system and pinpoint the source of a possible problem.

Researchers who study control theory are increasingly discovering that these concepts can also be extremely useful in studies of the human body. In order for a body to work optimally, it must operate in an environment of stability called homeostasis. When the body experiences stress—for example, from exercise or extreme temperatures—it can maintain a stable blood pressure and constant body temperature in part by dialing the heart rate up or down. And HRV plays an important role in maintaining this balance, says study author John Doyle, the Jean-Lou Chameau Professor of Control and Dynamical Systems, Electrical Engineering, and Bioengineering.

"A familiar related problem is in driving," Doyle says. "To get to a destination despite varying weather and traffic conditions, any driver—even a robotic one—will change factors such as acceleration, braking, steering, and wipers. If these factors suddenly became frozen and unchangeable while the car was still moving, it would be a nearly certain predictor that a crash was imminent. Similarly, loss of heart rate variability predicts some kind of malfunction or 'crash,' often before there are any other indications," he says.

To study how HRV helps maintain this version of "cruise control" in the human body, Doyle and his colleagues measured the heart rate, respiration rate, oxygen consumption, and carbon dioxide generation of five healthy young athletes as they completed experimental exercise routines on stationary bicycles.

By combining the data from these experiments with standard models of the physiological control mechanisms in the human body, the researchers were able to determine the essential tradeoffs that are necessary for athletes to produce enough power to maintain an exercise workload while also maintaining the internal homeostasis of their vital signs.

"For example, the heart, lungs, and circulation must deliver sufficient oxygenated blood to the muscles and other organs while not raising blood pressure so much as to damage the brain," Doyle says. "This is done in concert with control of blood vessel dilation in the muscles and brain, and control of breathing. As the physical demands of the exercise change, the muscles must produce fluctuating power outputs, and the heart, blood vessels, and lungs must then respond to keep blood pressure and oxygenation within narrow ranges."

Once these trade-offs were defined, the researchers then used control theory to analyze the exercise data and found that a healthy heart must maintain certain patterns of variability during exercise to keep this complicated system in balance. Loss of this variability is a precursor of fatigue, the stress induced by exercise. Today, some HRV monitors in the clinic can let a doctor know when variability is high or low, but they provide little in the way of an actionable diagnosis.

Because monitors in hospitals can already provide HRV levels and dozens of other signals and readings, the integration of such mathematical analyses of control theory into HRV monitors could, in the future, provide a way to link a drop in HRV to a more specific and treatable diagnosis. In fact, one of Doyle's students has used an HRV application of control theory to better interpret traditional EKG signals.

Control theory could also be incorporated into the HRV monitors used by athletes to prevent fatigue and injury from overtraining, he says.

"Physicians who work in very data-intensive settings like the operating room or ICU are in urgent need of ways to rapidly and acutely interpret the data deluge," says Marie Csete, MD (PhD, '00), chief scientific officer at the Huntington Medical Research Institutes and a coauthor on the paper. "We hope this work is a first step in a larger research program that helps physicians make better use of data to care for patients."

This study is not the first to apply control theory in medicine: the approach has already informed the design of a wearable artificial pancreas for patients with type 1 diabetes and a prototype device that automatically administers anesthetics during surgery. Nor will it be the last, says Doyle, whose sights are set next on using control theory to understand the progression of cancer.

"We have a new approach, similarly based on control of networks, that organizes and integrates a bunch of new ideas floating around about the role of healthy stroma—non-tumor cells present in tumors—in promoting cancer progression," he says.

"Based on discussions with Dr. Peter Lee at City of Hope [a cancer research and treatment center], we now understand that the non-tumor cells interact with the immune system and with chemotherapeutic drugs to modulate disease progression," Doyle says. "And I'm hoping there's a similar story there, where thinking rigorously about the tradeoffs in development, regeneration, inflammation, wound healing, and cancer will lead to new insights and ultimately new therapies."

Other Caltech coauthors on the study include former graduate students Na Li (PhD '13), now an assistant professor at Harvard, and Somayeh Sojoudi (PhD '12), currently at NYU, as well as graduate students Chenghao Simon Chien and Jerry Cruz. Other collaborators on the study were Benjamin Recht, a former postdoctoral scholar in Doyle's lab and now an assistant professor at UC Berkeley; Daniel Bahmiller, a clinician training in public health; and David Stone, MD, an expert in ICU medicine from the University of Virginia School of Medicine.


A New Way to Prevent the Spread of Devastating Diseases

For decades, researchers have tried to develop broadly effective vaccines to prevent the spread of illnesses such as HIV, malaria, and tuberculosis. While limited progress has been made along these lines, there are still no licensed vaccines that can protect most people from these devastating diseases.

So what are immunologists to do when vaccines just aren't working?

At Caltech, Nobel Laureate David Baltimore and his colleagues have approached the problem in a different way. Whereas vaccines introduce substances such as antigens into the body in the hope of eliciting an appropriate immune response—the generation of either antibodies that might block an infection or T cells capable of attacking infected cells—the Caltech team thought: Why not provide the body with step-by-step instructions for producing specific antibodies that have already been shown to neutralize a particular disease?

The method they developed—originally to trigger an immune response to HIV—is called vectored immunoprophylaxis, or VIP. The technique was so successful that it has since been applied to a number of other infectious diseases, including influenza, malaria, and hepatitis C.

"It is enormously gratifying to us that this technique can have potentially widespread use for the most difficult diseases that are faced particularly by the less developed world," says Baltimore, president emeritus and the Robert Andrews Millikan Professor of Biology at Caltech.

VIP relies on the prior identification of one or more antibodies that, in laboratory tests, prevent infection by a wide range of isolates of a particular pathogen. Once such antibodies are in hand, researchers can incorporate the genes that encode them into an adeno-associated virus (AAV), a small, harmless virus that has proven useful in gene-therapy trials. When the AAV is injected into muscle, the genes direct the muscle tissue to produce the specified antibodies, which can then enter the circulation and protect against infection.

In 2011, the Baltimore group reported in Nature that they had used the technique to deliver antibodies that effectively protected mice from HIV infection. Alejandro Balazs was lead author on that paper and was a postdoctoral scholar in the Baltimore lab at the time.

"We expected that at some dose, the antibodies would fail to protect the mice, but it never did—even when we gave mice 100 times more HIV than would be needed to infect seven out of eight mice," said Balazs, now at the Ragon Institute of MGH, MIT and Harvard. "All of the exposures in this work were significantly larger than a human being would be likely to encounter."

At the time, the researchers noted that the leap from mice to humans is large but said they were encouraged by the high levels of antibodies the mice were able to produce after a single injection and how effectively the mice were protected from HIV infection for months on end. Baltimore's team is now working with a manufacturer to produce the materials needed for human clinical trials that will be conducted by the Vaccine Research Center at the National Institutes of Health.

Moving on from HIV, the Baltimore lab's next goal was protection against influenza A. Although reasonably effective influenza vaccines exist, seasonal flu epidemics still cause more than 20,000 deaths each year, on average, in the United States. We are encouraged to get flu shots every fall because the influenza virus is something of a moving target—it evolves to evade immunity. There are also many different strains of influenza A (e.g., H1N1 and H3N2), each incorporating a different combination of the various forms of the proteins hemagglutinin (H) and neuraminidase (N). To chase this target, the vaccine is reformulated each year, but sometimes it fails to prevent the spread of the strains prevalent that year.

But about five years ago, researchers began identifying a new class of anti-influenza antibodies that can prevent infection by a great many strains of the virus. Instead of binding to the head of the influenza virus, as most flu-fighting antibodies do, these new antibodies target the stalk that holds up the head. And while the head is highly adaptable—meaning that even when mutations occur there, the virus can often remain functional—the stalk must remain essentially unchanged for the virus to survive. That makes these stalk antibodies very hard for the virus to mutate against.

In 2013, the Baltimore group stitched the genes for two of these new antibodies into an AAV and showed that mice injected with the vector were protected against multiple flu strains, including all H1, H2, and H5 influenza strains tested. This was even true of older mice and those without a properly functioning immune system—a particularly important finding considering that most deaths from the flu occur in the elderly and immunocompromised populations. The group reported its results in the journal Nature Biotechnology.

"We have shown that we can protect mice completely against flu using a kind of antibody that doesn't need to be changed every year," says Baltimore. "It is important to note that this has not been tested in humans, so we do not yet know what concentration of antibody can be produced by VIP in humans. However, if it works as well as it does in mice, VIP may provide a plausible approach to protect even the most vulnerable patients against epidemic and pandemic influenza."

Now that the Baltimore lab has shown VIP to be so effective, other groups from around the country have adopted the Caltech-developed technique to try to ward off malaria, hepatitis C, and tuberculosis.

In August, a team led by researchers at the Johns Hopkins Bloomberg School of Public Health reported in the Proceedings of the National Academy of Sciences (PNAS) that as many as 70 percent of the mice they had injected using the VIP procedure were protected from infection by Plasmodium falciparum, the parasite that causes the most lethal of the four types of malaria. A subset of mice in the study produced particularly high levels of the disease-fighting antibodies; in those mice, the immunization was 100 percent effective.

"This is also just a first-generation antibody," says Baltimore, who was a coauthor on the PNAS study. "Knowing now that you can get this kind of protection, it's worth trying to get much better antibodies, and I trust that people in the malaria field will do that."

Most recently, a group led by researchers from The Rockefeller University showed that three hepatitis-C-fighting antibodies delivered using VIP efficiently protected mice from the virus. The results were published in the September 17 issue of the journal Science Translational Medicine. The researchers also found that the treatment temporarily cleared the virus from mice that had already been infected, although additional work is needed to determine how to prevent the disease from relapsing. Interestingly, the work suggests that antibodies effective against hepatitis C, once the virus has taken root in the liver, may work by protecting uninfected liver cells from infection while allowing already infected cells to be cleared from the body.

An additional project is currently evaluating the use of VIP for the prevention of tuberculosis—a particular challenge given the lack of proven tuberculosis-neutralizing antibodies.

"When we started this work, we imagined that it might be possible to use VIP to fight other diseases, so it has been very exciting to see other groups adopting the technique for that purpose," Baltimore says. "If we can get positive clinical results in humans with HIV, we think that would really encourage people to think about using VIP for these other diseases."

Baltimore's work is supported by funding from the National Institute of Allergy and Infectious Disease, the Bill and Melinda Gates Foundation, the Caltech-UCLA Joint Center for Translational Medicine, and a Caltech Translational Innovation Partnership Award.

Writer: Kimm Fesenmaier

Sensing Neuronal Activity With Light

For years, neuroscientists have been trying to develop tools that would allow them to clearly view the brain's circuitry in action—from the first moment a neuron fires to the resulting behavior in a whole organism. Researchers at Caltech have now developed one such tool, which provides a new way of mapping neural networks in a living organism.

The work—a collaboration between Viviana Gradinaru (BS '05), assistant professor of biology and biological engineering, and Frances Arnold, the Dick and Barbara Dickinson Professor of Chemical Engineering, Bioengineering and Biochemistry—was described in two separate papers published this month.

When a neuron is at rest, channels and pumps in the cell membrane maintain a cell-specific balance of positively and negatively charged ions inside and outside of the cell, resulting in a steady membrane voltage called the cell's resting potential. However, if a stimulus is detected—for example, a scent or a sound—ions flood through newly opened channels, causing a change in membrane voltage. This voltage change is often manifested as an action potential—the neuronal impulse that sets circuit activity into motion.
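As a concrete example of how these ion gradients set the resting potential, the short sketch below evaluates the Nernst equation for potassium using textbook concentrations. The numbers are illustrative, and the result, roughly -89 mV, is close to the potassium equilibrium potential usually quoted for mammalian neurons.

```python
# Nernst equation: the equilibrium potential that a single ion's
# concentration gradient would produce across the cell membrane.
import math

R = 8.314     # gas constant, J/(mol*K)
T = 310.0     # body temperature, K
F = 96485.0   # Faraday constant, C/mol

def nernst_mv(z, conc_out_mm, conc_in_mm):
    """Equilibrium potential in millivolts for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out_mm / conc_in_mm)

# Textbook mammalian values (mM): potassium is concentrated inside the cell.
e_k = nernst_mv(z=1, conc_out_mm=5.0, conc_in_mm=140.0)
print(f"E_K is about {e_k:.0f} mV")   # roughly -89 mV, near a typical resting potential
```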

The tool developed by Gradinaru and Arnold detects and serves as a marker of these voltage changes.

"Our overarching goal for this tool was to achieve sensing of neuronal activity with light rather than traditional electrophysiology, but this goal had a few prerequisites," Gradinaru says. "The sensor had to be fast, since action potentials happen in just milliseconds. Also, the sensor had to be very bright so that the signal could be detected with existing microscopy setups. And you need to be able to simultaneously study the multiple neurons that make up a neural network."

The researchers began by optimizing Archaerhodopsin (Arch), a light-sensitive protein originally found in archaea. In nature, opsins like Arch detect sunlight and drive the microbes' responses to light, allowing them to harvest its energy. However, researchers can also exploit the light-responsive qualities of opsins for a neuroscience method called optogenetics—in which an organism's neurons are genetically modified to express these microbial opsins. Then, by simply shining a light on the modified neurons, the researchers can control the activity of the cells as well as their associated behaviors in the organism.

Gradinaru had previously engineered Arch for better tolerance and performance in mammalian cells as a traditional optogenetic tool used to control an organism's behavior with light. When the modified neurons are exposed to green light, Arch acts as an inhibitor, controlling neuronal activity—and thus the associated behaviors—by preventing the neurons from firing.

However, Gradinaru and Arnold were most interested in another property of Arch: when exposed to red light, the protein acts as a voltage sensor, responding to changes in membrane voltages by producing a flash of light in the presence of an action potential. Although this property could in principle allow Arch to detect the activity of networks of neurons, the light signal marking this neuronal activity was often too dim to see.

To fix this problem, Arnold and her colleagues made the Arch protein brighter using directed evolution—a technique Arnold pioneered in the early 1990s. The researchers introduced mutations into the Arch gene, encoding millions of variants of the protein. They transferred the mutated genes into E. coli cells, which produced the mutant proteins, and then screened thousands of the resulting E. coli colonies for the intensity of their fluorescence. The genes for the brightest variants were isolated and subjected to further rounds of mutagenesis and screening until the bacteria produced proteins 20 times brighter than the original Arch.
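The mutate, express, screen, and select cycle described above maps naturally onto a simple algorithm. The sketch below is a deliberately abstract toy: "brightness" is just similarity to a hidden optimal bit string, standing in for the fluorescence measured in a real screen, and every name and number is an assumption made for illustration.

```python
# Toy directed-evolution loop mirroring the mutate/express/screen/select
# cycle described above. "Brightness" is similarity to a hidden optimal
# bit string, a stand-in for fluorescence measured in a real screen.
import random

random.seed(1)
GENE_LENGTH = 40
TARGET = [random.randint(0, 1) for _ in range(GENE_LENGTH)]  # unknown optimum

def brightness(gene):
    """Fitness: how many positions match the hidden optimal sequence."""
    return sum(g == t for g, t in zip(gene, TARGET))

def mutate(gene, rate=0.02):
    """Flip each position with small probability (random mutagenesis)."""
    return [1 - g if random.random() < rate else g for g in gene]

parent = [0] * GENE_LENGTH
for rnd in range(10):
    library = [mutate(parent) for _ in range(500)]   # build a variant library
    parent = max(library, key=brightness)            # screen; keep the brightest
    print(f"round {rnd + 1}: brightness {brightness(parent)}/{GENE_LENGTH}")
```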

A paper describing the process and the bright new protein variants that were created was published in the September 9 issue of the Proceedings of the National Academy of Sciences.

"This experiment demonstrates how rapidly these remarkable bacterial proteins can evolve in response to new demands. But even more exciting is what they can do in neurons, as Viviana discovered," says Arnold.

In a separate study led by Gradinaru's graduate students Nicholas Flytzanis and Claire Bedbrook, who is also advised by Arnold, the researchers genetically incorporated the new, brighter Arch variants into rodent neurons in culture to see which of these versions was most sensitive to voltage changes—and therefore would be the best at detecting action potentials. One variant, Archer1, was not only bright and sensitive enough to mark action potentials in mammalian neurons in real time, it could also be used to identify which neurons were synaptically connected—and communicating with one another—in a circuit.

The work is described in a study published on September 15 in the journal Nature Communications.

"What was interesting is that we would see two cells over here light up, but not this one over there—because the first two are synaptically connected," Gradinaru says. "This tool gave us a way to observe a network where the perturbation of one cell affects another."

However, sensing activity in a living organism and correlating that activity with behavior remained the biggest challenge. To accomplish this goal, Gradinaru's team worked with Paul Sternberg, the Thomas Hunt Morgan Professor of Biology, to test Archer1 as a sensor in a living organism—the tiny nematode worm C. elegans. "There are a few reasons why we used the worms here: they are powerful organisms for quick genetic engineering, and their tissues are nearly transparent, making it easy to see the fluorescent protein in a living animal," she says.

After incorporating Archer1 into neurons that were a part of the worm's olfactory system—a primary source of sensory information for C. elegans—the researchers exposed the worm to an odorant. When the odorant was present, a baseline fluorescent signal was seen, and when the odorant was removed, the researchers could see the circuit of neurons light up, meaning that these particular neurons are repressed in the presence of the stimulus and active in the absence of the stimulus. The experiment was the first time that an Arch variant had been used to observe an active circuit in a living organism.

Gradinaru next hopes to use tools like Archer1 to better understand the complex neuronal networks of mammals, using microbial opsins as sensing and actuating tools in optogenetically modified rodents.

"For the future work it's useful that this tool is bifunctional. Although Archer1 acts as a voltage sensor under red light, with green light, it's an inhibitor," she says. "And so now a long-term goal for our optogenetics experiments is to combine the tools with behavior-controlling properties and the tools with voltage-sensing properties. This would allow us to obtain all-optical access to neuronal circuits. But I think there is still a lot of work ahead."

One goal for the future, Gradinaru says, is to make Archer1 even brighter. Although the protein's fluorescence can be seen through the nearly transparent tissues of the nematode worm, opaque organs such as the mammalian brain are still a challenge. More work, she says, will need to be done before Archer1 could be used to detect voltage changes in the neurons of living, behaving mammals.

And that will require further collaborations with protein engineers and biochemists like Arnold.

"As neuroscientists we often encounter experimental barriers, which open the potential for new methods. We then collaborate to generate tools through chemistry or instrumentation, then we validate them and suggest optimizations, and it just keeps going," she says. "There are a few things that we'd like to be better, and through these many iterations and hard work it can happen."

The work published in both papers was supported with grants from the National Institutes of Health (NIH), including an NIH/National Institute of Neurological Disorders and Stroke New Innovator Award to Gradinaru; Beckman Institute funding for the BIONIC center; grants from the U.S. Army Research Office as well as a Caltech Biology Division Training Grant and startup funds from Caltech's President and Provost, and the Division of Biology and Biological Engineering; and other financial support from the Shurl and Kay Curci Foundation and the Life Sciences Research Foundation.

