Gravity Measurements Confirm Subsurface Ocean on Enceladus

In 2005, NASA's Cassini spacecraft sent pictures back to Earth depicting an icy Saturnian moon spewing water vapor and ice from fractures, known as "tiger stripes," in its frozen surface. It was big news that tiny Enceladus—a mere 500 kilometers in diameter—was such an active place. Since then, scientists have hypothesized that a large reservoir of water lies beneath that icy surface, possibly fueling the plumes. Now, using gravity measurements collected by Cassini, scientists have confirmed that Enceladus does in fact harbor a large subsurface ocean near its south pole, beneath those tiger stripes.

"For the first time, we have used a geophysical method to determine the internal structure of Enceladus, and the data suggest that indeed there is a large, possibly regional ocean about 50 kilometers below the surface of the south pole," says David Stevenson, the Marvin L. Goldberger Professor of Planetary Science at Caltech and an expert in studies of the interior of planetary bodies. "This then provides one possible story to explain why water is gushing out of these fractures we see at the south pole."

Stevenson is one of the authors on a paper that describes the finding in the current issue of the journal Science. Luciano Iess of Sapienza University of Rome is the paper's lead author.

During three flybys of Enceladus, between April 2010 and May 2012, the scientists collected extremely precise measurements of Cassini's trajectory by tracking the spacecraft's microwave carrier signal with NASA's Deep Space Network. The gravitational tug of a planetary body, such as Enceladus, alters a spacecraft's flight path ever so slightly. By measuring the effect of such deflections on the frequency of Cassini's signal as the orbiter traveled past Enceladus, the scientists were able to learn about the moon's gravitational field. This, in turn, revealed details about the distribution of mass within the moon.
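The principle behind this tracking can be sketched in a few lines: a gravity anomaly changes the spacecraft's line-of-sight velocity, which shifts the received carrier frequency in proportion to that velocity divided by the speed of light. The carrier frequency and velocity below are illustrative round numbers, not Cassini's actual mission values.

```python
# Illustrative sketch (not the Cassini pipeline): a change in line-of-sight
# velocity produces a Doppler shift of the microwave carrier, df/f = v/c.
C = 299_792_458.0    # speed of light, m/s
F_CARRIER = 8.4e9    # assumed X-band carrier frequency, Hz

def doppler_shift(v_los_m_per_s: float) -> float:
    """Carrier frequency shift (Hz) for a given line-of-sight velocity change."""
    return F_CARRIER * v_los_m_per_s / C

# A velocity change of a tenth of a millimeter per second corresponds to a
# frequency shift of only a few millihertz -- tiny, but trackable:
print(f"{doppler_shift(0.0001):.6f} Hz")
```

Accumulating such millihertz-level shifts over a whole flyby is what lets the gravitational field, and hence the mass distribution, be reconstructed.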

"This is really the only way to learn about internal structure from remote sensing," Stevenson says. In fact, more precise measurements would require the placement of seismometers on Enceladus's surface—something that is certainly not going to happen anytime soon.

The key feature in the gravity data was a so-called negative mass anomaly at Enceladus's south pole. Put simply, such an anomaly exists when there is less mass in a particular location than would be expected in the case of a uniform spherical body. Since there is a known depression in the surface of Enceladus's south pole, the scientists expected to find a negative mass anomaly. However, the anomaly was quite a bit smaller than would be predicted by the depression alone.

"So, you say, 'Aha! This is compensated at depth,'" Stevenson says.

Such compensation for mass is commonly found on planetary bodies, including on Earth. In some cases, the absence of material at the surface is compensated at depth by the presence of denser material. In other cases, the presence of extra material at the surface is compensated by the existence of less dense material at depth. In fact, when the first gravity measurements were made in India, people were struck by the fact that Mount Everest did not seem to produce much of an effect. Today we know that, like most mountains on Earth, Mount Everest is compensated by a low-density root that extends many tens of kilometers below the surface. In other words, the material protruding above the surface is compensated by a reduction of density at depth.

In the case of Enceladus, the opposite is true. The absence of material at the surface is compensated at depth by the presence of material that is denser than ice. "The only sensible candidate for that material is water," Stevenson says. "So if I have this depression at the south pole, and I have beneath the surface 50 kilometers down a layer of water or an ocean, that layer of water at depth is a positive mass anomaly. Together the two anomalies account for our measurements."
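Stevenson's accounting can be illustrated with a toy per-column mass budget; the depression depth, water-layer thickness, and densities below are round illustrative numbers, not the paper's fitted values.

```python
# Toy isostasy sketch: a surface depression removes ice (a negative mass
# anomaly), while a buried water layer replaces less-dense ice with water
# (a positive mass anomaly). All numbers are illustrative round values.
RHO_ICE = 920.0     # kg/m^3
RHO_WATER = 1000.0  # kg/m^3

def column_mass_anomaly(depression_m: float, water_layer_m: float) -> float:
    """Net mass anomaly per unit area (kg/m^2) for one vertical column."""
    deficit = -RHO_ICE * depression_m               # missing ice at the surface
    excess = (RHO_WATER - RHO_ICE) * water_layer_m  # water where ice would be
    return deficit + excess

# A ~1 km depression alone gives a large negative anomaly...
print(column_mass_anomaly(1000.0, 0.0))      # -920000.0 kg/m^2
# ...but a thick water layer at depth cancels much of it:
print(column_mass_anomaly(1000.0, 10000.0))  # -120000.0 kg/m^2
```

The measured anomaly being smaller than the depression alone predicts is exactly the signature of this partial cancellation.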

Although no one can say for certain whether the subsurface ocean supplies the water that has been seen spraying out of the tiger stripes on Enceladus's surface, the scientists say that it is possible. The suspicion is that the fractures—in some way that is not yet fully understood—connect down to a part of the moon that is being tidally heated by the globe's repeated flexing as it traces its eccentric orbit. "Presumably the tidal heating is also replenishing the ocean," Stevenson says, "so it is possible that some of that water is making its way up through the tiger stripes."

The paper is titled "The Gravity Field and Interior Structure of Enceladus." Additional coauthors are Marzia Parisi, Douglas Hemingway, Robert A. Jacobson, Jonathan I. Lunine, Francis Nimmo, John W. Armstrong, Sami W. Asmar, Maria Ducci, and Paolo Tortora. The work was supported by the Italian Space Agency and by NASA through the Cassini project. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency, and the Italian Space Agency. The Jet Propulsion Laboratory manages the mission for NASA's Science Mission Directorate.

Kimm Fesenmaier
Exclude from News Hub: 
News Type: 
Research News

Quantum Photon Properties Revealed in Another Particle—the Plasmon

For years, researchers have been interested in developing quantum computers—the theoretical next generation of technology that will outperform conventional computers. Instead of holding data in bits, the digital units used by computers today, quantum computers store information in units called "qubits." One approach for computing with qubits relies on the creation of two single photons that interfere with one another in a device called a waveguide. Results from a recent applied science study at Caltech support the idea that waveguides coupled with another quantum particle—the surface plasmon—could also become an important piece of the quantum computing puzzle.

The work was published in the print version of the journal Nature Photonics the week of March 31.

As their name suggests, surface plasmons exist on a surface—in this case the surface of a metal, at the point where the metal meets the air. Metals are conductive materials, which means that electrons within the metal are free to move around. On the surface of the metal, these free electrons move together, in a collective motion, creating waves of electrons. Plasmons—the quantum particles of these coordinated waves—are akin to photons, the quantum particles of light (and all other forms of electromagnetic radiation).

"If you imagine the surface of a metal is like a sea of electrons, then surface plasmons are the ripples or waves on this sea," says graduate student Jim Fakonas, first author on the study.

These waves are especially interesting because they oscillate at optical frequencies. Therefore, if you shine a light at the metal surface, you can launch one of these plasmon waves, pushing the ripples of electrons across the surface of the metal. Because these plasmons directly couple with light, researchers have used them in photovoltaic cells and other applications for solar energy. In the future, they may also hold promise for applications in quantum computing.

However, the plasmon's odd behavior, which falls somewhere between that of an electron and that of a photon, makes it difficult to characterize. "According to quantum theory, it should be possible to analyze these plasmonic waves using quantum mechanics"—the physics that governs the behavior of matter and light at the atomic and subatomic scale—"in the same way that we can use it to study electromagnetic waves, like light," Fakonas says. Until now, however, researchers lacked the experimental evidence to support this theory.

To find that evidence, Fakonas and his colleagues in the laboratory of Harry Atwater, Howard Hughes Professor of Applied Physics and Materials Science, looked at one particular phenomenon observed in photons—quantum interference—to see whether plasmons also exhibit this effect.

The applied scientists borrowed their experimental technique from a classic test of quantum interference in which two single, identical photons are launched at one another through opposite sides of a 50/50 beam splitter, a device that acts as an imperfect mirror, reflecting half of the light that reaches its surface while allowing the other half to pass through. If quantum interference is observed, both identical photons must emerge together on the same side of the beam splitter, their presence confirmed by photon detectors on both sides of the mirror.
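The logic of this classic test (the Hong-Ou-Mandel effect) can be shown with a few lines of amplitude arithmetic: for indistinguishable photons, the two paths leading to a coincidence cancel, so both photons always leave together.

```python
# Minimal amplitude calculation for two-photon interference at a lossless
# 50/50 beam splitter. One photon enters each input port; the two paths to
# a coincidence (one photon at each output) interfere destructively.
import math

r = 1 / math.sqrt(2)   # reflection amplitude
t = 1 / math.sqrt(2)   # transmission amplitude

# Coincidence paths for indistinguishable photons: both transmitted
# (amplitude t*t) and both reflected (amplitude -r*r; the minus sign is
# the beam splitter's reflection phase). The amplitudes cancel exactly.
coincidence_amp = t * t - r * r
p_coincidence = coincidence_amp ** 2
print(p_coincidence)       # 0.0 -- both photons always exit the same port

# For fully distinguishable photons the probabilities add instead:
p_distinguishable = (t * t) ** 2 + (r * r) ** 2
print(p_distinguishable)   # ~0.5 -- coincidences half the time
```

The vanishing coincidence rate is the signature the plasmon experiment looks for in its directional coupler.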

Since plasmons are not exactly like photons, they cannot be used in mirrored optical beam splitters. Therefore, to test for quantum interference in plasmons, Fakonas and his colleagues made two waveguide paths for the plasmons on the surface of a tiny silicon chip. Because plasmons are very lossy—that is, easily absorbed into materials that surround them—the path is kept short, contained within a 10-micron-square chip, which reduces absorption along the way.

The waveguides, which together form a device called a directional coupler, act as a functional equivalent to a 50/50 beam splitter, directing the paths of the two plasmons to interfere with one another. The plasmons can exit the waveguides at one of two output paths that are each observed by a detector; if both plasmons exit the directional coupler together—meaning that quantum interference is observed—the pair of plasmons will only set off one of the two detectors.

Indeed, the experiment confirmed that two indistinguishable photons can be converted into two indistinguishable surface plasmons that, like photons, display quantum interference.

This finding could be important for the development of quantum computing, says Atwater. "Remarkably, plasmons are coherent enough to exhibit quantum interference in waveguides," he says. "These plasmon waveguides can be integrated in compact chip-based devices and circuits, which may one day enable computation and measurement schemes based on quantum interference."

Before this experiment, some researchers wondered if the photon–metal interaction necessary to create a surface plasmon would prevent the plasmons from exhibiting quantum interference. "Our experiment shows this is not a concern," Fakonas says.

"We learned something new about the quantum mechanics of surface plasmons. The main thing is that we were able to validate the theoretical prediction; we showed that this type of interference is possible with plasmons, and we did a pretty clean measurement," he says. "The quantum interference displayed by plasmons appeared to be almost identical to that of photons, so I think it would be very difficult for someone to design a different structure that would improve upon this result."

The work was published in a paper titled "Two-plasmon quantum interference." In addition to Fakonas and Atwater, the other coauthors are Caltech undergraduate Hyunseok Lee and former undergraduate Yousif A. Kelaita (BS '12). The work was supported by funding from the Air Force Office of Scientific Research, and the waveguide was fabricated at the Kavli Nanoscience Institute at Caltech.

New Method Could Improve Ultrasound Imaging

Caltech chemical engineer shows hidden potential of gas vesicles

One day while casually reading a review article, Caltech chemical engineer Mikhail Shapiro came across a mention of gas vesicles—tiny gas-filled structures used by some photosynthetic microorganisms to control buoyancy. It was a light-bulb moment. Shapiro is always on the lookout for new ways to enhance imaging techniques such as ultrasound or MRI, and the natural nanostructures seemed to be just the ticket to improve ultrasound imaging agents.

Now Shapiro and his colleagues from UC Berkeley and the University of Toronto have shown that these gas vesicles, isolated from bacteria and from archaea (a separate lineage of single-celled organisms), can indeed be used for ultrasound imaging. The vesicles could one day help track and reveal the growth, migration, and activity of a variety of cell types—from neurons to tumor cells—using noninvasive ultrasound, one of the most widely used imaging modalities in biomedicine.

A paper describing the work appears as an advance online publication in the journal Nature Nanotechnology.

"People have struggled to make synthetic nanoscale imaging agents for ultrasound for many years," says Shapiro. "To me, it's quite amazing that we can borrow something that nature has evolved for a completely different purpose and use it for in vivo ultrasound imaging. It shows just how much nature has to offer us as engineers."

Ultrasound transmitters use sound waves to image biological tissue. When the emitted waves encounter something of a different density or stiffness, such as bone, some of the sound bounces back to the transducer. By measuring how long that round-trip journey takes, the system can determine how deep the object is and build up a picture of internal anatomy.
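The round-trip timing described above is simple to sketch; the 1540 m/s figure is the conventional soft-tissue average speed of sound, and the example echo time is an illustrative value.

```python
# Sketch of the pulse-echo timing behind ultrasound imaging: a reflector's
# depth is recovered from the round-trip travel time of its echo.
SPEED_IN_TISSUE = 1540.0   # m/s, conventional soft-tissue average

def echo_depth_m(round_trip_s: float) -> float:
    """Depth of a reflector from the two-way travel time of its echo."""
    return SPEED_IN_TISSUE * round_trip_s / 2   # halve: sound goes there and back

# An echo arriving 65 microseconds after the pulse comes from roughly 5 cm deep:
print(f"{echo_depth_m(65e-6) * 100:.1f} cm")
```

Sweeping the transducer and repeating this timing for every echo builds up the familiar anatomical image.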

But what if you want to image something other than anatomy? Maybe you are interested in blood flow and want to see whether there are any signs of atherosclerosis, for example, in blood vessels. To make ultrasound useful in such cases, you need to introduce an imaging label that has a different density or stiffness from bodily tissue. Currently, people use microbubbles—small synthetic bubbles of gas with a lipid or protein shell—to image the vasculature. These microbubbles are less dense and more elastic than the water-based tissues of the body. As a result, they resonate and scatter sound waves, allowing ultrasound to visualize the location of the microbubbles.

Microbubbles work just fine, unless you want to image something outside the bloodstream. Because of their diameter—small, but still on the order of microns—the bubbles are too large to get out of the bloodstream and into surrounding tissue. And as Shapiro says, "Many interesting targets—such as specific types of tumors, immune cells, stem cells, or neurons—are outside the bloodstream."

A number of research teams have tried, without success, to make microbubbles smaller. There is a fundamental physical reason for their failure: bubbles are held together by surface tension. As you make them smaller, the surface tension builds, and the pressure within the bubble becomes too high in comparison to the pressure outside. That amounts to an unstable bubble that is likely to lose its gas to its surroundings.
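This scaling argument can be made concrete with the Laplace pressure relation, which says the overpressure inside a bubble grows as the inverse of its radius; the surface-tension value below is an assumed, roughly water-like number.

```python
# The surface-tension argument made concrete: the Laplace overpressure
# inside a gas bubble is dP = 2 * gamma / r, so it blows up as the radius
# shrinks. Gamma here is an assumed, roughly water-like value.
GAMMA = 0.07   # surface tension, N/m (illustrative)

def laplace_overpressure_pa(radius_m: float) -> float:
    """Excess pressure (Pa) inside a spherical bubble of the given radius."""
    return 2 * GAMMA / radius_m

# A micron-scale microbubble vs. a 100-nanometer bubble:
print(laplace_overpressure_pa(1e-6))   # ~1.4e5 Pa, about 1.4 atm of overpressure
print(laplace_overpressure_pa(1e-7))   # ~1.4e6 Pa, about 14 atm -- unstable
```

Shrinking the radius tenfold raises the overpressure tenfold, which is why nanoscale gas bubbles shed their gas and collapse.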

The gas vesicles Shapiro's team worked with are at least an order of magnitude smaller than microbubbles—measuring just tens to hundreds of nanometers in diameter. And even though they look like bubbles, gas vesicles behave quite differently. Unlike bubbles, the vesicles do not trap gas molecules but allow them to pass freely in and out. Instead, they exclude water from their interior by having a hydrophobic inner surface. This results in a fundamentally stable nanoscale configuration.

 "As soon as I learned about them, I knew we had to try them," Shapiro says. 

The researchers first isolated gas vesicles from the bacterium Anabaena flos-aquae (Ana) and the archaeon Halobacterium NRC-1 (Halo), put them in an agarose gel, and used a home-built ultrasound system to image them. Vesicles from both sources produced clear ultrasound signals. Next, they injected the gas vesicles into mice and were able to follow the vesicles from the initial injection site to the liver, where blood flows to be detoxified. Shapiro and his colleagues were also able to easily attach biomolecules to the surface of the gas vesicles, suggesting that the gas vesicles could be used to label targets outside the bloodstream.

Shapiro's long-term goal is to take advantage of the fact that the gas vesicles are genetically encoded by engineering their properties at the DNA level and ultimately introducing the genes into mammalian cells to produce the structures themselves. For example, he would like to genetically label stem cells and use ultrasound to watch as they migrate to specific locations within the body and differentiate into tissues.

"Now that we have our hands on the genes that encode these gas vesicles, we can engineer them to optimize their properties, to see how far they can go," Shapiro says.

In their work, the researchers found differences in the gas vesicles produced by Ana and Halo. These variations could provide insight into how the vesicle design could be optimized for other purposes. For example, unlike the Ana vesicles, the Halo vesicles produced harmonic signals—meaning that they caused the original ultrasound wave to come back, as well as waves with doubled and tripled frequencies. Harmonics can be helpful in imaging because most tissue does not produce such signals; so when they show up, researchers know that they are more likely to be coming from the imaging agent than from the tissue.

Also, the gas vesicles from the two species collapsed, and thereby became invisible to ultrasound, with the application of different levels of pressure. Halo gas vesicles, which evolved in unpressurized cells, collapsed more easily than the vesicles from Ana, which maintain a pressurized cytoplasm. The researchers used this fact to distinguish the two different populations in a mixed sample. By applying a pressure pulse sufficient to collapse only the Halo vesicles, they were able to identify the remaining gas vesicles as having come from Ana.
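The pressure-pulse trick behaves like a filtering step, sketched below with invented threshold values (the real collapse pressures are reported in the paper).

```python
# A sketch of the collapse-pressure trick as a filtering step: apply a
# pressure pulse and keep only vesicles whose collapse threshold exceeds
# it. The threshold values here are invented for illustration.
vesicles = [
    {"source": "Halo", "collapse_kpa": 60},   # low threshold: collapses easily
    {"source": "Ana", "collapse_kpa": 600},   # pressurized cytoplasm: sturdier
    {"source": "Halo", "collapse_kpa": 60},
]

def survivors(population, pulse_kpa):
    """Vesicles still intact (ultrasound-visible) after a pressure pulse."""
    return [v for v in population if v["collapse_kpa"] > pulse_kpa]

# A pulse between the two thresholds erases the Halo signal only,
# identifying everything that remains as Ana:
remaining = survivors(vesicles, 200)
print([v["source"] for v in remaining])   # ['Ana']
```

Engineered vesicles with intermediate collapse pressures would add more distinguishable "channels" to this same scheme.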

Shapiro notes that there is a substantial difference between the critical collapse pressures of Halo and Ana. "There's quite a good possibility that, as we start to genetically engineer these nanostructures, we would be able to make new ones with intermediate collapse pressures," he says. "That would allow you to image a greater number of cells at the same time. This sort of multiplexing is done all the time in fluorescent imaging, and now we want to do it with ultrasound."

Along with Shapiro, coauthors on the paper, "Biogenic gas nanostructures as ultrasonic molecular reporters," are Patrick Goodwill, Arkosnato Neogy, David Schaffer, and Steven Conolly of UC Berkeley, and Melissa Yin and F. Stuart Foster of the University of Toronto. The work was supported by funding from the Miller Research Institute, the Burroughs Wellcome Fund's Career Award at the Scientific Interface, the California Institute of Regenerative Medicine, the National Institutes of Health, the Canadian Institutes of Health Research, and the Terry Fox Foundation.

Kimm Fesenmaier

BICEP2 Discovers First Direct Evidence of Inflation and Primordial Gravitational Waves

Astronomers announced today that they have acquired the first direct evidence that gravitational waves rippled through our infant universe during an explosive period of growth called inflation. This is the strongest confirmation yet of cosmic inflation theories, which say the universe expanded by 100 trillion trillion times in less than the blink of an eye.

"The implications for this detection stagger the mind," says Jamie Bock, professor of physics at Caltech, laboratory senior research scientist at the Jet Propulsion Laboratory (JPL) and project co-leader. "We are measuring a signal that comes from the dawn of time."

Our universe burst into existence in an event known as the Big Bang 13.8 billion years ago. Fractions of a second later, space itself ripped apart, expanding exponentially in an episode known as inflation. Telltale signs of this early chapter in our universe's history are imprinted in the skies in a relic glow called the cosmic microwave background. Tiny fluctuations in this afterglow provide clues to conditions in the early universe.

Small quantum fluctuations were amplified to enormous sizes by the inflationary expansion of the universe. This process created density waves that show up as small differences in temperature across the sky; the denser regions eventually condensed into galaxies and clusters of galaxies. But as theorized, inflation should also produce gravitational waves, ripples in space-time propagating throughout the universe. Observations from the BICEP2 telescope at the South Pole now demonstrate that gravitational waves were created in abundance during the early inflation of the universe.

On Earth, light can become polarized by scattering off surfaces, such as a car or pond, causing the glare that polarized sunglasses are designed to reduce. In space, the radiation of the cosmic microwave background, influenced by the squeezing of gravitational waves, was scattered by electrons, and became polarized, too.

Because gravitational waves have a "handedness"—they can have both left- and right-handed polarizations—they leave behind a characteristic pattern of polarization on the cosmic microwave background known as B-mode polarization. "The swirly B-mode pattern of polarization is a unique signature of gravitational waves," says collaboration co-leader Chao-Lin Kuo of Stanford University and the SLAC National Accelerator Laboratory. "This is the first direct image of gravitational waves across the primordial sky."

In order to detect this B-mode polarization, the team examined spatial scales on the sky spanning about one to five degrees (two to 10 times the width of the full moon), which allowed them to gather photons from a broad swath of the cosmic microwave background in an area of the sky where we can see clearly through our own Milky Way galaxy. To do this, the team traveled to the South Pole to take advantage of the cold, dry, stable air. "The South Pole is the closest you can get to space and still be on the ground," says John Kovac of the Harvard-Smithsonian Center for Astrophysics, project co-leader and BICEP2 principal investigator. "It's one of the driest and clearest locations on Earth, perfect for observing the faint microwaves from the Big Bang."

The team also invented completely new technology for making these measurements. "Our approach was like building a camera on a printed circuit board," says Bock. "The circuit board included an antenna to focus and filter polarized light, a micro-machined detector that turns the radiation into heat, and a superconducting thermometer to measure this heat." The detector arrays were made at JPL's Microdevices Laboratory.

The BICEP2 team was surprised to detect a B-mode polarization signal considerably stronger than many cosmologists expected. The team analyzed the data for more than three years in an effort to rule out any errors. They also considered whether dust in our galaxy could produce the observed pattern, but the data suggest this is highly unlikely. "This has been like looking for a needle in a haystack, but instead we found a crowbar," says project co-leader Clem Pryke, of the University of Minnesota.

The prediction that the cosmic microwave background would show a B-mode polarization from gravitational waves produced during the inflationary period was made in 1996 by several theoretical physicists including Marc Kamionkowski, who was a member of the Caltech faculty from 1999 to 2011, and is now on the faculty at Johns Hopkins University. Kamionkowski says this discovery "is powerful evidence for inflation. I'd call it a smoking gun. We've now learned that gravitational waves are abundant, and can learn more about the process that powered inflation. This is a remarkable advance in cosmology."

The BICEP project originated at Caltech in 2002 as a collaboration between Bock and the late physicist Andrew Lange.

BICEP2 is the second stage of a coordinated program with the BICEP and Keck Array experiments, which has a co-PI structure. The four principal investigators are Bock, Kovac, Kuo, and Pryke. All have worked together on the present result, along with talented teams of students and scientists. Other major collaborating institutions for BICEP2 include the University of California at San Diego, the University of British Columbia, the National Institute of Standards and Technology, the University of Toronto, Cardiff University, and Commissariat à l'energie atomique.

BICEP2 is funded by the National Science Foundation. NSF also runs the South Pole Station where BICEP2 and the other telescopes used in this work are located. The W. M. Keck Foundation also contributed major funding for the construction of the team's telescopes. NASA, JPL, and the Gordon and Betty Moore Foundation generously supported the development of the ultrasensitive detector arrays that made these measurements possible.

There are two papers, published March 17, 2014, reporting these results: "BICEP2 I: Detection of B-mode polarization at degree angular scales" and "BICEP2 II: Experiment and Three-Year Data Set."

The journal papers, along with additional technical details, can be found on the BICEP2 release website.

An Equation to Describe the Competition Between Genes

Caltech researchers develop and verify predictive mathematical model

In biology, scientists typically conduct experiments first, and then develop mathematical or computer models afterward to show how the collected data fit with theory. In his work, Rob Phillips flips that practice on its head. The Caltech biophysicist tackles questions in cellular biology as a physicist would—by first formulating a model that can make predictions and then testing those predictions. Using this strategy, Phillips and his group have recently developed a mathematical model that accounts for the way genes compete with each other for the proteins that regulate their expression.

A paper describing the work appears in the current issue of the journal Cell. The lead authors on the paper are Robert Brewster and Franz Weinert, postdoctoral scholars in Phillips's lab.

"The thing that makes this study really interesting is that we did our calculations before we ever did any experiments," says Phillips, the Fred and Nancy Morris Professor of Biophysics and Biology at Caltech and principal investigator on the study. "Just as it is amazing that we have equations for the orbits of planets around stars, I think it's amazing that we are beginning to be able to write equations that predict the complex behaviors of a living cell."

A number of research teams are interested in modeling gene expression—accurately describing all the processes involved in going from a gene to the protein or other product encoded by that DNA. For simplicity's sake, though, most such models do not take competition into consideration. Instead, they assume that each gene has plenty of whatever it needs in order to be expressed—including the regulatory proteins called transcription factors. However, Phillips points out, there often is not enough transcription factor around to regulate all of the genes in a cell.  For one thing, multiple copies of a gene can exist within the same cell. For example, in the case of genes expressed on circular pieces of DNA known as plasmids, it is common to find hundreds of copies in a single cell. In addition, many transcription factors are capable of binding to a variety of different genes. So, as in a game of musical chairs, the genes must compete for a scarce resource—the transcription factors.

Phillips and his colleagues wanted to create a more realistic model by adding in this competition. To do so, they looked at how the level of gene expression varies depending on the amount of transcription factor present in the cell. To limit complexity, they worked with a relatively simple case—a gene in the bacterium E. coli that has just one binding site where a transcription factor can attach. In this case, when the transcription factor binds to the gene, it actually prevents the gene from making its product—it represses expression.

To build their mathematical model, the researchers first considered all the various ways in which the available transcription factor can interact with the copies of this particular gene that are present in the cell, and then developed a statistical theory to represent the situation.

"Imagine that you go into an auditorium, and you know there are a certain number of seats and a certain number of people. There are many different seating arrangements that could accommodate all of those people," Phillips says. "If you wanted to, you could systematically enumerate all of those arrangements and figure out things about the statistics—how often two people will be sitting next to each other if it's purely random, and so on. That's basically what we did with these genes and transcription factors."

Using the resulting model, the researchers were able to make predictions about what would happen if the level of transcription factor and the number of gene copies were independently varied so that the proteins were either in high demand or there were plenty to go around, for example.

With predictions in hand, the researchers next conducted experiments while looking at E. coli cells under a microscope. To begin, they introduced the genes on plasmids into the cells. They needed to track exactly how much transcription factor was present and the rate of gene expression in the presence of that level of transcription factor. Using fluorescent proteins, they were able to follow these changes in the cell over time: the transcription factor lit up red, while the protein expressed by the gene without the transcription factor attached glowed green. Using video fluorescence microscopy and a method, developed in the lab of Caltech biologist Michael Elowitz, for determining the brightness of a single molecule, the researchers were able to count the level of transcription factor present and the rate at which the green protein was produced as the cells grew and divided.

The team found that the experimental data matched the predictions they had made extremely well. "As expected, we find that there are two interesting regimes," says Brewster. "One is that there's just not enough protein to fill the demand. Therefore, all copies of the gene cannot be repressed simultaneously, and some portion will glow green all the time. In that case, there are correlations between the various copies of the genes. They know, in some sense, that the others exist. The second case is that there is a ton of this transcription factor around; in that case, the genes act almost exactly as if the other genes aren't there—there is enough protein to shut off all of the genes simultaneously."

The data fit so well with their model, in fact, that Phillips and his colleagues were able to use plots of the data to predict how many copies of the plasmid would be found in a cell as it grew and multiplied at various points throughout the cell cycle.

"Many times in science you start out trying to understand something, and then you get so good at understanding it that you are able to use it as a tool to measure something else," says Phillips. "Our model has become a tool for measuring the dynamics of how plasmids multiply. And the dynamics of how they multiply isn't what we would have naively expected. That's a little hint that we're pursuing right now."

Overall, he says, "this shows that the assertion that biology is too complicated to be predictive might be overly pessimistic, at least in the context of bacteria."

The work described in the paper, "The Transcription Factor Titration Effect Dictates Level of Gene Expression," was supported by the National Institutes of Health and by the Fondation Pierre-Gilles de Gennes. Additional coauthors are Mattias Rydenfelt, a graduate student in physics at Caltech; Hernan Garcia, a former member of Phillips's lab who is now at Princeton University; and Dan Song, a graduate student at Harvard Medical School.

Kimm Fesenmaier

Research Update: Battling Infection With Microbes

A relationship between gut bacteria and blood cell development helps the immune system fight infection, Caltech researchers say.

The human relationship with microbial life is complicated. At almost any supermarket, you can pick up both antibacterial soap and probiotic yogurt during the same shopping trip. Although there are types of bacteria that can make us sick, Caltech professor of biology and biological engineering Sarkis Mazmanian and his team are most interested in the thousands of other bacteria—many already living inside our bodies—that actually keep us healthy. His past work in mice has shown that restoring populations of beneficial bacteria can help alleviate the symptoms of inflammatory bowel disease, multiple sclerosis, and even autism. Now, he and his team have found that these good bugs might also prepare the immune cells in our blood to fight infections from harmful bacteria.

In the recent study, published on March 12 in the journal Cell Host & Microbe, the researchers found that beneficial gut bacteria were necessary for the development of innate immune cells—specialized types of white blood cells that serve as the body's first line of defense against invading pathogens.

Immune cells circulate in the blood, and reserve stores are also kept in the spleen and in the bone marrow. When the researchers compared the immune cell populations in these areas in so-called germ-free mice, born without gut bacteria, and in healthy mice with a normal population of gut microbes, they found that the germ-free mice had fewer immune cells—specifically macrophages, monocytes, and neutrophils—than the healthy mice.

Germ-free mice also had fewer granulocyte and monocyte progenitor cells, stemlike cells that can eventually differentiate into a few types of mature immune cells. And the innate immune cells that were in the spleen were defective—never fully reaching the proportions found in healthy mice with a diverse population of gut microbes.

"It's interesting to see that these microbes are having an immune effect beyond where they live in the gut," says Arya Khosravi, a graduate student in Mazmanian's lab, and first author on the recent study. "They're affecting places like your blood, spleen, and bone marrow—places where there shouldn't be any bacteria."

Khosravi and his colleagues next wanted to see whether the reduced immune cell counts in the blood would leave the germ-free mice less able to fight off an infection by the harmful bacterium Listeria monocytogenes—a well-studied human pathogen commonly used to examine immune responses in mice. While the healthy mice were able to bounce back after being injected with Listeria, the infection was fatal to the germ-free mice. When gut microbes that would normally be present were introduced into germ-free mice, the immune cell population increased and the mice survived the Listeria infection.

The researchers also gave injections of Listeria to healthy mice after those mice were dosed with broad-spectrum antibiotics that killed off both harmful and beneficial bacteria. Interestingly, these mice also had trouble fighting the Listeria infection. "We didn't look at clinical data in this study, but we hypothesize that this might also happen in the clinic," says Mazmanian. "For example, when patients are put on antibiotics for something like hip surgery, are you damaging their gut microbe population and making them more susceptible to an infection that had nothing to do with their hip surgery?"

More importantly, the research also suggests that a healthy population of gut microbes can actually provide a preventative alternative to antibiotics, Khosravi says. "Today there are more and more antibiotic resistant superbugs out there, and we're running out of ways to treat them. Limiting our susceptibility to infection could be a good protective strategy."

These results appear in a paper titled "Gut Microbiota Promote Hematopoiesis to Control Bacterial Infection."


Bending the Light with a Tiny Chip

A silicon chip developed by Caltech researchers acts as a lens-free projector—and could one day end up in your cell phone.

Imagine that you are in a meeting with coworkers or at a gathering of friends. You pull out your cell phone to show a presentation or a video on YouTube. But you don't use the tiny screen; your phone projects a bright, clear image onto a wall or a big screen. Such a technology may be on its way, thanks to a new light-bending silicon chip developed by researchers at Caltech.

The chip was developed by Ali Hajimiri, Thomas G. Myers Professor of Electrical Engineering, and researchers in his laboratory. The results were presented at the Optical Fiber Communication (OFC) conference in San Francisco on March 10.

Traditional projectors—like those used to project a film or classroom lecture notes—pass a beam of light through a tiny image, using lenses to map each point of the small picture to corresponding, yet expanded, points on a large screen. The Caltech chip eliminates the need for bulky and expensive lenses and bulbs and instead uses a so-called integrated optical phased array (OPA) to project the image electronically with only a single laser diode as light source and no mechanically moving parts.

Hajimiri and his colleagues were able to bypass traditional optics by manipulating the coherence of light—a property that allows the researchers to "bend" the light waves on the surface of the chip without lenses or any mechanical movement. If two coherent waves are exactly in phase—the peaks and troughs of one wave aligned with those of the other—they combine into a single beam with twice the amplitude and four times the energy of either initial wave, moving in the direction of propagation.

"By changing the relative timing of the waves, you can change the direction of the light beam," says Hajimiri. For example, if 10 people kneeling in line by a swimming pool slap the water at the exact same instant, they will make one big wave that travels directly away from them. But if the 10 separate slaps are staggered—each person hitting the water half a second after the last—there will still be one big, combined wave, but it will bend to travel at an angle, he says.
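
The staggered-slap picture translates directly into a phased-array calculation. The sketch below uses illustrative numbers (an eight-element array with half-wavelength spacing), not the parameters of the Caltech chip: a fixed phase step applied across the elements steers the combined beam to the angle where geometry cancels the stagger.

```python
import cmath, math

n_elements = 8
wavelength = 1.55e-6      # near-infrared, in meters (illustrative)
spacing = wavelength / 2  # element pitch
phase_step = math.pi / 4  # extra phase applied to each successive element
k = 2 * math.pi / wavelength

def intensity(theta):
    # Coherent sum of every element's far-field contribution at angle theta.
    field = sum(cmath.exp(1j * n * (k * spacing * math.sin(theta) - phase_step))
                for n in range(n_elements))
    return abs(field) ** 2

# Scan the far field: the beam peaks where sin(theta) = phase_step / (k * spacing).
angles = [i * math.pi / 20000 - math.pi / 2 for i in range(20001)]
steered = max(angles, key=intensity)
predicted = math.asin(phase_step / (k * spacing))  # about 14.5 degrees
```

Changing `phase_step` electronically moves the peak, which is the steering mechanism in miniature.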

Using a series of pipes for the light—called phase shifters—the OPA chip similarly slows down or speeds up the timing of the waves, thus controlling the direction of the light beam. To form an image, electronic data from a computer are converted into multiple electrical currents; by applying stronger or weaker currents to the light within the phase shifter, the number of electrons within each light path changes—which, in turn, changes the timing of the light wave in that path. The timed light waves are then delivered to tiny array elements within a grid on the chip. The light is then projected from each array in the grid, the individual array beams combining coherently in the air to form a single light beam and a spot on the screen.

As the electronic signal rapidly steers the beam left, right, up, and down, the light acts as a very fast pen, drawing an image made of light on the projection surface. Because the direction of the beam is controlled electronically rather than mechanically, it can be redirected very quickly. And because the light redraws the image many times per second, the eye perceives a single image instead of a moving light beam, says Hajimiri.

"The new thing about our work is really that we can do this on a tiny, one-millimeter-square silicon chip, and the fact that we can do it very rapidly—rapidly enough to form images, since we phase-shift electronically in two dimensions," says Behrooz Abiri, a graduate student in Hajimiri's group and a coauthor on the paper. So far, the images Hajimiri and his team can project with the current version of the chip are somewhat simple—a triangle, a smiley face, or single letters, for example. However, the researchers are currently experimenting with larger chips that include more light-delivering array elements that—like using a larger lens on a camera—can improve the resolution and increase the complexity of the projected images.

In their recent experiments, Hajimiri and his colleagues have used the silicon chip to project images in infrared light, but additional work with different types of semiconductors will also allow the researchers to expand the tiny projector's capabilities into the visible spectrum. "Right now we are using silicon technology, which works better with infrared light. If you want to project visible light, you can take the exact same architecture and do it in what's called compound semiconductor III-V technology," says Firooz Aflatouni, another coauthor on the paper, who in January finished his two-year postdoctoral appointment at Caltech and joined the University of Pennsylvania as an assistant professor. "Silicon is good because it can be easily integrated into electronics, but these other compound semiconductors could be used to do the same thing."

"In the future, this can be incorporated into a phone, and since there is no need for a lens, you can have a phone that acts as a projector all by itself," Hajimiri says. Although the chip could easily be incorporated into a cell phone, he points out that a tiny projection device has many other potential applications—including light-based radar systems (called "LIDAR"), which are used in positioning, robotics, geographical measurements, and mapmaking. Such equipment already exists, but current LIDAR technology requires complex, bulky, and expensive equipment—equipment that could be streamlined and simplified to a single chip at a much lower cost.

"But I don't want to limit the device to just a few purposes. The beauty of this thing is that these chips are small and can be made at a very low cost—and this opens up lots of interesting possibilities," he says.

These results were described in a presentation titled "Electronic Two-Dimensional Beam Steering for Integrated Optical Phased Arrays." Along with Hajimiri, Abiri, and Aflatouni, Caltech senior Angad Rekhi also contributed to the work. The study was funded by grants from the Caltech Innovation Initiative, and the Information Science and Technology initiative at Caltech.


Building Artificial Cells Will Be a Noisy Business

Engineers like to make things that work. And if one wants to make something work using nanoscale components—the size of proteins, antibodies, and viruses—mimicking the behavior of cells is a good place to start since cells carry an enormous amount of information in a very tiny packet. As Erik Winfree, professor of computer science, computation and neural systems, and bioengineering, explains, "I tend to think of cells as really small robots. Biology has programmed natural cells, but now engineers are starting to think about how we can program artificial cells. We want to program something about a micron in size, finer than the dimension of a human hair, that can interact with its chemical environment and carry out the spectrum of tasks that biological things do, but according to our instructions."

Getting tiny things to behave is, however, a daunting task. A central problem bioengineers face when working at this scale is that when biochemical circuits, such as the one Winfree has designed, are restricted to an extremely small volume, they may cease to function as expected, even though the circuit works well in a regular test tube. Smaller populations of molecules simply do not behave the same as larger populations of the same molecules, as a recent paper in Nature Chemistry demonstrates.

Winfree and his coauthors began their investigation of the effect of small sample size on biochemical processes with a biochemical oscillator designed in Winfree's lab at Caltech. This oscillator is a solution composed of small synthetic DNA molecules that are activated by RNA transcripts and enzymes. When the DNA molecules are activated by the other components in the solution, a biological circuit is created. This circuit fluoresces in a rhythmic pulse for approximately 15 hours until its chemical reactions slow and eventually stop.

The researchers then "compartmentalized" the oscillator by reducing it from one large system in a test tube to many tiny oscillators. They used an approach developed by Maximilian Weitz and colleagues at the Technical University of Munich and by former Caltech graduate student Elisa Franco, now an assistant professor of mechanical engineering at UC Riverside: an aqueous solution of the DNA, RNA, and enzymes that make up the biochemical oscillator was mixed with oil and shaken until small portions of the solution, each containing a tiny oscillator, were isolated within droplets surrounded by oil.

"After the oil is added and shaken, the mixture turns into a cream, called an emulsion, that looks somewhat like a light mayonnaise," says Winfree. "We then take this cream, pour it on a glass slide and spread it out, and observe the patterns of pulsing fluorescence in each droplet under a microscope." 

When a large sample of the solution is active, it fluoresces in regular pulses. The largest droplets behave like the bulk solution, fluorescing mostly in phase with one another, as though separate but still acting in concert. The smaller droplets, however, behaved far less consistently, and their pulses of fluorescence quickly drifted out of phase with those of the larger droplets.

Researchers had expected that the various droplets, especially the smaller ones, would behave differently from one another due to an effect known as stochastic reaction dynamics. The specific reactions that make up a biochemical circuit may happen at slightly different times in different parts of a solution. If the solution sample is large enough, this effect is averaged out, but if the sample is very small, these slight differences in the timing of reactions will be amplified. The sensitivity to droplet size can be even more significant depending on the nature of the reactions. As Winfree explains, "If you have two competing reactions—say x could get converted to y, or x could get converted to z, each at the same rate—then if you have a test tube–sized sample, you will end up with something that is half y and half z. But if you only have four molecules in a droplet, then perhaps they will all convert to y, and that's that: there's no z to be found."
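
Winfree's x-to-y-versus-z example is easy to check by simulation. This is a generic Monte Carlo sketch of the statistical point, not the oscillator chemistry from the paper:

```python
import random

rng = random.Random(0)

def convert(n_molecules):
    # Each molecule of x independently becomes y or z with equal probability.
    y = sum(rng.random() < 0.5 for _ in range(n_molecules))
    return y, n_molecules - y

# Test-tube-sized population: the half-and-half split is essentially exact.
y, z = convert(1_000_000)
bulk_fraction = y / 1_000_000   # very close to 0.5

# Four-molecule "droplets": an all-y or all-z outcome occurs in
# 2 * (1/2)**4 = 1/8 of them.
trials = 100_000
all_same = sum(convert(4)[0] in (0, 4) for _ in range(trials)) / trials
```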

In their experiments on the biochemical oscillator, however, Winfree and his colleagues discovered that this source of noise—stochastic reaction dynamics—was relatively small compared to a source of noise that they did not anticipate: partitioning effects. In other words, the molecules that were captured in each droplet were not exactly the same. Some droplets initially had more molecules, while others had fewer; also, the ratio between the various elements was different in different droplets. So even before the differential timing of reactions could create stochastic dynamics, these tiny populations of molecules started out with dissimilar features. The differences between them were then further amplified as the biochemical reactions proceeded.
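
Partitioning noise itself can be pictured with a generic sketch (illustrative numbers, not the experiment's): even from a perfectly mixed bulk solution, randomly assigning molecules to droplets produces Poisson-like droplet-to-droplet spread, and the relative spread grows as each droplet holds fewer molecules.

```python
import random
from collections import Counter

rng = random.Random(1)
n_molecules, n_droplets = 100_000, 10_000  # mean of 10 molecules per droplet

# Each molecule independently lands in a uniformly random droplet.
counts = Counter(rng.randrange(n_droplets) for _ in range(n_molecules))
occupancy = [counts.get(d, 0) for d in range(n_droplets)]

mean = sum(occupancy) / n_droplets
variance = sum((c - mean) ** 2 for c in occupancy) / n_droplets
relative_spread = variance ** 0.5 / mean  # near 1/sqrt(10), roughly 32 percent
```

Halving the mean occupancy raises the relative spread by a factor of sqrt(2), so the smallest droplets start out the most dissimilar.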

"To make an artificial cell work," says Winfree, "you need to know what your sources of noise are. The dominant thought was that the noise you're confronted with when you're engineering nanometer-scale components has to do with randomness of chemical reactions at that scale. But this experience has taught us that these stochastic reaction dynamics are really the next-level challenge. To get to that next level, first we have to learn how to deal with partitioning noise."

For Winfree, this is an exciting challenge: "When I program my computer, I can think entirely in terms of deterministic processes. But when I try to engineer what is essentially a program at the molecular scale, I have to think in terms of probabilities and stochastic (random) processes. This is inherently more difficult, but I like challenges. And if we are ever to succeed in creating artificial cells, these are the sorts of problems we need to address."

Coauthors of the paper, "Diversity in the dynamical behaviour of a compartmentalized programmable biochemical oscillator," include Maximilian Weitz, Korbinian Kapsner, and Friedrich C. Simmel of the Technical University of Munich; Jongmin Kim of Caltech; and Elisa Franco of UC Riverside. The project was funded by the National Science Foundation, UC Riverside, the European Commission, the German Research Foundation Cluster of Excellence Nanosystems Initiative Munich, and the Elite Network of Bavaria.

Cynthia Eller

Detection of Water Vapor in the Atmosphere of a Hot Jupiter

Caltech researchers develop a new technique to find water vapor on extrasolar planets

Although liquid water covers a majority of Earth's surface, scientists are still searching for planets outside of our solar system that contain water. Researchers at Caltech and several other institutions have used a new technique to analyze the gaseous atmospheres of such extrasolar planets and have made the first detection of water in the atmosphere of the Jupiter-mass planet orbiting the nearby star tau Boötis. With further development and more sensitive instruments, this technique could help researchers learn about how many planets with water—like Earth—exist within our galaxy.

The new results are described in the February 24 online version of The Astrophysical Journal Letters.

Scientists have previously detected water vapor on a handful of other planets, but these detections could only take place under very specific circumstances, says graduate student Alexandra Lockwood, the first author of the study. "When a planet transits—or passes in orbit in front of—its host star, we can use information from this event to detect water vapor and other atmospheric compounds," she says. "Alternatively, if the planet is sufficiently far away from its host star, we can also learn about a planet's atmosphere by imaging it."

However, a significant portion of the population of extrasolar planets does not fit either of these criteria, and until now there was no way to obtain information about the atmospheres of those planets. Looking to resolve this problem, Lockwood and her adviser Geoffrey Blake, professor of cosmochemistry and planetary sciences and professor of chemistry, applied a novel technique for finding water in a planetary atmosphere. Other researchers had previously used similar approaches to detect carbon monoxide in tau Boötis b.

The method builds on the radial velocity (RV) technique for discovering non-transiting exoplanets, commonly applied in the visible region of the spectrum, to which our eyes are sensitive. Using the Doppler effect, RV detection traditionally infers the motion of a star from the gravitational pull of a companion planet: the star moves opposite to the orbital motion of the planet, and its spectral features shift in wavelength. A larger planet, or a planet closer to its host star, produces a larger shift.
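
The size of the effect shows why such precise tracking of the spectrum is needed. With hypothetical numbers (not the tau Boötis measurements), a stellar wobble of 100 meters per second shifts a near-infrared line by only a few parts in ten million:

```python
# Non-relativistic Doppler shift: delta_lambda / lambda = v / c.
c = 299_792_458.0        # speed of light, m/s
wavelength_nm = 2300.0   # a near-infrared wavelength (illustrative)
v_star = 100.0           # hypothetical stellar reflex velocity, m/s

delta_nm = wavelength_nm * v_star / c   # about 0.0008 nm
```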

Lockwood, Blake, and their colleagues expanded the RV technique into the infrared to determine the orbit of tau Boötis b around its star, and added further analysis of the light shifts via spectroscopy. Since every compound emits and absorbs light at its own characteristic set of wavelengths, this unique spectral signature allows the researchers to identify the molecules that make up the planet's atmosphere. Using data on tau Boötis b from the Near Infrared Echelle Spectrograph (NIRSPEC) at the W. M. Keck Observatory in Hawaii, the researchers were able to compare the molecular signature of water to the light spectrum emitted by the planet, confirming that the atmosphere did indeed include water vapor.

"The information we get from the spectrograph is like listening to an orchestra performance; you hear all of the music together, but if you listen carefully, you can pick out a trumpet or a violin or a cello, and you know that those instruments are present," she says. "With the telescope, you see all of the light together, but the spectrograph allows you to pick out different pieces; like this wavelength of light means that there is sodium, or this one means that there's water."
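
In code, "picking out an instrument" amounts to template matching: cross-correlate the observed spectrum against a molecule's known pattern of lines and look for a peak at some Doppler shift. A toy version with made-up line positions, not NIRSPEC data:

```python
import random

rng = random.Random(2)
n = 500

# Hypothetical molecular template: spikes at this molecule's line positions.
template = [0.0] * n
for line in (50, 120, 260, 400):
    template[line] = 1.0

# "Observed" spectrum: the template Doppler-shifted by 7 pixels, plus noise.
true_shift = 7
observed = [template[(i - true_shift) % n] + rng.gauss(0, 0.2) for i in range(n)]

def correlation(shift):
    return sum(observed[(i + shift) % n] * template[i] for i in range(n))

# The cross-correlation peaks at the true shift, revealing the molecule.
best = max(range(-20, 21), key=correlation)
```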

In addition to using the spectrographic technique to study the planet's atmospheric composition, the method also provides a way for researchers to analyze the mass of planets. "They're actually two separate findings, but they're both very exciting," says Lockwood. "When you're doing calculations to look for the atmospheric signature—which tells you that there's water present—you also determine the 3-D motion of the star and the planet in the system. With this information, if you also know the mass of the star, you can determine the mass of the planet," she says.

Previous RV methods for measuring a planet's mass could only determine the planet's indicative mass—an estimation of its minimum mass, which might be much less than its actual mass. The new technique provides a way to measure the true mass of a planet, since light from both the star and the planet is detected, which is critical for understanding how planets and planetary systems form and evolve.

Although the technique promises to augment how planetary scientists analyze the properties of extrasolar planets, it has limitations, the researchers say. For example, the technique is presently limited to so-called "hot Jupiter" gas giant planets like tau Boötis b—those that are large and orbit very close to their host star.

"The technique is limited by the light-collecting power and wavelength range of the telescope, and even with the incredible collecting area of the Keck Observatory mirror on the high, dry summit of Mauna Kea we can basically only analyze hot planets that are orbiting bright stars, but that could be expanded in the future as telescopes and infrared spectrographs improve," Lockwood says. In the future, in addition to analyzing cooler planets and dimmer stars, the researchers plan to continue looking for and analyzing the abundance of other molecules that might be present in the atmosphere of tau Boötis b.

"While the current state of the technique cannot detect earthlike planets around stars like the Sun, with Keck it should soon be possible to study the atmospheres of the so-called 'super-Earth' planets being discovered around nearby low-mass stars, many of which do not transit," Blake says. "Future telescopes such as the James Webb Space Telescope and the Thirty Meter Telescope (TMT) will enable us to examine much cooler planets that are more distant from their host stars and where liquid water is more likely to exist."

The findings appear in The Astrophysical Journal Letters in a paper titled "Near-IR Direct Detection of Water Vapor in tau Boötis b." Other coauthors include collaborators from Pennsylvania State University, the Naval Research Laboratory, the University of Arizona, and the Harvard-Smithsonian Center for Astrophysics. The work was funded by the National Science Foundation Graduate Research Fellowship Program, the David and Lucile Packard and Alfred P. Sloan foundations, and the Penn State Center for Exoplanets and Habitable Worlds.


A New Laser for a Faster Internet

A new laser developed by a research group at Caltech holds the potential to increase by orders of magnitude the rate of data transmission in the optical-fiber network—the backbone of the Internet.

The study was published the week of February 10–14 in the online edition of the Proceedings of the National Academy of Sciences. The work is the result of a five-year effort by researchers in the laboratory of Amnon Yariv, Martin and Eileen Summerfield Professor of Applied Physics and professor of electrical engineering; the project was led by postdoctoral scholar Christos Santis (PhD '13) and graduate student Scott Steger.

Light is capable of carrying vast amounts of information—approximately 10,000 times more bandwidth than microwaves, the earlier carrier of long-distance communications. But to utilize this potential, the laser light needs to be as spectrally pure—as close to a single frequency—as possible. The purer the tone, the more information it can carry, and for decades researchers have been trying to develop a laser that comes as close as possible to emitting just one frequency.

Today's worldwide optical-fiber network is still powered by a laser known as the distributed-feedback semiconductor (S-DFB) laser, developed in the mid-1970s in Yariv's research group. The S-DFB laser's unusual longevity in optical communications stemmed from its spectral purity—the degree to which the light emitted matched a single frequency—which was unparalleled at the time. That spectral purity translated directly into a larger information bandwidth of the laser beam and longer possible transmission distances in the optical fiber—with the result that more information could be carried farther and faster than ever before.

At the time, this unprecedented spectral purity was a direct consequence of the incorporation of a nanoscale corrugation within the multilayered structure of the laser. The washboard-like surface acted as a sort of internal filter, discriminating against spurious "noisy" waves contaminating the ideal wave frequency. Although the old S-DFB laser had a successful 40-year run in optical communications—and was cited as the main reason for Yariv receiving the 2010 National Medal of Science—the spectral purity, or coherence, of the laser no longer satisfies the ever-increasing demand for bandwidth.

"What became the prime motivator for our project was that the present-day laser designs—even our S-DFB laser—have an internal architecture which is unfavorable for high spectral-purity operation. This is because they allow a large and theoretically unavoidable optical noise to comingle with the coherent laser and thus degrade its spectral purity," he says.

The old S-DFB laser consists of continuous crystalline layers of materials called III-V semiconductors—typically gallium arsenide and indium phosphide—that convert into light the applied electrical current flowing through the structure. Once generated, the light is stored within the same material. Since III-V semiconductors are also strong light absorbers—and this absorption leads to a degradation of spectral purity—the researchers sought a different solution for the new laser.

The high-coherence new laser still converts current to light using the III-V material, but in a fundamental departure from the S-DFB laser, it stores the light in a layer of silicon, which does not absorb light. Spatial patterning of this silicon layer—a variant of the corrugated surface of the S-DFB laser—causes the silicon to act as a light concentrator, pulling the newly generated light away from the light-absorbing III-V material and into the near absorption-free silicon.

This newly achieved high spectral purity—a 20 times narrower range of frequencies than possible with the S-DFB laser—could be especially important for the future of fiber-optic communications. Originally, laser beams in optic fibers carried information in pulses of light; data signals were impressed on the beam by rapidly turning the laser on and off, and the resulting light pulses were carried through the optic fibers. However, to meet the increasing demand for bandwidth, communications system engineers are now adopting a new method of impressing the data on laser beams that no longer requires this "on-off" technique. This method is called coherent phase communication.

In coherent phase communications, the data resides in small delays in the arrival time of the waves; the delays—a tiny fraction (10⁻¹⁶) of a second in duration—can then accurately relay the information even over thousands of miles. The digital electronic bits carrying video, data, or other information are converted at the laser into these small delays in the otherwise rock-steady light wave. But the number of possible delays, and thus the data-carrying capacity of the channel, is fundamentally limited by the degree of spectral purity of the laser beam. This purity can never be absolute—a limitation of the laws of physics—but with the new laser, Yariv and his team have tried to come as close to absolute purity as is possible.
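
The principle of encoding bits as phase delays can be illustrated with a generic four-phase (QPSK) scheme, a textbook example rather than the specific modulation format used in any particular deployed system:

```python
import cmath, math

def modulate(bits):
    # Each pair of bits selects one of four phase delays of the carrier.
    return [cmath.exp(1j * (2 * b0 + b1) * math.pi / 2)
            for b0, b1 in zip(bits[0::2], bits[1::2])]

def demodulate(symbols):
    # The receiver reads each symbol's phase back out. A spectrally impure
    # laser smears these phases; a purer carrier permits more distinct levels.
    bits = []
    for s in symbols:
        idx = round(cmath.phase(s) % (2 * math.pi) / (math.pi / 2)) % 4
        bits += [idx // 2, idx % 2]
    return bits

data = [1, 0, 0, 1, 1, 1, 0, 0]
recovered = demodulate(modulate(data))   # == data
```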

These findings were published in a paper titled "High-coherence semiconductor lasers based on integral high-Q resonators in hybrid Si/III-V platforms." In addition to Yariv, Santis, and Steger, other Caltech coauthors include graduate student Yaakov Vilenchik and former graduate student Arseny Vasilyev (PhD '13). The work was funded by the Army Research Office, the National Science Foundation, and the Defense Advanced Research Projects Agency. The lasers were fabricated at the Kavli Nanoscience Institute at Caltech.
