Unlocking a Mystery of Human Disease . . . in Space

An experiment just launched into orbit by a team of Caltech researchers could be an important step toward understanding a devastating neurodegenerative disease.

Huntington's disease is a grim diagnosis. A hereditary disorder with debilitating physical and cognitive symptoms, the disease usually robs adult patients of their ability to walk, balance, and speak. More than 15 years ago, researchers revealed the disorder's likely cause—an abnormal version of the protein huntingtin; however, the mutant protein's mechanism is poorly understood, and the disease remains untreatable.

Now, a new project led by Pamela Bjorkman, Max Delbrück Professor of Biology, will investigate whether the huntingtin protein can form crystals in microgravity aboard the International Space Station (ISS)—crystals that are crucial for understanding the molecular structure of the protein. The experiment was launched from Cape Canaveral in Florida on Friday, April 18, aboard the SpaceX CRS-3 cargo resupply mission to the ISS. On Sunday, April 20, the station's robotic arm captured the mission's payload, which included the proteins for Bjorkman's experiment—the first Caltech experiment to take place aboard the ISS.

In the experiment, the researchers hope to grow a crystal of the huntingtin protein—an organized, latticelike arrangement of the protein's molecules—which is needed to determine the protein's molecular structure. However, molecules of the huntingtin protein tend to aggregate, or clump together, in Earth's gravity. And this disordered arrangement makes it incredibly hard to parse the protein's structure, says Gwen Owens, a graduate student in Bjorkman's lab and a researcher who helped design the study.

"We need crystals for X-ray crystallography, the technique we use to study the protein, in which we shoot an X-ray through the protein crystal and analyze the organized pattern of radiation that scatters off of it," Owens says. "That pattern is what we depend on to identify the location of every carbon, nitrogen, and sulfur atom within the protein; if we shoot an X-ray beam at a clumped, aggregate protein—like huntingtin often is—we can't get any data from it," she says.

Researchers have previously studied small fragments of crystallized huntingtin, but because of its large size and propensity to clump, no one has ever successfully grown a crystal of the full-length protein large enough to analyze with X-ray crystallography. To understand what the protein does—and how defects in it lead to the symptoms of Huntington's disease—the researchers need to study the full-length protein.

Looking for a solution to this problem, Owens was inspired by a few previous studies of protein formation on space shuttles and the ISS—studies suggesting that proteins can form crystals more readily in a condition of near-weightlessness called microgravity. "The previous studies looked at much simpler proteins, but we thought we could make a pretty good case that huntingtin would be an excellent candidate to study on the ISS," Owens says.

They proposed such an experiment to the Center for the Advancement of Science in Space (CASIS), which manages U.S. research on the ISS, and it was accepted, becoming part of the first Advancing Research Knowledge, or ARK1, mission.

Because Owens and Bjorkman cannot travel with their proteins, and staff and resources are limited aboard the ISS, the crystal will be grown with a Handheld High-Density Protein Crystal Growth device—an apparatus that will allow astronauts to initiate growth of normal and mutant huntingtin protein crystals from a solution of protein molecules with just the flip of a switch.

As the crystals grow larger over a period of several months, samples will come back to Earth via the SpaceX CRS-4 return mission. The results of the experiment are scheduled to drop into the ocean just off the coast of Southern California—along with the rest of the return cargo—sometime this fall. At that point, Owens will finally be able to analyze the proteins.

"Our ideal result would be to have large crystals of the normal and mutant huntingtin proteins right away—on the first try," she says. After analyzing crystals of the full-length protein with X-ray crystallography, the researchers could finally determine huntingtin's structure—information that will be crucial to developing treatments for Huntington's disease.

Owens, a joint MD/PhD student at Caltech and UCLA's David Geffen School of Medicine, has also had the opportunity to work with Huntington's disease patients in the clinic, adding a human connection to her experiment in the sky. "The patients and families I have met who are affected by Huntington's disease are excited to see something big like this. It's inspiring for them—and hopefully it will inspire new research, too."

Hyperbolic Homogeneous Polynomials, Oh My!

Cutting-edge mathematics today, at least to the uninitiated, often sounds as if it bears no relation to the arithmetic we all learned in grade school. What do topology and combinatorics and n-dimensional space have to do with addition, subtraction, multiplication, and division? Yet there remains within mathematics one vibrant field of study that makes constant reference to basic arithmetic: number theory. Number theory—the "queen of mathematics," according to the famous 19th century mathematician Carl Friedrich Gauss—takes integers as its starting point. Begin counting 1, 2, 3, and you enter the domain of number theory.

Number theorists are particularly interested in prime numbers (those integers greater than 1 that cannot be divided evenly by any positive integer other than themselves and 1) and in Diophantine equations: polynomial equations, typically in two or more variables, in which the coefficients are all integers and for which integer or rational solutions are sought.

It is these equations that are the inspiration for a recent proof offered by Dinakar Ramakrishnan, Caltech's Taussky-Todd-Lonergan Professor of Mathematics and executive officer for mathematics, and his coauthor, Mladen Dimitrov, formerly an Olga Taussky and John Todd Instructor in Mathematics at Caltech and now professor of mathematics at the University of Lille in France. This proof involves homogeneous equations: equations in which all the terms have the same degree. For example, the polynomial xy + z² has degree 2, and x²yz + xy³ has degree 4. If we take an equation like xy = z², one solution for (x, y, z) would be (1, 4, 2). Multiplying that solution by any rational number gives infinitely many further rational solutions, but these are trivial, obtained simply by "scaling." They are not the type of answers Ramakrishnan and Dimitrov were searching for.
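
The scaling behavior is easy to check directly: because every term of a homogeneous equation has the same degree, multiplying a solution through by a rational number t rescales both sides by the same power of t. A minimal sketch in Python, using the xy = z² example:

```python
from fractions import Fraction

# xy = z^2 is homogeneous of degree 2: scaling a solution (x, y, z)
# by any rational t multiplies both sides by t^2, so the equation
# still holds. Start from the solution (1, 4, 2) and scale it.
x, y, z = 1, 4, 2
for t in (Fraction(3, 7), Fraction(-5, 2), Fraction(11, 13)):
    assert (t * x) * (t * y) == (t * z) ** 2
```

Every choice of t yields another rational solution, which is why scaled solutions are counted as a single solution when one speaks of finitely many solutions "up to scaling."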

What Ramakrishnan and Dimitrov showed is that a specific collection of systems of homogeneous equations with six variables has only a finite number of rational solutions (up to scaling). Usually people look for integer solutions of Diophantine equations, but the first approach is to find solutions in rational numbers—those that can be expressed as a fraction of two integers.

Diophantus, after whom the Diophantine equations are named, is best known for his Arithmetica, which Ramakrishnan describes as "a collection of intriguing mathematical problems, some of them original to Diophantus, others an assemblage of earlier work, some of it possibly going back to the Babylonians." Diophantus lived in the city of Alexandria, in what is now Egypt, during the third century CE. What makes the Arithmetica unusual is that it continues to serve as the basis for some very interesting mathematics more than 1,700 years later.

Diophantus was interested primarily in positive integers. He was aware of the existence of rational numbers, since he knew integers could divide one another, but he seemed to regard negative numbers (which are also rational numbers and can be integers) as absurd and unreal. Present-day number theorists have no such discomfort with negative numbers, but they continue to be as fascinated by integers as Diophantus was. "Integers are very special," says Ramakrishnan. "They are kind of like musical notes on a clavier. If you change a note even slightly, you'll hear a dissonance. In a sense, integers can be thought of as the well-tempered states of mathematics. They are quite beautiful."

Diophantus was especially interested in integer solutions for homogeneous polynomial equations: those in which each term of the equation has the same degree (for example, x⁷ + y⁷ = z⁷ or x²y³z = w⁶). The classic example of a homogeneous polynomial equation is the Pythagorean theorem—x² + y² = z²—which defines the hypotenuse, z, the longest side of a right triangle, with respect to the perpendicular sides x and y. As early as 1600 BCE, the ancient Babylonians knew that there were many integer solutions to this equation (beginning with 3² + 4² = 5²), though it was Pythagoras, a Greek mathematician living in the sixth century BCE, who gave his name to the formula, and Euclid who two centuries later proved that this equation has an infinite number of positive integer solutions, known as "Pythagorean triples" (such as 3, 4, 5; 5, 12, 13; or 39, 80, 89).
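
Euclid's recipe for producing infinitely many Pythagorean triples can be stated as a few lines of code. The sketch below uses the classical formula (m² − n², 2mn, m² + n²) for integers m > n > 0—a standard result, though not one spelled out in the article:

```python
def euclid_triple(m, n):
    """Euclid's formula: for integers m > n > 0, return a
    Pythagorean triple (x, y, z) with x^2 + y^2 = z^2."""
    return (m * m - n * n, 2 * m * n, m * m + n * n)

# A few of the infinitely many triples, starting with 3, 4, 5.
for m in range(2, 10):
    for n in range(1, m):
        x, y, z = euclid_triple(m, n)
        assert x * x + y * y == z * z

print(euclid_triple(2, 1))  # (3, 4, 5)
print(euclid_triple(8, 5))  # (39, 80, 89), one of the triples mentioned above
```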

In 1637, French mathematician Pierre de Fermat famously wrote in the margin of Diophantus's Arithmetica that he had a "truly marvelous proof" showing that although there were an infinite number of positive integer solutions for x² + y² = z², there were no positive integer solutions at all when the variables were raised to the power of three or higher (x³ + y³ = z³; x⁴ + y⁴ = z⁴; . . . ; xⁿ + yⁿ = zⁿ). Fermat did not provide the actual proof; he claimed that the margin of Diophantus's book was too small to contain it. Fermat's conjecture (it was not yet a proof, though Fermat apparently believed he had one in mind) remained unproven until the early 1990s, when British mathematician Andrew Wiles produced a complicated and unexpected proof that made use of previously unrelated mathematical principles.

In geometric terms, Fermat's conjecture and Wiles's proof, with their three variables, operate in three-dimensional space: solutions can be described as points on a curve in the projective plane, drawn with x, y, z coordinates up to scaling. By moving to a greater number of variables, Ramakrishnan and Dimitrov are interested in identifying points on so-called hyperbolic surfaces. A hyperbolic surface is a negatively curved space, like a saddle—as opposed to a positively curved space like a sphere—in which the rules of Euclidean geometry no longer apply. A simple example of a hyperbolic surface is given by the simultaneous solutions (the same values of the variables satisfying all three equations at once) of three equations: x₁⁵ + y⁵ = z⁵; x₂⁵ + w⁵ = z⁵; and x₃⁵ + w⁵ = y⁵. In the 1980s, German mathematician Gerd Faltings did pioneering work on the mathematics of hyperbolic curves, work that inspired Ramakrishnan and Dimitrov.

Ramakrishnan and Dimitrov's recent finding considers rational-number solutions for several systems of homogeneous polynomial equations describing hyperbolic surfaces. One solution is to set all the variables to zero. This solution is considered trivial; but are there any nontrivial solutions?

There are at least a few nontrivial solutions that Ramakrishnan and Dimitrov use as examples. Their challenge was to determine if there are finitely many or infinitely many rational solutions. They demonstrated—in a proof-by-contradiction that took nearly two years to complete—that the hyperbolic case they consider has only a finite number of solutions.

But, as Ramakrishnan remarks, there is no rest for number theorists, because "even if we solve another bunch of equations, there are still many more that are unsolved, enough for our descendants five hundred years from now."

For Ramakrishnan, this is not a counsel of despair. He continues to find mathematics exciting, especially the concept of the mathematical proof. As he points out, "In other ancient civilizations in the Middle East or India or China, they did some very complicated math, but it was more algorithmic, more related to computer science in my opinion than to philosophy. Whereas the Greeks emphasized proofs, rigorously establishing mathematical truths. There's nothing vague about it."

Apart from the inherent joy of pushing number theory forward through another generation, Ramakrishnan points out that this field has interesting applications in both science and everyday life. "Quite often in science, you are counting. Think of balancing chemical equations such as wCH₄ + xO₂ → yCO₂ + zH₂O, in which methane oxidizes to produce carbon dioxide and water. This is a linear Diophantine equation."
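
Ramakrishnan's methane example can be made concrete. Writing one conservation equation per element turns the balancing problem into a small system of linear Diophantine equations, and even a naive brute-force search, sketched below, recovers the familiar coefficients:

```python
def balance_methane():
    """Search for positive integer coefficients (w, x, y, z) that
    balance w*CH4 + x*O2 -> y*CO2 + z*H2O, one equation per element:
      carbon:   w = y
      hydrogen: 4w = 2z
      oxygen:   2x = 2y + z
    """
    for w in range(1, 10):
        for x in range(1, 10):
            for y in range(1, 10):
                for z in range(1, 10):
                    if w == y and 4 * w == 2 * z and 2 * x == 2 * y + z:
                        return w, x, y, z

print(balance_methane())  # (1, 2, 1, 2): CH4 + 2 O2 -> CO2 + 2 H2O
```

Real solvers use linear algebra rather than exhaustive search, but the point stands: balancing a reaction is exactly the problem of finding small positive integer solutions to a linear Diophantine system.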

Number theory also plays an important role in encryption. "Every time one visits a website with an https:// address," says Ramakrishnan, "it is likely that the website browser is using an encryption system that validates the certificate for the remote server to which one is trying to connect. The security keys that are exchanged point to a number-theoretic solution. Most people prefer equations with simple solutions, but in some situations, such as encryption, you actually want integer equations that are hard to solve without the key. This is where number theory comes in."
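
As an illustration of the idea (a toy sketch, not the specific scheme any given browser uses): RSA-style public-key encryption rests on the number-theoretic fact that recovering the primes p and q from their product n is hard, while the key arithmetic itself is easy. Here with deliberately tiny primes:

```python
# Toy RSA with tiny primes (real keys use primes hundreds of digits
# long). Security rests on the difficulty of factoring n = p * q.
p, q = 61, 53
n = p * q                  # public modulus, 3233
phi = (p - 1) * (q - 1)    # Euler's totient, 3120
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e

message = 65
ciphertext = pow(message, e, n)          # encrypt with the public key
assert pow(ciphertext, d, n) == message  # decrypt with the private key
```

Anyone who knows n and e can encrypt, but decrypting requires d, which is easy to compute only if the factorization of n is known. That asymmetry is the "equation that is hard to solve without the key."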

Ramakrishnan and Dimitrov's paper, "Compact arithmetic quotients of the complex 2-ball and a conjecture of Lang," is posted on the math arXiv, a Cornell University Library open e-print archive for papers in physics, mathematics, computer science, quantitative biology, and quantitative finance and statistics.

Cynthia Eller

Caltech Researchers Discover the Seat of Sex and Violence in the Brain

As reported in a paper published online today in the journal Nature, Caltech biologist David J. Anderson and his colleagues have genetically identified neurons that control aggressive behavior in the mouse hypothalamus, a structure that lies deep in the brain (orange circle in the image at right). Researchers have long known that innate social behaviors like mating and aggression are closely related, but the specific neurons in the brain that control these behaviors had not been identified until now.

The interdisciplinary team of graduate students and postdocs, led by Caltech senior research fellow Hyosang Lee, found that if these neurons are strongly activated by pulses of light, using a method called optogenetics, a male mouse will attack another male or even a female. However, weaker activation of the same neurons will trigger sniffing and mounting: mating behaviors. In fact, the researchers could switch the behavior of a single animal from mounting to attack by gradually increasing the strength of neuronal stimulation during a social encounter (inhibiting the neurons, in contrast, stops these behaviors dead in their tracks).

These results suggest that the level of activity within the population of neurons may control the decision between mating and fighting.  

The neurons initially were identified because they express a protein receptor for the hormone estrogen, reinforcing the view that, contrary to popular opinion, estrogen plays an important role in the control of male aggression. Because the human brain contains a hypothalamus that is structurally similar to that of the mouse, these results may be relevant to human behavior as well.

The results of the study were published in the journal Nature on April 16. David J. Anderson is the Seymour Benzer Professor of Biology and an investigator with the Howard Hughes Medical Institute.

Katie Neith

For Cells, Internal Stress Leads to Unique Shapes

From far away, the top of a leaf looks like one seamless surface; however, up close, that smooth exterior is actually made up of a patchwork of cells in a variety of shapes and sizes. Interested in how these cells individually take on their own unique forms, Caltech biologist Elliot Meyerowitz, postdoctoral scholar Arun Sampathkumar, and colleagues sought to pinpoint the shape-controlling factors in pavement cells, which are puzzle-piece-shaped epithelial cells found on the leaves of flowering plants. They found that these unusual shapes were the cell's response to mechanical stress on the microtubule cytoskeleton—protein tubes that act as a scaffolding inside the cells. These microtubules guide oriented deposition of cell-wall components, thus providing structural support.

The researchers studied this supportive microtubule arrangement in the tissue of pavement cells from the first leaves—or cotyledons—of a young Arabidopsis thaliana plant (right). By fluorescently marking the cells' microtubules (yellow, top surface of cell; purple, bottom surface of cell), the researchers could image the cell's structural arrangement—and watch how this arrangement changed over time. They could also watch the microtubule modifications that occurred due to changes in the mechanical forces experienced by the cells.

Microtubules strengthen a cell's structure by lining up in the direction of stress or pressure experienced by the cell and guiding the deposition of new cell-wall material, providing a supportive scaffold for the cell's shape. However, Meyerowitz and colleagues found that this internal stress is also influenced by the cell's shape. The result is a feedback loop: the cell's shape influences the microtubule arrangement; this arrangement, in turn, affects the cell's shape, which modulates the microtubules, and so on. Therefore, the unusual shape of the pavement cell represents a state of balance—an individual cell's tug-of-war to maintain structural integrity while also dynamically responding to the pushes and pulls of mechanical stress.

The results of the study were published in the journal eLife on April 16. Elliot Meyerowitz is George W. Beadle Professor of Biology and an investigator with the Howard Hughes Medical Institute.

Antennae Help Flies "Cruise" in Gusty Winds

Caltech researchers uncover a mechanism for how fruit flies regulate their flight speed, using both vision and wind-sensing information from their antennae.

Due to its well-studied genome and small size, the humble fruit fly has been used as a model to study hundreds of human health issues ranging from Alzheimer's to obesity. However, Michael Dickinson, Esther M. and Abe M. Zarem Professor of Bioengineering at Caltech, is more interested in the flies themselves—and how such tiny insects are capable of something we humans can only dream of: autonomous flight. In a report on a recent study that combined bursts of air, digital video cameras, and a variety of software and sensors, Dickinson and his team explain a mechanism for the insect's "cruise control" in flight—revealing a relationship between a fly's vision and its wind-sensing antennae.

The results were recently published in an early online edition of the Proceedings of the National Academy of Sciences.

Inspired by a previous experiment from the 1980s, Dickinson's former graduate student Sawyer Fuller (PhD '11) wanted to learn more about how fruit flies maintain their speed in flight. "In the old study, the researchers simulated natural wind for flies in a wind tunnel and found that flies maintain the same groundspeed—even in a steady wind," Fuller says.

Because the previous experiment had only examined the flies' cruise control in gentle steady winds, Fuller decided to test the limits of the insect's abilities by delivering powerful blasts of air from an air piston in a wind tunnel. The brief gusts—which reached about half a meter per second and moved through the tunnel at the speed of sound—were meant to probe how the fly copes if the wind is rapidly changing.

The flies' response to this dynamic stimulus was then tracked automatically by a set of five digital video cameras that recorded the fly's position from five different perspectives. A host of computers then combined information from the cameras and instantly determined the fly's trajectory and acceleration.

To their surprise, the Caltech team found that the flies in their experiments, unlike those in the previous studies, accelerated when the wind was pushing them from behind and decelerated when flying into a headwind. In both cases the flies eventually recovered to maintain their original groundspeed, but the initial response was puzzling, Fuller says. "This response was basically the opposite of what the fly would need to do to maintain a consistent groundspeed in the wind," he says.

In the past, researchers assumed that flies—like humans and most other animals—used their vision to measure their speed in wind, accelerating and decelerating their flight based on the groundspeed their vision detected. But Fuller and his colleagues were also curious about the in-flight role of the fly's wind-sensing organs: the antennae.

Using the fly's initial response to strong wind gusts as a marker, the researchers tested the response of each sensory mode individually. To investigate the role of wind sensation in the fly's cruise control, they delivered strong gusts of wind to normal flies, as well as to flies whose antennae had been removed. The flies without antennae still increased their speed in the same direction as the wind gust, but they only accelerated about half as much as the flies whose antennae were still intact. In addition, the flies without antennae were unable to maintain a constant speed, dramatically alternating between acceleration and deceleration. Together, these results suggested that the antennae were indeed providing wind information that was important for speed regulation.

In order to test the response of the eyes separately from that of the antennae, Fuller and his colleagues projected an animation on the walls of the fly-tracking arena that would trick the eyes into thinking there was no speed increase, even though the antennae could feel the increased windspeed. When the researchers delivered strong headwinds to flies in this environment, the flies decelerated and were unable to recover to their original speed.

"We know that vision is important for flying insects, and we know that flies have one of the fastest visual systems on the planet," Dickinson says. "But this response showed us that as fast as their vision is, if they're flying too fast or the wind is blowing them around too quickly, their visual system reaches its limit and the world starts getting blurry." That is when the antennae kick in, he says.

The results suggest that the antennae are responsible for quickly sensing changes in windspeed—and therefore for the fly's initial deceleration in a headwind. The information received from the fly's eyes—which is processed much more slowly than information from the wind sensors on the antennae—is responsible for helping the fly regain its cruising speed.

"Sawyer's study showed that the fly can take another sensor—this little tiny antenna, which doesn't require nearly the amount of processing area within the brain as the eyes—and the fly is able to use that information to compensate for the fact that the information coming out of the eyes is a bit delayed," Dickinson says. "It's kind of a neat trick, using a cheap little sensor to compensate for the limitations of a big, heavy, expensive sensor."

Beyond learning more about the fly's wind-sensing capabilities, Fuller says that this information will also help engineers design small flying robots—creating a sort of man-made fly. "Tiny flying robots will take a lot of inspiration from flies. Like flies, they will probably have to rely heavily on vision to regulate groundspeed," he says.

"A challenge here is that vision typically takes a lot of computation to get right, just like in flies, but it's impossible to carry a powerful processor to do that quickly on a tiny robot. So they'll instead carry tiny cameras and do the visual processing on a tiny processor, but it will just take longer. Our results suggest that little flying vehicles would also do well to have fast wind sensors to compensate for this delay."

The work was published in a study titled "Flying Drosophila stabilize their vision-based velocity controller by sensing wind with their antennae." Other coauthors include former Caltech senior postdoc Andrew D. Straw, Martin Y. Peek (BS '06), and Richard Murray, Thomas E. and Doris Everhart Professor of Control and Dynamical Systems and Bioengineering at Caltech, who coadvised Fuller's graduate work. The study was supported by the Institute for Collaborative Biotechnologies through funding from the U.S. Army Research Office and by a National Science Foundation Graduate Fellowship.

A New Tool for Unscrambling the Rock Record

Caltech-developed technique shows sulfur reducers were at work on the early Earth

A lot can happen to a rock over the course of two and a half billion years. It can get buried and heated; fluids remove some of its minerals and precipitate others; its chemistry changes. So if you want to use that rock to learn about the conditions on the early Earth, you have to do some geologic sleuthing: You have to figure out which parts of the rock are original and which came later. That is a tricky task, but now a team of Caltech researchers has developed and applied a unique technique that removes much of the guesswork.

"We want to know what Earth looked like when these ancient rocks were deposited. That's a giant challenge because a number of processes have scrambled and erased the original history," says Woodward Fischer, an assistant professor of geobiology at Caltech. "This is a first big effort to try to wrestle with that."

Fischer is the lead author on a paper that describes the new technique and findings in the current issue of the Proceedings of the National Academy of Sciences.

Using the new method, Fischer and his colleagues have examined ancient rocks dating to an age before the rise of oxygen. Today, water feeds the biosphere, providing the electrons needed to support life. But before the evolution of photosynthesis and the accumulation of oxygen in the atmosphere, elements such as iron and sulfur were the source of electrons. Researchers interested in the early Earth would like to determine how and when life figured out how to use these elements. The Caltech team has identified clear evidence that 2.5 billion years ago, sulfate-reducing microbes were already at work.

The researchers studied drill core samples collected in South Africa from sedimentary rocks that are slightly more than 2.5 billion years old. They focused on small features within the rocks, called nodules, made of the mineral pyrite. Also known as fool's gold, pyrite can be made in a number of ways, including as a product of respiratory metabolism: sulfate-reducing microbes reduce sulfate, which is present in seawater, yielding hydrogen sulfide, and when that hydrogen sulfide mingles with iron, pyrite is produced.

Today, sulfate-reducing microbes are often found in anoxic environments such as marine sediments where the oxygen has been consumed by aerobes but where there is still plenty of organic matter. It is logical, then, to suspect that these microbes would have been important players on the early Earth, when oxygen was scarce. Comparative genomics studies of sulfate reducers that are living today also suggest that these microbes should have been present 2.5 billion years ago. But this has been difficult to confirm in the rock record.

From current studies, scientists know that sulfate reducers metabolize the various stable isotopes of sulfur in a predictable way: preferentially producing light sulfur isotopes before moving on to heavier ones as they run out of substrate. This provides a chemical thumbprint that researchers can look for as they examine pyrite nodules. The nodules crystallize early within the sediments, with the material at their core forming before the material at their edges. Therefore, to check whether sulfate-reducing organisms were active when a particular pyrite-containing rock formed, a geobiologist should be able to measure the ratios of a nodule's sulfur isotopes at different points—both near the core and closer to the edges—to see how those ratios changed as the nodule grew. But the nodules are only about a millimeter in diameter, so researchers have not been able to collect the fine-grained measurements they need in order to identify the isotopic thumbprint. Instead, they often grind up an entire rock sample, measure its isotopic composition, and then compare it to another rock.

Muddying the interpretation even more, these ancient rocks have all been deeply complicated by the wrinkles of time. All of the events and circumstances that have affected them since their deposition have left their chemical marks, by carving away old materials and precipitating new ones. A geologist can use some of the textures—the marks left in the fabric of the rock—to unravel some of a rock's history, but only if those textures clearly crosscut or overlap one another. Some of the visual cues can also be misleading. So it can be difficult just to identify which parts of a rock are original and can therefore provide insight about the early Earth.

Fischer's new technique changes all that. It allows researchers to untangle a rock's history and to then zoom in and measure the isotopic ratios at a number of points within a single pyrite nodule.

He begins as any geologist would—by looking at a sample with light and electron microscopy to identify the different textures within the rock. Doing that, he might identify a number of pyrite nodules that "look good"—that appear to date to the rock's original deposition.

He then uses a technique called scanning SQUID (superconducting quantum interference device) microscopy, which uses a quantum detector to produce a magnetic map of the sample at a very small scale. Pyrite itself is not magnetic, but when it is later altered, it forms a mineral called pyrrhotite, which is magnetic. Using scanning SQUID microscopy, Fischer has been able to rule out a number of nodules that had appeared to be original but that were in fact magnetic, meaning that they included pyrrhotite. In his South African samples, those deceptive features dated to a volcanic event 500 million years after the rocks were deposited, which sent chemistry-altering fluids through all the layers of sediment and rock that were present at the time.

"If you weren't using this technique, you'd miss the later alteration," Fischer says. "Those textures looked good. They would have passed naive tests."

The final step in the process is to measure the isotopic composition of the nodules using an analytical method called secondary ion mass spectrometry (SIMS). This specialized technique is used to measure the chemistry of thin films and solids with very fine spatial resolution. Materials scientists use it to analyze silicon wafers, for example, and planetary scientists have used it to study bits of rock from the moon. Fischer's group is one of the few in the world that uses it to study ancient rocks.

In SIMS, a sample under very strong vacuum is bombarded with a beam of cesium ions, which displaces ions from the surface of the sample. A mass spectrometer can measure those so-called secondary ions, providing a count of the sample's sulfur isotopes. Since the beam can be focused very precisely, the method allows researchers to sample many points within a single nodule, measuring a 13 x 5 grid within a millimeter, for example. The product is essentially a map of the sample's isotopic composition.

"It's one thing to say, 'Wow, rocks are really complicated. There's just going to be information lost.' It's another thing to be able to go back in and say, 'I know how to piece together the history of this rock and learn something about the early Earth that I didn't know previously.'"

Using the new technique, Fischer and his colleagues were able to identify which parts of their drill core samples were truly ancient and to then measure the sulfur isotopic composition of those nodules as they grew. And indeed they found the isotopic signature expected as a result of the activity of sulfur-reducing microbes.

"This work supports the hypothesis that microbial sulfate reduction was an important metabolism in organic-rich environments on the early Earth," Fischer says. "What's more, we now know how we can ask better questions about ancient rocks. That, for me, is incredibly exciting."

The paper is titled "SQUID-SIMS is a useful approach to uncover primary signals in the Archean sulfur cycle." Along with Fischer, additional Caltech coauthors are John Eiler, the Robert P. Sharp Professor of Geology and professor of geochemistry; Joseph Kirschvink, the Nico and Marilyn Van Wingen Professor of Geobiology; Jena Johnson, a graduate student in geobiology; and Yunbin Guan, director of the Center for Microanalysis. David Fike of Washington University in St. Louis and Timothy Raub of the University of St. Andrews are also coauthors. Scanning SQUID microscopy is a technique that was developed by researchers at Caltech and Vanderbilt University. The work was supported by the Agouron Institute and by a NASA Exobiology Award.

Kimm Fesenmaier
Exclude from News Hub: 
News Type: 
Research News

Gravity Measurements Confirm Subsurface Ocean on Enceladus

In 2005, NASA's Cassini spacecraft sent pictures back to Earth depicting an icy Saturnian moon spewing water vapor and ice from fractures, known as "tiger stripes," in its frozen surface. It was big news that tiny Enceladus—a mere 500 kilometers in diameter—was such an active place. Since then, scientists have hypothesized that a large reservoir of water lies beneath that icy surface, possibly fueling the plumes. Now, using gravity measurements collected by Cassini, scientists have confirmed that Enceladus does in fact harbor a large subsurface ocean near its south pole, beneath those tiger stripes.

"For the first time, we have used a geophysical method to determine the internal structure of Enceladus, and the data suggest that indeed there is a large, possibly regional ocean about 50 kilometers below the surface of the south pole," says David Stevenson, the Marvin L. Goldberger Professor of Planetary Science at Caltech and an expert in studies of the interior of planetary bodies. "This then provides one possible story to explain why water is gushing out of these fractures we see at the south pole."

Stevenson is one of the authors on a paper that describes the finding in the current issue of the journal Science. Luciano Iess of Sapienza University of Rome is the paper's lead author.

During three flybys of Enceladus, between April 2010 and May 2012, the scientists collected extremely precise measurements of Cassini's trajectory by tracking the spacecraft's microwave carrier signal with NASA's Deep Space Network. The gravitational tug of a planetary body, such as Enceladus, alters a spacecraft's flight path ever so slightly. By measuring the effect of such deflections on the frequency of Cassini's signal as the orbiter traveled past Enceladus, the scientists were able to learn about the moon's gravitational field. This, in turn, revealed details about the distribution of mass within the moon.
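The scale of the measurement is worth appreciating. As a rough sketch (the carrier frequency and velocity figures below are approximate assumptions, not mission specifics), the non-relativistic Doppler relation shows how a tiny gravitational tug on the spacecraft translates into a minuscule frequency shift:

```python
# Illustrative sketch of the Doppler-tracking idea. The carrier frequency
# and velocity perturbation below are rough assumptions for scale only.

C = 299_792_458.0   # speed of light, m/s
F_CARRIER = 8.4e9   # approximate X-band carrier frequency, Hz

def doppler_shift(v_los, f0=F_CARRIER):
    """One-way, non-relativistic Doppler shift (Hz) for line-of-sight velocity v_los (m/s)."""
    return f0 * v_los / C

# A line-of-sight velocity change of ~0.1 mm/s corresponds to a frequency
# shift of only a few millihertz on the carrier:
print(doppler_shift(1e-4))
```

Detecting shifts this small is what makes tracking with the Deep Space Network sensitive enough to map a moon's gravity field.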

"This is really the only way to learn about internal structure from remote sensing," Stevenson says. In fact, more precise measurements would require the placement of seismometers on Enceladus's surface—something that is certainly not going to happen anytime soon.

The key feature in the gravity data was a so-called negative mass anomaly at Enceladus's south pole. Put simply, such an anomaly exists when there is less mass in a particular location than would be expected in the case of a uniform spherical body. Since there is a known depression in the surface of Enceladus's south pole, the scientists expected to find a negative mass anomaly. However, the anomaly was quite a bit smaller than would be predicted by the depression alone.

"So, you say, 'Aha! This is compensated at depth,'" Stevenson says.

Such compensation for mass is commonly found on planetary bodies, including on Earth. In some cases, the absence of material at the surface is compensated at depth by the presence of denser material. In other cases, the presence of extra material at the surface is compensated by the existence of less dense material at depth. In fact, when the first gravity measurements were made in India, people were struck by the fact that Mount Everest did not seem to produce much of an effect. Today we know that, like most mountains on Earth, Mount Everest is compensated by a low-density root that extends many tens of kilometers below the surface. In other words, the material protruding above the surface is compensated by a reduction of density at depth.

In the case of Enceladus, the opposite is true. The absence of material at the surface is compensated at depth by the presence of material that is denser than ice. "The only sensible candidate for that material is water," Stevenson says. "So if I have this depression at the south pole, and I have beneath the surface 50 kilometers down a layer of water or an ocean, that layer of water at depth is a positive mass anomaly. Together the two anomalies account for our measurements."
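The arithmetic of compensation can be sketched with a toy column-mass model. All numbers here are illustrative, not the paper's values: the point is simply that a surface depression removes ice (a negative anomaly), while a buried layer of water replacing less-dense ice adds mass back (a positive anomaly), so the net anomaly is smaller than the depression alone would predict.

```python
# Toy column-mass sketch of compensation (all numbers hypothetical).

RHO_ICE = 920.0     # kg/m^3, approximate density of water ice
RHO_WATER = 1000.0  # kg/m^3

def column_anomaly(depression_m, water_layer_m):
    """Net mass anomaly per unit surface area (kg/m^2) for one column."""
    missing_ice = -depression_m * RHO_ICE                # surface deficit
    extra_water = water_layer_m * (RHO_WATER - RHO_ICE)  # denser water at depth
    return missing_ice + extra_water

# A 1 km depression alone gives a large negative anomaly...
print(column_anomaly(1000.0, 0.0))
# ...but a ~10 km water layer at depth offsets most of it:
print(column_anomaly(1000.0, 10000.0))
```

The smaller-than-expected negative anomaly Cassini measured is exactly the signature the second case produces.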

Although no one can say for certain whether the subsurface ocean supplies the water that has been seen spraying out of the tiger stripes on Enceladus's surface, the scientists say that it is possible. The suspicion is that the fractures—in some way that is not yet fully understood—connect down to a part of the moon that is being tidally heated by the globe's repeated flexing as it traces its eccentric orbit. "Presumably the tidal heating is also replenishing the ocean," Stevenson says, "so it is possible that some of that water is making its way up through the tiger stripes."

The paper is titled "The Gravity Field and Interior Structure of Enceladus." Additional coauthors are Marzia Parisi, Douglas Hemingway, Robert A. Jacobson, Jonathan I. Lunine, Francis Nimmo, John W. Armstrong, Sami W. Asmar, Maria Ducci, and Paolo Tortora. The work was supported by the Italian Space Agency and by NASA through the Cassini project. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency, and the Italian Space Agency. The Jet Propulsion Laboratory manages the mission for NASA's Science Mission Directorate.

Kimm Fesenmaier

Quantum Photon Properties Revealed in Another Particle—the Plasmon

For years, researchers have been interested in developing quantum computers—a proposed next generation of technology that could outperform conventional computers. Instead of holding data in bits, the digital units used by computers today, quantum computers store information in units called "qubits." One approach for computing with qubits relies on the creation of two single photons that interfere with one another in a device called a waveguide. Results from a recent applied science study at Caltech support the idea that waveguides coupled with another quantum particle—the surface plasmon—could also become an important piece of the quantum computing puzzle.

The work was published in the print version of the journal Nature Photonics the week of March 31.

As their name suggests, surface plasmons exist on a surface—in this case the surface of a metal, at the point where the metal meets the air. Metals are conductive materials, which means that electrons within the metal are free to move around. On the surface of the metal, these free electrons move together, in a collective motion, creating waves of electrons. Plasmons—the quantum particles of these coordinated waves—are akin to photons, the quantum particles of light (and all other forms of electromagnetic radiation).

"If you imagine the surface of a metal is like a sea of electrons, then surface plasmons are the ripples or waves on this sea," says graduate student Jim Fakonas, first author on the study.

These waves are especially interesting because they oscillate at optical frequencies. Therefore, if you shine a light at the metal surface, you can launch one of these plasmon waves, pushing the ripples of electrons across the surface of the metal. Because these plasmons directly couple with light, researchers have used them in photovoltaic cells and other applications for solar energy. In the future, they may also hold promise for applications in quantum computing.

However, the plasmon's odd behavior, which falls somewhere between that of an electron and that of a photon, makes it difficult to characterize. "According to quantum theory, it should be possible to analyze these plasmonic waves using quantum mechanics"—the physics that governs the behavior of matter and light at the atomic and subatomic scale—"in the same way that we can use it to study electromagnetic waves, like light," Fakonas says. However, in the past, researchers were lacking the experimental evidence to support this theory.

To find that evidence, Fakonas and his colleagues in the laboratory of Harry Atwater, Howard Hughes Professor of Applied Physics and Materials Science, looked at one particular phenomenon observed of photons—quantum interference—to see if plasmons also exhibit this effect.

The applied scientists borrowed their experimental technique from a classic test of quantum interference in which two single, identical photons are launched at one another through opposite sides of a 50/50 beam splitter, a device that acts as an imperfect mirror, reflecting half of the light that reaches its surface while allowing the other half to pass through. If quantum interference occurs, both identical photons must emerge together on the same side of the beam splitter, with their presence confirmed by photon detectors on both sides of the mirror.

Since plasmons are not exactly like photons, they cannot be used in mirrored optical beam splitters. Therefore, to test for quantum interference in plasmons, Fakonas and his colleagues made two waveguide paths for the plasmons on the surface of a tiny silicon chip. Because plasmons are very lossy—that is, easily absorbed into materials that surround them—the path is kept short, contained within a 10-micron-square chip, which reduces absorption along the way.

The waveguides, which together form a device called a directional coupler, act as a functional equivalent to a 50/50 beam splitter, directing the paths of the two plasmons to interfere with one another. The plasmons can exit the waveguides at one of two output paths that are each observed by a detector; if both plasmons exit the directional coupler together—meaning that quantum interference is observed—the pair of plasmons will only set off one of the two detectors.
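The cancellation behind this effect can be shown in a few lines of amplitude arithmetic. The sketch below is an idealized model of Hong-Ou-Mandel-type interference at a lossless 50/50 splitter (real plasmonic couplers also have loss, which is ignored here): the two ways a "coincidence" can occur—both particles transmit, or both reflect—have amplitudes that cancel exactly.

```python
# Idealized amplitude calculation for quantum interference at a lossless
# 50/50 splitter. Two indistinguishable particles enter from opposite ports.

t = 1 / 2**0.5   # transmission amplitude
r = 1j / 2**0.5  # reflection amplitude (carries a 90-degree phase shift)

# "Coincidence" (one particle at each output) can happen two ways:
# both transmit, or both reflect -- and the two amplitudes cancel.
amp_coincidence = t * t + r * r
prob_coincidence = abs(amp_coincidence) ** 2

print(prob_coincidence)  # destructive interference: both particles exit together
```

A coincidence probability of zero—only one detector ever fires for a pair—is the signature the team looked for in the plasmon experiment.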

Indeed, the experiment confirmed that two indistinguishable photons can be converted into two indistinguishable surface plasmons that, like photons, display quantum interference.

This finding could be important for the development of quantum computing, says Atwater. "Remarkably, plasmons are coherent enough to exhibit quantum interference in waveguides," he says. "These plasmon waveguides can be integrated in compact chip-based devices and circuits, which may one day enable computation and measurement schemes based on quantum interference."

Before this experiment, some researchers wondered if the photon–metal interaction necessary to create a surface plasmon would prevent the plasmons from exhibiting quantum interference. "Our experiment shows this is not a concern," Fakonas says.

"We learned something new about the quantum mechanics of surface plasmons. The main thing is that we were able to validate the theoretical prediction; we showed that this type of interference is possible with plasmons, and we did a pretty clean measurement," he says. "The quantum interference displayed by plasmons appeared to be almost identical to that of photons, so I think it would be very difficult for someone to design a different structure that would improve upon this result."

The work was published in a paper titled "Two-plasmon quantum interference." In addition to Fakonas and Atwater, the other coauthors are Caltech undergraduate Hyunseok Lee and former undergraduate Yousif A. Kelaita (BS '12). The work was supported by funding from the Air Force Office of Scientific Research, and the waveguide was fabricated at the Kavli Nanoscience Institute at Caltech.


New Method Could Improve Ultrasound Imaging

Caltech chemical engineer shows hidden potential of gas vesicles

One day while casually reading a review article, Caltech chemical engineer Mikhail Shapiro came across a mention of gas vesicles—tiny gas-filled structures used by some photosynthetic microorganisms to control buoyancy. It was a light-bulb moment. Shapiro is always on the lookout for new ways to enhance imaging techniques such as ultrasound or MRI, and the natural nanostructures seemed to be just the ticket to improve ultrasound imaging agents.

Now Shapiro and his colleagues from UC Berkeley and the University of Toronto have shown that these gas vesicles, isolated from bacteria and from archaea (a separate lineage of single-celled organisms), can indeed be used for ultrasound imaging. The vesicles could one day help track and reveal the growth, migration, and activity of a variety of cell types—from neurons to tumor cells—using noninvasive ultrasound, one of the most widely used imaging modalities in biomedicine.

A paper describing the work appears as an advance online publication in the journal Nature Nanotechnology.

"People have struggled to make synthetic nanoscale imaging agents for ultrasound for many years," says Shapiro. "To me, it's quite amazing that we can borrow something that nature has evolved for a completely different purpose and use it for in vivo ultrasound imaging. It shows just how much nature has to offer us as engineers."

Ultrasound transmitters use sound waves to image biological tissue. When the emitted waves encounter something of a different density or stiffness, such as bone, some of the sound bounces back to the transducer. By measuring how long that round-trip journey takes, the system can determine how deep the object is and build up a picture of internal anatomy.
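The depth calculation at the heart of pulse-echo imaging is simple. As a minimal sketch (assuming the commonly used average sound speed in soft tissue of about 1540 m/s), depth is half the round-trip travel time multiplied by the sound speed:

```python
# Minimal pulse-echo depth calculation. The sound speed is the standard
# soft-tissue average assumed by clinical scanners; the echo time is illustrative.

C_TISSUE = 1540.0  # m/s, typical average sound speed in soft tissue

def echo_depth(round_trip_s):
    """Depth (m) of a reflector, given the round-trip echo time (s)."""
    return C_TISSUE * round_trip_s / 2.0

# An echo returning after 65 microseconds implies a reflector about 5 cm deep:
print(echo_depth(65e-6))
```

Repeating this calculation for many beam directions is how the scanner builds up a picture of internal anatomy.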

But what if you want to image something other than anatomy? Maybe you are interested in blood flow and want to see whether there are any signs of atherosclerosis, for example, in blood vessels. To make ultrasound useful in such cases, you need to introduce an imaging label that has a different density or stiffness from bodily tissue. Currently, people use microbubbles—small synthetic bubbles of gas with a lipid or protein shell—to image the vasculature. These microbubbles are less dense and more elastic than the water-based tissues of the body. As a result, they resonate and scatter sound waves, allowing ultrasound to visualize the location of the microbubbles.

Microbubbles work just fine, unless you want to image something outside the bloodstream. Because of their diameter—small, but still on the order of microns—the bubbles are too large to get out of the bloodstream and into surrounding tissue. And as Shapiro says, "Many interesting targets—such as specific types of tumors, immune cells, stem cells, or neurons—are outside the bloodstream."

A number of research teams have tried, without success, to make microbubbles smaller. There is a fundamental physical reason for their failure: bubbles are held together by surface tension. As you make them smaller, the surface tension builds, and the pressure within the bubble becomes too high in comparison to the pressure outside. That amounts to an unstable bubble that is likely to lose its gas to its surroundings.

The gas vesicles Shapiro's team worked with are at least an order of magnitude smaller than microbubbles—measuring just tens to hundreds of nanometers in diameter. And even though they look like bubbles, gas vesicles behave quite differently. Unlike bubbles, the vesicles do not trap gas molecules but allow them to pass freely in and out. Instead, they exclude water from their interior by having a hydrophobic inner surface. This results in a fundamentally stable nanoscale configuration.

"As soon as I learned about them, I knew we had to try them," Shapiro says.

The researchers first isolated gas vesicles from the bacterium Anabaena flos-aquae (Ana) and the archaeon Halobacterium NRC-1 (Halo), put them in an agarose gel, and used a home-built ultrasound system to image them. Vesicles from both sources produced clear ultrasound signals. Next, they injected the gas vesicles into mice and were able to follow the vesicles from the initial injection site to the liver, where blood flows to be detoxified. Shapiro and his colleagues were also able to easily attach biomolecules to the surface of the gas vesicles, suggesting that the gas vesicles could be used to label targets outside the bloodstream.

Shapiro's long-term goal is to take advantage of the fact that the gas vesicles are genetically encoded by engineering their properties at the DNA level and ultimately introducing the genes into mammalian cells to produce the structures themselves. For example, he would like to genetically label stem cells and use ultrasound to watch as they migrate to specific locations within the body and differentiate into tissues.

"Now that we have our hands on the genes that encode these gas vesicles, we can engineer them to optimize their properties, to see how far they can go," Shapiro says.

In their work, the researchers found differences in the gas vesicles produced by Ana and Halo. These variations could provide insight into how the vesicle design could be optimized for other purposes. For example, unlike the Ana vesicles, the Halo vesicles produced harmonic signals—meaning that they caused the original ultrasound wave to come back, as well as waves with doubled and tripled frequencies. Harmonics can be helpful in imaging because most tissue does not produce such signals; so when they show up, researchers know that they are more likely to be coming from the imaging agent than from the tissue.

Also, the gas vesicles from the two species collapsed, and thereby became invisible to ultrasound, with the application of different levels of pressure. Halo gas vesicles, which evolved in unpressurized cells, collapsed more easily than the vesicles from Ana, which maintain a pressurized cytoplasm. The researchers used this fact to distinguish the two different populations in a mixed sample. By applying a pressure pulse sufficient to collapse only the Halo vesicles, they were able to identify the remaining gas vesicles as having come from Ana.
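The selective-collapse trick amounts to a simple threshold test. The sketch below is purely hypothetical (the collapse-pressure values are placeholders, not the measured ones): a pulse above Halo's collapse threshold but below Ana's silences one population, so any remaining signal must come from Ana.

```python
# Hypothetical sketch of distinguishing vesicle populations by collapse
# pressure. Threshold values are illustrative placeholders, not measurements.

COLLAPSE_PRESSURE = {"Halo": 60e3, "Ana": 600e3}  # Pa, hypothetical

def surviving_vesicles(population, pulse_pa):
    """Return the vesicle types still intact (ultrasound-visible) after a pulse."""
    return [v for v in population if COLLAPSE_PRESSURE[v] > pulse_pa]

mixed_sample = ["Halo", "Ana", "Halo", "Ana"]
after_pulse = surviving_vesicles(mixed_sample, pulse_pa=100e3)
print(after_pulse)  # only the Ana vesicles remain visible
```

Engineering vesicles with intermediate collapse pressures would extend this same thresholding scheme to more than two populations at once.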

Shapiro notes that there is a substantial difference between the critical collapse pressures of Halo and Ana. "There's quite a good possibility that, as we start to genetically engineer these nanostructures, we would be able to make new ones with intermediate collapse pressures," he says. "That would allow you to image a greater number of cells at the same time. This sort of multiplexing is done all the time in fluorescent imaging, and now we want to do it with ultrasound."

Along with Shapiro, coauthors on the paper, "Biogenic gas nanostructures as ultrasonic molecular reporters," are Patrick Goodwill, Arkosnato Neogy, David Schaffer, and Steven Conolly of UC Berkeley, and Melissa Yin and F. Stuart Foster of the University of Toronto. The work was supported by funding from the Miller Research Institute, the Burroughs Wellcome Fund's Career Award at the Scientific Interface, the California Institute of Regenerative Medicine, the National Institutes of Health, the Canadian Institutes of Health Research, and the Terry Fox Foundation.

Kimm Fesenmaier

BICEP2 Discovers First Direct Evidence of Inflation and Primordial Gravitational Waves

Astronomers announced today that they have acquired the first direct evidence that gravitational waves rippled through our infant universe during an explosive period of growth called inflation. This is the strongest confirmation yet of cosmic inflation theories, which say the universe expanded by 100 trillion trillion times in less than the blink of an eye.

"The implications for this detection stagger the mind," says Jamie Bock, professor of physics at Caltech, laboratory senior research scientist at the Jet Propulsion Laboratory (JPL), and project co-leader. "We are measuring a signal that comes from the dawn of time."

Our universe burst into existence in an event known as the Big Bang 13.8 billion years ago. Fractions of a second later, space itself ripped apart, expanding exponentially in an episode known as inflation. Telltale signs of this early chapter in our universe's history are imprinted in the skies in a relic glow called the cosmic microwave background. Tiny fluctuations in this afterglow provide clues to conditions in the early universe.

Small quantum fluctuations were amplified to enormous sizes by the inflationary expansion of the universe. This process created density waves that appear as small differences in temperature across the sky; the denser regions eventually condensed into galaxies and clusters of galaxies. But as theorized, inflation should also produce gravitational waves, ripples in space-time propagating throughout the universe. Observations from the BICEP2 telescope at the South Pole now demonstrate that gravitational waves were created in abundance during the early inflation of the universe.

On Earth, light can become polarized by scattering off surfaces, such as a car or pond, causing the glare that polarized sunglasses are designed to reduce. In space, the radiation of the cosmic microwave background, influenced by the squeezing of gravitational waves, was scattered by electrons, and became polarized, too.

Because gravitational waves have a "handedness"—they can have both left- and right-handed polarizations—they leave behind a characteristic pattern of polarization on the cosmic microwave background known as B-mode polarization. "The swirly B-mode pattern of polarization is a unique signature of gravitational waves," says collaboration co-leader Chao-Lin Kuo of Stanford University and the SLAC National Accelerator Laboratory. "This is the first direct image of gravitational waves across the primordial sky."

In order to detect this B-mode polarization, the team examined spatial scales on the sky spanning about one to five degrees (two to ten times the width of the full moon), which allowed them to gather photons from a broad swath of the cosmic microwave background in an area of the sky where we can see clearly through our own Milky Way galaxy. To do this, the team traveled to the South Pole to take advantage of the cold, dry, stable air. "The South Pole is the closest you can get to space and still be on the ground," says John Kovac of the Harvard-Smithsonian Center for Astrophysics, project co-leader and BICEP2 principal investigator. "It's one of the driest and clearest locations on Earth, perfect for observing the faint microwaves from the Big Bang."

The team also invented completely new technology for making these measurements. "Our approach was like building a camera on a printed circuit board," says Bock. "The circuit board included an antenna to focus and filter polarized light, a micro-machined detector that turns the radiation into heat, and a superconducting thermometer to measure this heat." The detector arrays were made at JPL's Microdevices Laboratory.

The BICEP2 team was surprised to detect a B-mode polarization signal considerably stronger than many cosmologists expected. The team analyzed the data for more than three years in an effort to rule out any errors. They also considered whether dust in our galaxy could produce the observed pattern, but the data suggest this is highly unlikely. "This has been like looking for a needle in a haystack, but instead we found a crowbar," says project co-leader Clem Pryke, of the University of Minnesota.

The prediction that the cosmic microwave background would show a B-mode polarization from gravitational waves produced during the inflationary period was made in 1996 by several theoretical physicists including Marc Kamionkowski, who was a member of the Caltech faculty from 1999 to 2011, and is now on the faculty at Johns Hopkins University. Kamionkowski says this discovery "is powerful evidence for inflation. I'd call it a smoking gun. We've now learned that gravitational waves are abundant, and can learn more about the process that powered inflation. This is a remarkable advance in cosmology."

The BICEP project originated at Caltech in 2002 as a collaboration between Bock and the late physicist Andrew Lange.

BICEP2 is the second stage of a coordinated program with the BICEP and Keck Array experiments, which has a co-PI structure. The four principal investigators are Bock, Kovac, Kuo, and Pryke. All have worked together on the present result, along with talented teams of students and scientists. Other major collaborating institutions for BICEP2 include the University of California at San Diego, the University of British Columbia, the National Institute of Standards and Technology, the University of Toronto, Cardiff University, and Commissariat à l'energie atomique.

BICEP2 is funded by the National Science Foundation. NSF also runs the South Pole Station where BICEP2 and the other telescopes used in this work are located. The W. M. Keck Foundation also contributed major funding for the construction of the team's telescopes. NASA, JPL, and the Gordon and Betty Moore Foundation generously supported the development of the ultrasensitive detector arrays that made these measurements possible.

There are two papers, published March 17, 2014, reporting these results: "BICEP2 I: Detection of B-mode polarization at degree angular scales" and "BICEP2 II: Experiment and Three-Year Data Set."

The journal papers, along with additional technical details, can be found on the BICEP2 release website.
