Caltech Researchers Synthesize Catalyst Important in Nitrogen Fixation

Inspired by an enzyme in soil microorganisms, researchers develop first synthetic iron-based catalyst for the conversion of nitrogen to ammonia.

As farming strategies have evolved to provide food for the world's growing population, the manufacture of nitrogen fertilizers through the conversion of atmospheric nitrogen to ammonia has taken on increased importance.

The industrial technique used to make these fertilizers employs a chemical reaction that mirrors that of a natural process—nitrogen fixation. Unfortunately, vast amounts of energy, in the form of high heat and pressure, are required to drive the reaction. Now, inspired by the natural processes that take place in nitrogen-fixing microorganisms, researchers at Caltech have synthesized an iron-based catalyst that allows for nitrogen fixation under much milder conditions.

In the early 20th century, scientists discovered a way to artificially produce ammonia for the manufacture of commercial fertilizers, through a nitrogen fixation technique called the Haber-Bosch process. Today, this process is used industrially to produce more than 130 million tons of ammonia annually. Microorganisms in the soil that live near the roots of certain plants can produce a similar amount of ammonia each year—but instead of using high heat and pressure, they benefit from enzyme catalysts, called nitrogenases, that convert nitrogen from the air into ammonia at room temperature and atmospheric pressure.

In work described in the September 5 issue of Nature, Caltech graduate students John Anderson and Jon Rittle, under the supervision of their research adviser Jonas Peters, Bren Professor of Chemistry and executive officer for chemistry, have developed the first molecular iron complex that catalyzes nitrogen fixation, modeling the natural enzymes found in nitrogen-fixing soil organisms. The research may eventually lead to the development of more environmentally friendly methods of ammonia production.

Natural nitrogenase enzymes, which prime inert atmospheric nitrogen for fixation through the addition of electrons and protons, generally contain two metals, molybdenum and iron. Over decades of research, this duality has fueled debate about which metal is actually responsible for nitrogenase's catalytic activity. Because a few research groups had modest success in synthesizing molybdenum-based molecular catalysts, many in the field believed that the debate had been settled. The discovery by Peters' group that synthetic iron complexes are also capable of this type of catalytic activity will reopen the discussion.

This finding, along with a wealth of data from structural biologists, biochemists, and spectroscopists, suggests that it may be iron—and not molybdenum—that is the key player in the nitrogen fixation in natural enzymes. The iron catalyst discovered by Peters and his colleagues may also help unravel the mystery of how these enzymes perform this reaction at the molecular level.

"We've pursued this type of synthetic iron catalyst for about a decade, and have banged our heads against plenty of walls in the process. So have a lot of other very talented folks in my field, and some for much longer than a decade," Peters says.

The finding is a first for the field, but Peters says that their current iron-based catalyst has limitations—the Haber-Bosch process is still the industrial standard. "Now that we finally have an example that actually works, everyone wants to know: 'Can it be used to make ammonia more efficiently?' The simple answer, for now, is no. While we're delighted to finally have our hands on an iron fixation catalyst, it's pretty inefficient and dies quickly. But," he adds, "this catalyst is a really important advance for us; there is so much we will now be able to learn from it that we couldn't before."

Funding for the research outlined in the Nature paper, titled "Catalytic conversion of nitrogen to ammonia by an iron model complex," was provided by the National Institutes of Health and the Gordon and Betty Moore Foundation.

Made-to-Order Materials

Caltech engineers focus on the nano to create strong, lightweight materials

The lightweight skeletons of organisms such as sea sponges display a strength that far exceeds that of manmade products constructed from similar materials. Scientists have long suspected that the difference has to do with the hierarchical architecture of the biological materials—the way the silica-based skeletons are built up from different structural elements, some of which are measured on the scale of billionths of meters, or nanometers. Now engineers at the California Institute of Technology (Caltech) have mimicked such a structure by creating nanostructured, hollow ceramic scaffolds, and have found that the small building blocks, or unit cells, do indeed display remarkable strength and resistance to failure despite being more than 85 percent air.

"Inspired, in part, by hard biological materials and by earlier work by Toby Schaedler and a team from HRL Laboratories, Caltech, and UC Irvine on the fabrication of extremely lightweight microtrusses, we designed architectures with building blocks that are less than five microns long, meaning that they are not resolvable by the human eye," says Julia R. Greer, professor of materials science and mechanics at Caltech. "Constructing these architectures out of materials with nanometer dimensions has enabled us to decouple the materials' strength from their density and to fabricate so-called structural metamaterials which are very stiff yet extremely lightweight."

At the nanometer scale, solids have been shown to exhibit mechanical properties that differ substantially from those displayed by the same materials at larger scales. For example, Greer's group has shown previously that at the nanoscale, some metals are about 50 times stronger than usual, and some amorphous materials become ductile rather than brittle. "We are capitalizing on these size effects and using them to make real, three-dimensional structures," Greer says.

In an advance online publication of the journal Nature Materials, Greer and her students describe how the new structures were made and how they responded to applied forces.

The largest structure the team has fabricated thus far using the new method is a one-millimeter cube. Compression tests on the entire structure indicate that not only the individual unit cells but also the complete architecture can be endowed with unusually high strength, depending on the material. This suggests that the general fabrication technique the researchers developed could be used to produce lightweight, mechanically robust small-scale components such as batteries, interfaces, catalysts, and implantable biomedical devices.

Greer says the work could fundamentally shift the way people think about the creation of materials. "With this approach, we can really start thinking about designing materials backward," she says. "I can start with a property and say that I want something that has this strength or this thermal conductivity, for example. Then I can design the optimal architecture with the optimal material at the relevant size and end up with the material I wanted."

The team first digitally designed a lattice structure featuring repeating octahedral unit cells—a design that mimics the type of periodic lattice structure seen in diatoms. Next, the researchers used a technique called two-photon lithography to turn that design into a three-dimensional polymer lattice. Then they uniformly coated that polymer lattice with thin layers of the ceramic material titanium nitride (TiN) and removed the polymer core, leaving a ceramic nanolattice. The lattice is constructed of hollow struts with walls no thicker than 75 nanometers.

"We are now able to design exactly the structure that we want to replicate and then process it in such a way that it's made out of almost any material class we'd like—for example, metals, ceramics, or semiconductors—at the right dimensions," Greer says.

In a second paper, scheduled for publication in the journal Advanced Engineering Materials, Greer's group demonstrates that similar nanostructured lattices could be made from gold rather than a ceramic. "Basically, once you've created the scaffold, you can use whatever technique will allow you to deposit a uniform layer of material on top of it," Greer says.

In the Nature Materials work, the team tested the individual octahedral cells of the final ceramic lattice and found that they had an unusually high tensile strength. Despite being repeatedly subjected to stress, the lattice cells did not break, whereas a much larger, solid piece of TiN would break at much lower stresses. Typical ceramics fail because of flaws—the imperfections, such as holes and voids, that they contain. "We believe the greater strength of these nanostructured materials comes from the fact that when samples become sufficiently small, their potential flaws also become very small, and the probability of finding a weak flaw within them becomes very low," Greer says. So although structural mechanics would predict that a cellular structure made of TiN would be weak because it has very thin walls, she says, "we can effectively trick this law by reducing the thickness or the size of the material and by tuning its microstructure, or atomic configurations."
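
One back-of-the-envelope way to see that size argument is a textbook Poisson-style estimate (ours, not a calculation from the paper): if flaws occur at random with an average density rho per unit volume, the probability that a strut of volume V contains at least one flaw is

    P(\text{at least one flaw}) = 1 - e^{-\rho V},

which falls toward zero as the struts, and hence V, shrink to the nanoscale.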

Additional coauthors on the Nature Materials paper, "Fabrication and Deformation of Three-Dimensional Hollow Ceramic Nanostructures," are Dongchan Jang, who recently completed a postdoctoral fellowship in Greer's lab, Caltech graduate student Lucas Meza, and Frank Greer, formerly of the Jet Propulsion Laboratory (JPL). The work was supported by funding from the Dow-Resnick Innovation Fund at Caltech, DARPA's Materials with Controlled Microstructural Architecture program, and the Army Research Office through the Institute for Collaborative Biotechnologies at Caltech. Some of the work was carried out at JPL under a contract with NASA, and the Kavli Nanoscience Institute at Caltech provided support and infrastructure.

The lead author on the Advanced Engineering Materials paper, "Design and Fabrication of Hollow Rigid Nanolattices Via Two-Photon Lithography," is Caltech graduate student Lauren Montemayor. Meza is a coauthor. In addition to support from the Dow-Resnick Innovation Fund, this work received funding from an NSF Graduate Research Fellowship.

Kimm Fesenmaier

A Home for the Microbiome

Caltech biologists identify, for the first time, a mechanism by which beneficial bacteria reside and thrive in the gastrointestinal tract

The human body is full of tiny microorganisms—hundreds to thousands of species of bacteria collectively called the microbiome, which are believed to contribute to a healthy existence. The gastrointestinal (GI) tract—and the colon in particular—is home to the largest concentration and highest diversity of bacterial species. But how do these organisms persist and thrive in a system that is constantly in flux due to foods and fluids moving through it? A team led by California Institute of Technology (Caltech) biologist Sarkis Mazmanian believes it has found the answer, at least in one common group of bacteria: a set of genes that promotes stable microbial colonization of the gut.

A study describing the researchers' findings was published as an advance online publication of the journal Nature on August 18.    

"By understanding how these microbes colonize, we may someday be able to devise ways to correct for abnormal changes in bacterial communities—changes that are thought to be connected to disorders like obesity, inflammatory bowel disease and autism," says Mazmanian, a professor of biology at Caltech whose work explores the link between human gut bacteria and health.

The researchers began their study by running a series of experiments to introduce a genus of microbes called Bacteroides to sterile, or germ-free, mice. Bacteroides, a group of bacteria that has several dozen species, was chosen because it is one of the most abundant genera in the human microbiome, can be cultured in the lab (unlike most gut bacteria), and can be genetically modified to introduce specific mutations.

"Bacteriodes are the only genus in the microbiome that fit these three criteria," Mazmanian says.

Lead author S. Melanie Lee (PhD '13), who was an MD/PhD student in Mazmanian's lab at the time of the research, first added a few different species of the bacteria to one mouse to see if they would compete with each other to colonize the gut. They appeared to peacefully coexist. Then, Lee colonized a mouse with one particular species, Bacteroides fragilis, and inoculated the mouse with the same exact species, to see if they would co-colonize the same host. To the researchers' surprise, the newly introduced bacteria could not maintain residence in the mouse's gut, despite the fact that the animal was already populated by the identical species.

"We know that this environment can house hundreds of species, so why the competition within the same species?" Lee says. "There certainly isn't a lack of space or nutrients, but this was an extremely robust and consistent finding when we tried to essentially 'super-colonize' the mice with one species."

To explain the results, Lee and the team developed what they called the "saturable niche hypothesis." The idea is that by saturating a specific habitat, the organism will effectively exclude others of the same species from occupying that niche. It will not, however, prevent other closely related species from colonizing the gut, because they have their own particular niches. A genetic screen revealed a set of previously uncharacterized genes—a system that the researchers dubbed commensal colonization factors (CCF)—that were both required and sufficient for species-specific colonization by B. fragilis.

But what exactly is the saturable niche? The colon, after all, is filled with a flowing mass of food, fecal matter and bacteria, which doesn't offer much for organisms to grab onto and occupy.

"Melanie hypothesized that this saturable niche was part of the host tissue"—that is, of the gut itself—Mazmanian says. "When she postulated this three to four years ago, it was absolute heresy, because other researchers in the field believed that all bacteria in our intestines lived in the lumen—the center of the gut—and made zero contact with the host…our bodies. The rationale behind this thinking was if bacteria did make contact, it would cause some sort of immune response."

Nonetheless, when the researchers used advanced imaging approaches to survey colonic tissue in mice colonized with B. fragilis, they found a small population of microbes living in minuscule pockets—or crypts—in the colon. Nestled within the crypts, the bacteria are protected from the constant flow of material that passes through the GI tract. To test whether or not the CCF system regulated bacterial colonization within the crypts, the team injected mutant bacteria—without the CCF system—into the colons of sterile mice. Those bacteria were unable to colonize the crypts.

"There is something in that crypt—and we don't know what it is yet—that normal B. fragilis can use to get a foothold via the CCF system," Mazmanian explains. "Finding the crypts is a huge advance in the field because it shows that bacteria do physically contact the host. And during all of the experiments that Melanie did, homeostasis, or a steady state, was maintained. So, contrary to popular belief, there was no evidence of inflammation as a result of the bacteria contacting the host. In fact, we believe these crypts are the permanent home of Bacteroides, and perhaps other classes of microbes."

He says that by pinpointing the CCF system as a mechanism for bacterial colonization and resilience, and by uncovering species-specific crypts in the colon, the current paper has solved longstanding mysteries in the field about how microbes establish and maintain long-term colonization.

"We've studied only a handful of organisms, and though they are numerically abundant, they are clearly not representative of all the organisms in the gut," Lee says. "A lot of those other bacteria don't have CCF genes, so the question now is: Do those organisms somehow rely on interactions with Bacteroides for their own colonization, or their replication rates, or their localization?"

Suspecting that Bacteroides are keystone species—a necessary factor for building the gut ecosystem—the researchers next plan to investigate whether or not functional abnormalities, such as the inability to adhere to crypts, could affect the entire microbiome and potentially lead to a diseased state in the body.

"This research highlights the notion that we are not alone. We knew that bacteria are in our gut, but this study shows that specific microbes are very intimately associated with our bodies," Mazmanian says. "They are living in very close proximity to our tissues, and we can't ignore microbial contributions to our biology or our health. They are a part of us."

Funding for the research outlined in the Nature paper, titled "Bacterial colonization factors control specificity and stability of the gut microbiota," was provided by the National Institutes of Health and the Crohn's and Colitis Foundation of America. Additional coauthors were Gregory Donaldson and Silva Boyajian from Caltech and Zbigniew Mikulski and Klaus Ley from the La Jolla Institute for Allergy and Immunology in La Jolla, California.

Katie Neith

Caltech Team Produces Squeezed Light Using a Silicon Micromechanical System

One of the many counterintuitive and bizarre insights of quantum mechanics is that even in a vacuum—what many of us think of as an empty void—all is not completely still. Low levels of noise, known as quantum fluctuations, are always present. Always, that is, unless you can pull off a quantum trick. And that's just what a team led by researchers at the California Institute of Technology (Caltech) has done. The group has engineered a miniature silicon system that produces a type of light that is quieter at certain frequencies—meaning it has fewer quantum fluctuations—than what is usually present in a vacuum.

This special type of light with fewer fluctuations is known as squeezed light and is useful for making precise measurements at lower power levels than are required when using normal light. Although other research groups previously have produced squeezed light, the Caltech team's new system, which is miniaturized on a silicon microchip, generates the ultraquiet light in a way that can be more easily adapted to a variety of sensor applications.

"This system should enable a new set of precision microsensors capable of beating standard limits set by quantum mechanics," says Oskar Painter, a professor of applied physics at Caltech and the senior author on a paper that describes the system; the paper appears in the August 8 issue of the journal Nature. "Our experiment brings together, in a tiny microchip package, many aspects of work that has been done in quantum optics and precision measurement over the last 40 years."

The history of squeezed light is closely associated with Caltech. More than 30 years ago, Kip Thorne, Caltech's Richard P. Feynman Professor of Theoretical Physics, Emeritus, and physicist Carlton Caves (PhD '79) theorized that squeezed light would enable scientists to build more sensitive detectors that could make more precise measurements. A decade later, Caltech's Jeff Kimble, the William L. Valentine Professor and professor of physics, and his colleagues conducted some of the first experiments using squeezed light. Since then, the LIGO (Laser Interferometer Gravitational-Wave Observatory) Scientific Collaboration has invested heavily in research on squeezed light because of its potential to enhance the sensitivity of gravitational-wave detectors.

In the past, squeezed light has been made using so-called nonlinear materials, which have unusual optical properties. This latest Caltech work marks the first time that squeezed light has been produced using silicon, a standard material. "We work with a material that's very plain in terms of its optical properties," says Amir Safavi-Naeini (PhD '13), a graduate student in Painter's group and one of three lead authors on the new paper. "We make it special by engineering or punching holes into it, making these mechanical structures that respond to light in a very novel way. Of course, silicon is also a material that is technologically very amenable to fabrication and integration, enabling a great many applications in electronics."

In this new system, a waveguide feeds laser light into a cavity created by two tiny silicon beams. Once there, the light bounces back and forth a bit thanks to the engineered holes, which effectively turn the beams into mirrors. When photons—particles of light—strike the beams, they cause the beams to vibrate. And the particulate nature of the light introduces quantum fluctuations that affect those vibrations.

Typically, such fluctuations mean that in order to get a good reading of a signal, you would have to increase the power of the light to overcome the noise. But increasing the power introduces other problems, such as excess heat in the system.

Ideally, then, any measurements should be made with as low a power as possible. "One way to do that," says Safavi-Naeini, "is to use light that has less noise."

And that's exactly what the new system does; it has been engineered so that the light and beams interact strongly with each other—so strongly, in fact, that the beams impart the quantum fluctuations they experience back on the light. And, as is the case with the noise-canceling technology used, for example, in some headphones, the fluctuations that shake the beams interfere with the fluctuations of the light. They effectively cancel each other out, eliminating the noise in the light.

"This is a demonstration of what quantum mechanics really says: Light is neither a particle nor a wave; you need both explanations to understand this experiment," says Safavi-Naeini. "You need the particle nature of light to explain these quantum fluctuations, and you need the wave nature of light to understand this interference."

In the experiment, a detector measuring the noise in the light as a function of frequency showed that in a frequency range centered around 28 MHz, the system produces light with less noise than what is present in a vacuum—the standard quantum limit. "But one of the interesting things," Safavi-Naeini adds, "is that by carefully designing our structures, we can actually choose the frequency at which we go below the vacuum." Many signals are specific to a particular frequency range—a certain audio band in the case of acoustic signals, or, in the case of LIGO, a frequency intimately related to the dynamics of astrophysical objects such as circling black holes. Because the optical squeezing occurs near the mechanical resonance frequency where an individual device is most sensitive to external forces, this feature would enable the system studied by the Caltech team to be optimized for targeting specific signals.

"This new way of 'squeezing light' in a silicon micro-device may provide new, significant applications in sensor technology," said Siu Au Lee, program officer at the National Science Foundation, which provided support for the work through the Institute for Quantum Information and Matter, a Physics Frontier Center. "For decades, NSF's Physics Division has been supporting basic research in quantum optics, precision measurements and nanotechnology that laid the foundation for today's accomplishments."

The paper is titled "Squeezed light from a silicon micromechanical resonator." Along with Painter and Safavi-Naeini, additional coauthors on the paper include current and former Painter-group researchers Jeff Hill (PhD '13), Simon Gröblacher (both lead authors on the paper with Safavi-Naeini), and Jasper Chan (PhD '12), as well as Markus Aspelmeyer of the Vienna Center for Quantum Science and Technology and the University of Vienna. The work was also supported by the Gordon and Betty Moore Foundation, by DARPA/MTO ORCHID through a grant from the Air Force Office of Scientific Research, and by the Kavli Nanoscience Institute at Caltech.

Kimm Fesenmaier

Figuring Out Flow Dynamics

Engineers gain insight into turbulence formation and evolution in fluids

Turbulence is all around us—in the patterns that natural gas makes as it swirls through a transcontinental pipeline or in the drag that occurs as a plane soars through the sky. Reducing such turbulence on, say, an airplane wing would cut down on the amount of power the plane has to put out just to get through the air, thereby saving fuel. But in order to reduce turbulence—a very complicated phenomenon—you need to understand it, a task that has proven to be quite a challenge.

Since 2006, Beverley McKeon, professor of aeronautics and associate director of the Graduate Aerospace Laboratories at the California Institute of Technology (Caltech), and collaborator Ati Sharma, a senior lecturer in aerodynamics and flight mechanics at the University of Southampton in the U.K., have been working together to build models of turbulent flow. Recently, they developed a new and improved way of looking at the composition of turbulence near walls, the type of flow that dominates our everyday life.

Their research could lead to significant fuel savings, as a large amount of energy is consumed by ships and planes, for example, to counteract turbulence-induced drag. Finding a way to reduce that turbulence by 30 percent would save the global economy billions of dollars in fuel costs and associated emissions annually, says McKeon, a coauthor of a study describing the new method published online in the Journal of Fluid Mechanics on July 8.

"This kind of turbulence is responsible for a large amount of the fuel that is burned to move humans, freight, and fluids such as water, oil, and natural gas, around the world," she says. "[Caltech physicist Richard] Feynman described turbulence as 'one of the last unsolved problems of classical physics,' so it is also a major academic challenge."

Wall turbulence develops when fluids—liquid or gas—flow past solid surfaces at anything but the slowest flow rates. Progress in understanding and controlling wall turbulence has been somewhat incremental because of the massive range of scales of motion involved—from the width of a human hair to the height of a multi-floor building in relative terms—says McKeon, who has been studying turbulence for 16 years. Her latest work, however, now provides a way of analyzing a large-scale flow by breaking it down into discrete, more easily analyzed bits. 

McKeon and Sharma devised a new method of looking at wall turbulence by reformulating the equations that govern the motion of fluids—called the Navier-Stokes equations—into an infinite set of smaller, simpler subequations, or "blocks," with the characteristic that they can be simply added together to introduce more complexity and eventually get back to the full equations. But the benefit comes in what can be learned without needing the complexity of the full equations. Calling the results from analysis of each one of those blocks a "response mode," the researchers have shown that commonly observed features of wall turbulence can be explained by superposing, or adding together, a very small number of these response modes, even as few as three. 
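
Written schematically (the notation is ours, chosen for illustration rather than taken from the paper), that superposition amounts to a weighted sum of individual response modes, each associated with a particular wavelength and frequency:

    u(x, y, z, t) \approx \sum_{m=1}^{N} \chi_m\, \psi_m(y)\, e^{i(k_{x,m} x + k_{z,m} z - \omega_m t)}, \qquad N \text{ as small as } 3,

where each psi is the response mode obtained from one "block" at streamwise and spanwise wavenumbers k_x and k_z and frequency omega, y is the distance from the wall, and chi is that mode's weight.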

In 2010, McKeon and Sharma showed that analysis of these blocks can be used to reproduce some of the characteristics of the velocity field, like the tendency of wall turbulence to favor eddies of certain sizes and distributions. Now, the researchers also are using the method to capture coherent vortical structure, caused by the interaction of distinct, horseshoe-shaped spinning motions that occur in turbulent flow. Increasing the number of blocks included in an analysis increases the complexity with which the vortices are woven together, McKeon says. With very few blocks, things look a lot like the results of an extremely expensive, real-flow simulation or a full laboratory experiment, she says, but the mathematics are simple enough to be performed, mode-by-mode, on a laptop computer.

"We now have a low-cost way of looking at the 'skeleton' of wall turbulence," says McKeon, explaining that similar previous experiments required the use of a supercomputer. "It was surprising to find that turbulence condenses to these essential building blocks so easily. It's almost like discovering a lens that you can use to focus in on particular patterns in turbulence."

Using this lens helps to reduce the complexity of what the engineers are trying to understand, giving them a template that can be used to try to visually—and mathematically—identify order from flows that may appear to be chaotic, she says. Scientists had proposed the existence of some of the patterns based on observations of real flows; using the new technique, these patterns now can be derived mathematically from the governing equations, allowing researchers to verify previous models of how turbulence works and improve upon those ideas.

Understanding how the formulation can capture the skeleton of turbulence, McKeon says, will allow the researchers to modify turbulence in order to control flow and, for example, reduce drag or noise.

"Imagine being able to shape not just an aircraft wing but the characteristics of the turbulence in the flow over it to optimize aircraft performance," she says. "It opens the doors for entirely new capabilities in vehicle performance that may reduce the consumption of even renewable or non-fossil fuels."

Funding for the research outlined in the Journal of Fluid Mechanics paper, titled "On coherent structure in wall turbulence," was provided by the Air Force Office of Scientific Research. The paper is the subject of a "Focus on Fluids" feature article, written by Joseph Klewicki of the University of New Hampshire, that will appear in an upcoming print issue of the same journal.

Katie Neith

Pushing Microscopy Beyond Standard Limits

Caltech engineers show how to make cost-effective, ultra-high-performance microscopes

Engineers at the California Institute of Technology (Caltech) have devised a method to convert a relatively inexpensive conventional microscope into a billion-pixel imaging system that significantly outperforms the best available standard microscope. Such a system could greatly improve the efficiency of digital pathology, in which specialists need to review large numbers of tissue samples. By making it possible to produce robust microscopes at low cost, the approach also has the potential to bring high-performance microscopy capabilities to medical clinics in developing countries.

"In my view, what we've come up with is very exciting because it changes the way we tackle high-performance microscopy," says Changhuei Yang, professor of electrical engineering, bioengineering and medical engineering at Caltech.  

Yang is senior author on a paper that describes the new imaging strategy, which appears in the July 28 early online version of the journal Nature Photonics.

Until now, the physical limitations of microscope objectives—their optical lenses—have posed a challenge in terms of improving conventional microscopes. Microscope makers tackle these limitations by using ever more complicated stacks of lens elements in microscope objectives to mitigate optical aberrations. Even with these efforts, these physical limitations have forced researchers to decide between high resolution and a small field of view on the one hand, or low resolution and a large field of view on the other. That has meant that scientists have either been able to see a lot of detail very clearly but only in a small area, or they have gotten a coarser view of a much larger area.

"We found a way to actually have the best of both worlds," says Guoan Zheng, lead author on the new paper and the initiator of this new microscopy approach from Yang's lab. "We used a computational approach to bypass the limitations of the optics. The optical performance of the objective lens is rendered almost irrelevant, as we can improve the resolution and correct for aberrations computationally."

Indeed, using the new approach, the researchers were able to improve the resolution of a conventional 2X objective lens to the level of a 20X objective lens. Therefore, the new system combines the field-of-view advantage of a 2X lens with the resolution advantage of a 20X lens. The final images produced by the new system contain 100 times more information than those produced by conventional microscope platforms. And building upon a conventional microscope, the new system costs only about $200 to implement.

"One big advantage of this new approach is the hardware compatibility," Zheng says, "You only need to add an LED array to an existing microscope. No other hardware modification is needed. The rest of the job is done by the computer."  

The new system acquires about 150 low-resolution images of a sample. Each image corresponds to one LED element in the LED array. Therefore, in the various images, light coming from known different directions illuminates the sample. A novel computational approach, termed Fourier ptychographic microscopy (FPM), is then used to stitch together these low-resolution images to form the high-resolution intensity and phase information of the sample—a much more complete picture of the entire light field of the sample.
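
For readers who want a concrete picture of that stitching step, below is a minimal numerical sketch of the general idea—iteratively enforcing each measured low-resolution amplitude inside the corresponding patch of a single high-resolution Fourier spectrum. It is an illustration written for this article, not the authors' published algorithm; the image sizes, LED grid, and simple circular pupil are all invented for demonstration.

    # A minimal, illustrative sketch of Fourier-spectrum stitching, not the published FPM code.
    import numpy as np

    def circular_pupil(n, radius):
        # Binary low-pass aperture standing in for the objective lens, centered in Fourier space.
        fy, fx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
        return (fx**2 + fy**2 <= radius**2).astype(complex)

    def simulate_low_res(obj, shifts, pupil):
        # Each tilted LED illumination slides a different patch of the object's spectrum
        # into the small aperture; only the intensity of each low-resolution image is recorded.
        n = pupil.shape[0]
        spectrum = np.fft.fftshift(np.fft.fft2(obj))
        return [np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum[sy:sy + n, sx:sx + n] * pupil)))**2
                for (sy, sx) in shifts]

    def fpm_reconstruct(images, shifts, pupil, N, iters=30):
        # Stitch the measurements into one high-resolution complex spectrum: for each patch,
        # keep the current phase estimate, enforce the measured amplitude, and write it back.
        n = pupil.shape[0]
        spectrum = np.zeros((N, N), dtype=complex)
        spectrum[N // 2, N // 2] = 1.0                           # featureless starting guess
        for _ in range(iters):
            for img, (sy, sx) in zip(images, shifts):
                patch = spectrum[sy:sy + n, sx:sx + n] * pupil
                low = np.fft.ifft2(np.fft.ifftshift(patch))
                low = np.sqrt(img) * np.exp(1j * np.angle(low))  # amplitude replacement
                update = np.fft.fftshift(np.fft.fft2(low))
                spectrum[sy:sy + n, sx:sx + n] = (
                    spectrum[sy:sy + n, sx:sx + n] * (1 - pupil) + update * pupil)
        field = np.fft.ifft2(np.fft.ifftshift(spectrum))
        return np.abs(field)**2, np.angle(field)                 # recovered intensity and phase

    # Toy demonstration: a 256 x 256 object recovered from 81 simulated 64 x 64 images.
    N, n = 256, 64
    yy, xx = np.mgrid[0:N, 0:N] / N
    obj = (1 + 0.5 * np.cos(6 * np.pi * xx)) * np.exp(1j * np.pi * yy)  # amplitude and phase pattern
    pupil = circular_pupil(n, radius=n // 3)
    starts = np.linspace(0, N - n, 9).astype(int)                # 9 x 9 grid of overlapping patches
    shifts = [(sy, sx) for sy in starts for sx in starts]
    images = simulate_low_res(obj, shifts, pupil)
    intensity, phase = fpm_reconstruct(images, shifts, pupil, N)
    print(intensity.shape)                                       # (256, 256): wide field, high resolution

The point of the sketch is simply that each low-resolution measurement constrains one overlapping region of the object's spectrum, so the computer—rather than the lens—assembles the high-resolution intensity and phase.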

Yang explains that when we look at light from an object, we are only able to sense variations in intensity. But light varies in terms of both its intensity and its phase, which is related to the angle at which light is traveling.

"What this project has developed is a means of taking low-resolution images and managing to tease out both the intensity and the phase of the light field of the target sample," Yang says. "Using that information, you can actually correct for optical aberration issues that otherwise confound your ability to resolve objects well."

The very large field of view that the new system can image could be particularly useful for digital pathology applications, where the typical process of using a microscope to scan the entirety of a sample can take tens of minutes. Using FPM, a microscope does not need to scan over the various parts of a sample—the whole thing can be imaged all at once. Furthermore, because the system acquires a complete set of data about the light field, it can computationally correct errors—such as out-of-focus images—so samples do not need to be rescanned.

"It will take the same data and allow you to perform refocusing computationally," Yang says.

The researchers say that the new method could have wide applications not only in digital pathology but also in everything from hematology to wafer inspection to forensic photography. Zheng says the strategy could also be extended to other imaging methodologies, such as X-ray imaging and electron microscopy.

The paper is titled "Wide-field, high-resolution Fourier ptychographic microscopy." Along with Yang and Zheng, Caltech graduate student Roarke Horstmeyer is also a coauthor. The work was supported by a grant from the National Institutes of Health.

Kimm Fesenmaier

Seeing Snow in Space

Caltech helps capture the first image of a frosty planetary-disk region

Although it might seem counterintuitive, if you get far enough away from a smoldering young star, you can actually find snow lines—frosty regions where gases are able to freeze and coat dust grains. Astronomers believe that these snow lines are critical to the process of planet formation.

Now an international team of researchers, including Caltech's Geoffrey Blake, has used the Atacama Large Millimeter/submillimeter Array (ALMA) to capture the first image of a snow line around a Sun-like star. The findings appear in the current issue of Science Express.

"This first direct imaging of such internal chemical structures in an analog of the young solar nebula was made possible by the extraordinary sensitivity and resolution of the not-yet-completed ALMA and builds on decades of pioneering research in millimeter-wave interferometry at the Caltech Owens Valley Radio Observatory, by universities now part of the Combined Array for Research in Millimeter-wave Astronomy, and by the Harvard-Smithsonian Submillimeter Array," says Blake, a professor of cosmochemistry and planetary science and professor of chemistry at Caltech. "The role of these facilities, in research, in technology development, and in education, along the road to ALMA cannot be overstated."

Since different gases freeze at different distances from the star, snow lines are thought to exist as concentric rings of grains encased in the various frozen gases—a ring of grains coated with water ice, a ring of grains coated with carbon dioxide, and so on. They might speed up planet formation by providing a source of solid material and by coating and protecting dust grains that would normally collide with one another and break apart.

Earlier this year, Blake and his group used spectrometers onboard the Spitzer Space Telescope and Herschel Space Observatory to constrain the location of the water snow line in a star known as TW Hydrae. The star is of particular interest because it is the nearest example of a gas- and dust-rich protoplanetary disk that may show similarities to our own solar system at an age of only 10 million years.

Snow lines have escaped direct imaging up until this point because of the obscuring effect of the hot gases that exist above and below them. But thanks to work at the Harvard-Smithsonian Submillimeter Array and at Caltech, the team had a good idea of where to begin looking. Additionally, the lead authors of the new paper, Chunhua "Charlie" Qi (PhD '01), now of the Harvard-Smithsonian Center for Astrophysics, and Karin Öberg (BS '05), currently at Harvard University, figured out a novel way to trace the presence of frozen carbon monoxide—a trick that enabled them to use ALMA to chemically highlight TW Hydrae's carbon monoxide snow line.

"The images from ALMA spectacularly confirm the presence of snow lines in disks," Blake says. "We are eagerly looking forward to additional studies with the full ALMA telescope—especially those targeting less volatile species such as water and organics that are critical to habitability."

The paper is titled "Imaging of the CO snow line in a solar nebula analog." A full press release about the work can be found here.

Kimm Fesenmaier

Evidence for a Martian Ocean

Researchers at the California Institute of Technology (Caltech) have discovered evidence for an ancient delta on Mars where a river might once have emptied into a vast ocean.

This ocean, if it existed, could have covered much of Mars's northern hemisphere—stretching over as much as a third of the planet.

"Scientists have long hypothesized that the northern lowlands of Mars are a dried-up ocean bottom, but no one yet has found the smoking gun," says Mike Lamb, an assistant professor of geology at Caltech and a coauthor of the paper describing the results. The paper was published online in the July 12 issue of the Journal of Geophysical Research.

Although the new findings are far from proof of the existence of an ancient ocean, they provide some of the strongest support yet, says Roman DiBiase, a postdoctoral scholar at Caltech and lead author of the paper.

Most of the northern hemisphere of Mars is flat and at a lower elevation than the southern hemisphere, and thus appears similar to the ocean basins found on Earth. The border between the lowlands and the highlands would have been the coastline for the hypothetical ocean.

The Caltech team used new high-resolution images from the Mars Reconnaissance Orbiter (MRO) to study a 100-square-kilometer area that sits right on this possible former coastline. Previous satellite images have shown that this area—part of a larger region called Aeolis Dorsa, which is about 1,000 kilometers away from Gale Crater where the Curiosity rover is now roaming—is covered in ridge-like features called inverted channels.

These inverted channels form when coarse materials like large gravel and cobbles are carried along rivers and deposited at their bottoms, building up over time. After the river dries up, the finer material—such as smaller grains of clay, silt, and sand—around the river erodes away, leaving behind the coarser stuff. This remaining sediment appears as today's ridge-like features, tracing the former river system.

When looked at from above, the inverted channels appear to fan out, a configuration that suggests one of three possible origins: the channels could have once been a drainage system in which streams and creeks flowed down a mountain and converged to form a larger river; the water could have flowed in the other direction, creating an alluvial fan, in which a single river channel branches into multiple smaller streams and creeks; or the channels are actually part of a delta, which is similar to an alluvial fan except that the smaller streams and creeks empty into a larger body of water such as an ocean.

To figure out which of these scenarios was most likely, the researchers turned to satellite images taken by the HiRISE camera on MRO. By taking pictures from different points in its orbit, the spacecraft was able to make stereo images that have allowed scientists to determine the topography of the martian surface. The HiRISE camera can pick out features as tiny as 25 centimeters long on the surface, and the topographic data can distinguish changes in elevation at a resolution of 1 meter.

Using this data, the Caltech researchers analyzed the stratigraphic layers of the inverted channels, piecing together the history of how sediments were deposited along these ancient rivers and streams. The team was able to determine the slopes of the channels back when water was still coursing through them. Such slope measurements can reveal the direction of water flow—in this case, showing that the water was spreading out instead of converging, meaning the channels were part of an alluvial fan or a delta.

But the researchers also found evidence for an abrupt increase in slope of the sedimentary beds near the downstream end of the channels. That sort of steep slope is most common when a stream empties into a large body of water—suggesting that the channels are part of a delta and not an alluvial fan.

Scientists have discovered martian deltas before, but most are inside a geological boundary, like a crater. Water therefore would have most likely flowed into a lake enclosed by such a boundary and so did not provide evidence for an ocean.

But the newly discovered delta is not inside a crater or other confining boundary, suggesting that the water likely emptied into a large body of water like an ocean. "This is probably one of the most convincing pieces of evidence of a delta in an unconfined region—and a delta points to the existence of a large body of water in the northern hemisphere of Mars," DiBiase says. This large body of water could be the ocean that has been hypothesized to have covered a third of the planet. At the very least, the researchers say, the water would have covered the entire Aeolis Dorsa region, which spans about 100,000 square kilometers.

Of course, there are still other possible explanations. It is plausible, for instance, that at one time there was a confining boundary—such as a large crater—that was later erased, Lamb adds. But that would require a rather substantial geological process and would mean that the martian surface was more geologically active than has been previously thought.

The next step, the researchers say, is to continue exploring the boundary between the southern highlands and northern lowlands—the hypothetical ocean coastline—and analyze other sedimentary deposits to see if they yield more evidence for an ocean. 

"In our work and that of others—including the Curiosity rover—scientists are finding a rich sedimentary record on Mars that is revealing its past environments, which include rain, flowing water, rivers, deltas, and potentially oceans," Lamb says. "Both the ancient environments on Mars and the planet's sedimentary archive of these environments are turning out to be surprisingly Earth-like."

The title of the Journal of Geophysical Research paper is "Deltaic deposits at Aeolis Dorsa: Sedimentary evidence for a standing body of water on the northern plains of Mars." In addition to DiBiase and Lamb, the other authors of the paper are graduate students Ajay Limaye and Joel Scheingross, and Woodward Fischer, assistant professor of geobiology. This research was supported by the National Science Foundation, NASA, and Caltech.

Marcus Woo

New Research Sheds Light on M.O. of Unusual RNA Molecules

The genes that code for proteins—more than 20,000 in total—make up only about 1 percent of the complete human genome. That entire thing—not just the genes, but also genetic junk and all the rest—is coiled and folded up in any number of ways within the nucleus of each of our cells. Think, then, of the challenge that a protein or other molecule, like RNA, faces when searching through that material to locate a target gene.

Now a team of researchers, led by newly arrived biologist Mitchell Guttman of the California Institute of Technology (Caltech) and Kathrin Plath of UCLA, has figured out how some RNA molecules take advantage of their position within the three-dimensional mishmash of genomic material to home in on targets. The research appears in the current issue of Science Express.

The findings suggest a unique role for a class of RNAs, called lncRNAs, which Guttman and his colleagues at the Broad Institute of MIT and Harvard first characterized in 2009. Until then, these lncRNAs—short for long noncoding RNAs and pronounced "link RNAs"—had been largely overlooked because they lie in between the genes that code for proteins. Guttman and others have since shown that lncRNAs scaffold, or bring together and organize, key proteins involved in the packaging of genetic information to regulate gene expression—controlling cell fate in some stem cells, for example.

In the new work, the researchers found that lncRNAs can easily locate and bind to nearby genes. Then, with the help of proteins that reorganize genetic material, the molecules can pull in additional related genes and move to new sites, building up a "compartment" where many genes can be regulated all at once.

"You can now think about these lncRNAs as a way to bring together genes that are needed for common function into a single physical region and then regulate them as a set, rather than individually," Guttman says. "They are not just scaffolds of proteins but actual organizers of genes."

The new work focused on Xist, a lncRNA molecule that has long been known to be involved in turning off one of the two X chromosomes in female mammals (something that must happen in order for the genome to function properly). Quite a bit has been uncovered about how Xist achieves this silencing act. We know, for example, that it binds to the X chromosome; that it recruits a chromatin regulator to help it organize and modify the structure of the chromatin; and that certain distinct regions of the RNA are necessary to do all of this work. Despite this knowledge, it had been unknown at the molecular level how Xist actually finds its targets and spreads across the X chromosome.

To gain insight into that process, Guttman and his colleagues at the Broad Institute developed a method called RNA Antisense Purification (RAP) that, by sequencing DNA at high resolution, gave them a way to map out exactly where different lncRNAs go. Then, working with Plath's group at UCLA, they used their method to watch in high resolution as Xist was activated in undifferentiated mouse stem cells, and the process of X-chromosome silencing proceeded.

"That's where this got really surprising," Guttman says. "It wasn't that somehow this RNA just went everywhere, searching for its target. There was some method to its madness. It was clear that this RNA actually used its positional information to find things that were very far away from it in genome space, but all of those genes that it went to were really close to it in three-dimensional space."

Before Xist is activated, X-chromosome genes are all spread out. But, the researchers found, once Xist is turned on, it quickly pulls in genes, forming a cloud. "And it's not just that the expression levels of Xist get higher and higher," Guttman says. "It's that Xist brings in all of these related genes into a physical nuclear structure. All of these genes then occupy a single territory."

The researchers found that a specific region of Xist known as the A-repeat domain—a region already known to be vital for the lncRNA's ability to silence X-chromosome genes—is also needed to pull in all of the genes that it needs to silence. When the researchers deleted the domain, the X chromosome did not become inactivated, because the silencing compartment did not form.

One of the most exciting aspects of the new research, Guttman says, is that it has implications beyond just explaining how Xist works. "In our paper, we talk a lot about Xist, but these results are likely to be general to other lncRNAs," he says. He adds that the work provides one of the first direct pieces of evidence to explain what makes lncRNAs special. "LncRNAs, unlike proteins, really can use their genomic information—their context, their location—to act, to bring together targets," he says. "That makes them quite unique."  

The new paper is titled "The Xist lncRNA exploits three-dimensional genome architecture to spread across the X-chromosome." Along with Guttman and Plath, additional coauthors are Jesse M. Engreitz, Patrick McDonel, Alexander Shishkin, Klara Sirokman, Christine Surka, Sabah Kadri, Jeffrey Xing, Alon Goren, and Eric Lander of the Broad Institute of MIT and Harvard, as well as Amy Pandya-Jones of UCLA. The work was funded by an NIH Director's Early Independence Award, the National Human Genome Research Institute Centers of Excellence in Genomic Sciences, the California Institute for Regenerative Medicine, and funds from the Broad Institute and from UCLA's Eli and Edythe Broad Center for Regenerative Medicine and Stem Cell Research.

Kimm Fesenmaier

Psychology Influences Markets

When it comes to economics versus psychology, score one for psychology.

Economists argue that markets usually reflect rational behavior—that is, the dominant players in a market, such as the hedge-fund managers who make billions of dollars' worth of trades, almost always make well-informed and objective decisions. Psychologists, on the other hand, say that markets are not immune from human irrationality, whether that irrationality is due to optimism, fear, greed, or other forces.

Now, a new analysis published the week of July 1 in the online issue of the Proceedings of the National Academy of Sciences (PNAS) supports the latter case, showing that markets are indeed susceptible to psychological phenomena. "There's this tug-of-war between economics and psychology, and in this round, psychology wins," says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at the California Institute of Technology (Caltech) and the corresponding author of the paper.

Indeed, it is difficult to claim that markets are immune to apparent irrationality in human behavior. "The recent financial crisis really has shaken a lot of people's faith," Camerer says. Despite the faith of many that markets would organize allocations of capital in ways that are efficient, he notes, the government still had to bail out banks, and millions of people lost their homes.

In their analysis, the researchers studied an effect called partition dependence, in which breaking down—or partitioning—the possible outcomes of an event in great detail makes people think that those outcomes are more likely to happen. The reason, psychologists say, is that providing specific scenarios makes them more explicit in people's minds. "Whatever we're thinking about seems more likely," Camerer explains.

For example, if you are asked to predict the next presidential election, you may say that a Democrat has a 50/50 chance of winning and a Republican has a 50/50 chance of winning. But if you are asked about the odds that a particular candidate from each party might win—for example, Hillary Clinton versus Chris Christie—you are likely to envision one of them in the White House, causing you to overestimate his or her odds.

The researchers looked for this bias in a variety of prediction markets, in which people bet on future events. In these markets, participants buy and sell claims on specific outcomes, and the prices of those claims—as set by the market—reflect people's beliefs about how likely it is that each of those outcomes will happen. Say, for example, that the price for a claim that the Miami Heat will win 16 games during the NBA playoffs is $6.50 for a $10 return. That means that, in the collective judgment of the traders, Miami has a 65 percent chance of winning 16 games.
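
In code form, the conversion being done there is just a ratio (the function is ours, and the numbers are the article's example):

    def implied_probability(price, payout):
        # The market's implied probability is the price of the claim divided by its payout.
        return price / payout

    print(implied_probability(6.50, 10.00))  # 0.65, i.e., a 65 percent chance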

The researchers created two prediction markets via laboratory experiments and studied two others in the real world. In one lab experiment, which took place in 2006, volunteers traded claims on how many games an NBA team would win during the 2006 playoffs and how many goals a team would score in the 2006 World Cup. The volunteers traded claims on 16 teams each for the NBA playoffs and the World Cup.

In the basketball case, one group of volunteers was asked to bet on whether the Miami Heat would win 4–7 playoff games, 8–11 games, or some other range. Another group was given a range of 4–11 games, which combined the two intervals offered to the first group. Then, the volunteers traded claims on each of the intervals within their respective groups. As with all prediction markets, the price of a traded claim reflected the traders' estimations of whether the total number of games won by the Heat would fall within a particular range.

Economic theory says that the first group's perceived probability of the Heat winning 4–7 games and its perceived probability of winning 8–11 games should add up to a total close to the second group's perceived probability of the team winning 4–11 games. But when the researchers added up the numbers, they found instead that the first group's combined estimate for 4–7 and 8–11 games was higher than the second group's estimate for 4–11 games. All of this suggests that framing the possible outcomes in terms of more specific intervals caused people to think that those outcomes were more likely.
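
With made-up numbers purely for illustration (these are not the study's data), the bias shows up as a failure of additivity:

    # Group 1 traded the two narrow intervals; group 2 traded the single combined interval.
    p_4_to_7 = 0.40    # hypothetical implied probability of winning 4-7 games
    p_8_to_11 = 0.35   # hypothetical implied probability of winning 8-11 games
    p_4_to_11 = 0.60   # hypothetical implied probability of winning 4-11 games

    # Standard probability theory says the two narrow intervals should sum to the wide one;
    # partition dependence predicts the sum will come out higher, as it did in the experiment.
    print(p_4_to_7 + p_8_to_11 > p_4_to_11)  # True: 0.75 > 0.60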

The researchers observed similar results in a second, similar lab experiment, and in two studies of natural markets—one involving a series of 153 prediction markets run by Deutsche Bank and Goldman Sachs, and another involving long-shot horses in horse races.

People tend to bet more money on a long-shot horse, because of its higher potential payoff, and they also tend to overestimate the chance that such a horse will win. Statistically, however, a horse's chance of winning a particular race is the same regardless of how many other horses it's racing against—a horse that habitually wins just five percent of the time will continue to do so whether it is racing against fields of 5 or of 11. But when the researchers looked at horse-race data from 1992 through 2001—a total of 6.3 million starts—they found that bettors were subject to the partition bias, believing that long-shot horses had higher odds of winning when they were racing against fewer horses.

While partition dependence has been looked at in the past in specific lab experiments, it hadn't been studied in prediction markets, Camerer says. What makes this particular analysis powerful is that the researchers observed evidence for this phenomenon in a wide range of studies—short, well-controlled laboratory experiments; markets involving intelligent, well-informed traders at major financial institutions; and nine years of horse-racing data.

The title of the PNAS paper is "How psychological framing affects economic market prices in the lab and field." In addition to Camerer, the other authors are Ulrich Sonnemann and Thomas Langer at the University of Münster, Germany, and Craig Fox at UCLA. Their research was supported by the German Research Foundation, the National Science Foundation, the Gordon and Betty Moore Foundation, and the Human Frontier Science Program.

Marcus Woo