Spirals of Light May Lead to Better Electronics

A group of researchers at the California Institute of Technology (Caltech) has created the optical equivalent of a tuning fork—a device that can help steady the electrical currents needed to power high-end electronics and stabilize the signals of high-quality lasers. The work marks the first time that such a device has been miniaturized to fit on a chip and may pave the way to improvements in high-speed communications, navigation, and remote sensing.

"When you're tuning a piano, a tuning fork gives a standardized pitch, or reference sound frequency; in optical resonators the 'pitch' corresponds to the color, or wavelength, of the light. Our device provides a consistent light frequency that improves both optical and electronic devices when it is used as a reference," says Kerry Vahala, Ted and Ginger Jenkins Professor of Information Science and Technology and Applied Physics. Vahala is also executive officer for applied physics and materials science and an author on the study describing this new work, published in the journal Nature Communications.

A good tuning fork controls the release of its acoustical energy, ringing just one pitch at a particular sound frequency for a long time; this sustaining property is called the quality factor. Vahala and his colleagues transferred this concept to their optical resonator, focusing on the optical quality factor and other elements that affect frequency stability.

The researchers were able to stabilize the light's frequency by developing a silica glass chip resonator with a specially designed path for the photons in the shape of what is called an Archimedean spiral. "Using this shape allows the longest path in the smallest area on a chip. We knew that if we made the photons travel a longer path, the whole device would become more stable," says Hansuek Lee, a senior researcher in Vahala's lab and lead author on the paper.
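The geometric claim is easy to check with a rough calculation. For an Archimedean spiral r = bθ the turns are evenly spaced, so the path length grows with the square of the number of turns while the footprint is set only by the outer radius. The sketch below uses illustrative numbers, not figures from the paper, and integrates the arc length numerically:

```python
import math

def spiral_length(b, theta_max, steps=100_000):
    """Numerically integrate the arc length of an Archimedean spiral
    r = b * theta from theta = 0 to theta_max (midpoint rule)."""
    dtheta = theta_max / steps
    total = 0.0
    for i in range(steps):
        theta = (i + 0.5) * dtheta
        r = b * theta
        # ds = sqrt(r^2 + (dr/dtheta)^2) dtheta, and dr/dtheta = b
        total += math.sqrt(r * r + b * b) * dtheta
    return total

# Illustrative numbers (not from the paper): reach a 12 mm outer
# radius, about the footprint of a quarter, after 27 turns.
turns = 27
theta_max = 2 * math.pi * turns
b = 0.012 / theta_max  # meters of radius gained per radian
print(spiral_length(b, theta_max))  # about 1.02 m of path
```

With an outer radius of about 12 mm, a modest 27 turns already packs roughly a meter of optical path into a coin-sized footprint, consistent with the scale the article describes.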

Frequency instability stems from energy surges within the optical resonator—which are unavoidable due to the laws of thermodynamics. Because the new resonator has a longer path, the energy changes are diluted, so the power surges are dampened—greatly improving the consistency and quality of the resonator's reference signal, which, in turn, improves the quality of the electronic or optical device.

In the new design, photons are fed into an outer ring of the spiraled resonator through a tiny optical fiber; the photons subsequently travel around four interwoven Archimedean spirals, ultimately closing the path after traveling more than a meter in an area about the size of a quarter—a journey 100 times longer than achieved in previous designs. The resonator is paired with a specially designed light guide that loses 100 times less energy than the average chip-based device.

In addition to its use as a frequency reference for lasers, a reference cavity could one day play a role equivalent to that of the ubiquitous quartz crystal in electronics. Most electronics systems use a device called an oscillator to provide power at very precise frequencies. In the past several years, optical-based oscillators—which require optical reference cavities—have become better than electronic oscillators at delivering stable microwave and radio frequencies. While these optical oscillators are currently too large for use in small electronics, there is an effort under way to miniaturize their key subcomponents—like Vahala's chip-based reference cavity.

"A miniaturized optical oscillator will represent a shift in the traditional roles of photonics and electronics. Currently, electronics perform signal processing while photonics rule in transporting information from one place to another over fiber-optic cable. Eventually, oscillators in high-performance electronics systems, while outwardly appearing to be electronic devices, will internally be purely optical," Vahala says.

"The technology that Kerry and his group have introduced opens a new avenue to move precision optical frequency sources out of the lab and onto a compact, robust and integrable silicon-based platform," says Scott Diddams, physicist and project leader at the National Institute of Standards and Technology, recent Moore Distinguished Scholar at Caltech and a coauthor on the study. "It opens up many new and unexplored options for building systems that could have greater impact to 'real-world' applications," Diddams says.

The paper, titled "Spiral resonators for on-chip laser frequency stabilization," was published online in Nature Communications on September 17. Other Caltech coauthors on the study include graduate students Myoung Gyun Suh and Tong Chen (PhD '13), and postdoctoral scholar Jiang Li (PhD '13). The project was in collaboration with Caltech startup company hQphotonics. This work was funded by the Defense Advanced Research Projects Agency; Caltech's Kavli Nanoscience Institute; and the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center with support from the Gordon and Betty Moore Foundation.

New Gut Bacterium Discovered in Termite's Digestion of Wood

Caltech researchers find new species of microbe responsible for acetogenesis, an important process in termite nutrition.

When termites munch on wood, the small bits are delivered to feed a community of unique microbes living in their guts, and in a complex process involving multiple steps, these microbes turn the hard, fibrous material into a nutritious meal for the termite host. One key step uses hydrogen to convert carbon dioxide into organic carbon—a process called acetogenesis—but little is known about which gut bacteria play specific roles in the process. Utilizing a variety of experimental techniques, researchers from the California Institute of Technology (Caltech) have now discovered a previously unidentified bacterium—living on the surface of a larger microorganism in the termite gut—that may be responsible for most gut acetogenesis.

"In the termite gut, you have several hundred different species of microbes that live within a millimeter of one another. We know certain microbes are present in the gut, and we know microbes are responsible for certain functions, but until now, we didn't have a good way of knowing which microbes are doing what," says Jared Leadbetter, professor of environmental microbiology at Caltech, in whose laboratory much of the research was performed. He is also an author of a paper about the work published the week of September 16 in the online issue of the Proceedings of the National Academy of Sciences (PNAS).

Acetogenesis is the production of acetate (a source of nutrition for termites) from the carbon dioxide and hydrogen generated by gut protozoa as they break down decaying wood. In their study of "who is doing what and where," Leadbetter and his colleagues searched the entire pool of termite gut microbes to identify specific genes from organisms responsible for acetogenesis.
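The underlying chemistry is well established: acetogens reduce carbon dioxide with hydrogen to form acetate (written here as acetic acid), with the net stoichiometry

```latex
2\,\mathrm{CO_2} + 4\,\mathrm{H_2} \longrightarrow \mathrm{CH_3COOH} + 2\,\mathrm{H_2O}
```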

The researchers began by sifting through the microbes' RNA—genetic information that can provide a snapshot of the genes active at a certain point in time. Using RNA from the total pool of termite gut microbes, they searched for actively transcribed formate dehydrogenase (FDH) genes, known to encode a protein necessary for acetogenesis. Next, using a method called multiplex microfluidic digital polymerase chain reaction (digital PCR), the researchers sequestered the previously unstudied individual microbes into tiny compartments to identify the actual microbial species carrying each of the FDH genes. Some of the FDH genes were found in types of bacteria known as spirochetes—a previously predicted source of acetogenesis. Yet it appeared that these spirochetes alone could not account for all of the acetate produced in the termite gut.

Initially, the Caltech researchers were unable to identify the microorganism expressing the single most active FDH gene in the gut. However, the first authors on the study, Adam Rosenthal, a postdoctoral scholar in biology at Caltech, and Xinning Zhang (PhD '10, Environmental Science and Engineering), noticed that this gene was more abundant in the portion of the gut extract containing wood chunks and larger microbes, like protozoans. After analyzing the chunkier gut extract, they discovered that the single most active FDH gene was encoded by a previously unstudied species from a group of microbes known as the deltaproteobacteria. This was the first evidence that a substantial amount of acetate in the gut may be produced by a non-spirochete.

Because the genes from this deltaproteobacterium were found in the chunky particulate matter of the termite gut, the researchers thought that perhaps the newly identified microbe attaches to the surface of one of the chunks. To test this hypothesis, the researchers used a color-coded visualization method called hybridization chain reaction-fluorescent in situ hybridization, or HCR-FISH.

The technique—developed in the laboratory of Niles Pierce, professor of applied and computational mathematics and bioengineering at Caltech, and a coauthor on the PNAS study—allowed the researchers to simultaneously "paint" cells expressing both the active FDH gene and a gene identifying the deltaproteobacterium with different fluorescent colors. "The microfluidics experiment suggested that the two colors should be expressed in the same location and in the same tiny cell," Leadbetter says. And, indeed, they were. "Through this approach, we were able to actually see where the new deltaproteobacterium resided. As it turns out, the cells live on the surface of a very particular hydrogen-producing protozoan."

This association between the two organisms makes sense based on what is known about the complex food web of the termite gut, Leadbetter says. "Here you have a large eukaryotic single cell—a protozoan—which is making hydrogen as it degrades wood, and you have these much smaller hydrogen-consuming deltaproteobacteria attached to its surface," he says. "So, this new acetogenic bacterium is snuggled up to its source of hydrogen just as close as it can get."

This intimate relationship, Leadbetter says, might never have been discovered relying on phylogenetic inference—the standard method for matching a function to a specific organism. "Using phylogenetic inference, we say, 'We know a lot about this hypothetical organism's relatives, so without ever seeing the organism, we're going to make guesses about who it is related to,'" he says. "But with the techniques in this study, we found that our initial prediction was wrong. Importantly, we have been able to determine the specific organism responsible and its location, both of which appear to be extremely important in the consumption of hydrogen and turning it into a product the insect can use." These results not only identify a new source of acetogenesis in the termite gut—they also reveal the limitations of making predictions based exclusively on phylogenetic relationships.

Other Caltech coauthors on the paper titled "Localizing transcripts to single cells suggests an important role of uncultured deltaproteobacteria in the termite gut hydrogen economy," are graduate student Kaitlyn S. Lucey (environmental science and engineering), Elizabeth A. Ottesen (PhD '08, biology), graduate student Vikas Trivedi (bioengineering), and research scientist Harry M. T. Choi (PhD '10, bioengineering). This work was funded by the U.S. Department of Energy, the National Science Foundation, the National Institutes of Health, the Programmable Molecular Technology Center within the Beckman Institute at Caltech, a Donna and Benjamin M. Rosen Center Bioengineering scholarship, and the Center for Environmental Microbial Interactions at Caltech.

What Causes Some to Participate in Bubble Markets?

Caltech research shows neural underpinnings of financially risky behavior

During financial bubbles, such as the one centered on the U.S. housing market that triggered the Great Recession, some investors react differently than others. Some rush in, trying to "time" the market's rise and fall, while others play it safe and bow out. Ever wonder what accounts for such differences? New neuroeconomic research at the California Institute of Technology (Caltech) has found that the investors most likely to take a risk and fuel bubble markets are those with good "theory of mind" skills—those who are good at "putting themselves in others' shoes." They think the most about the motives behind prices and what other people in the market are likely to do next, but during bubble markets, that actually becomes risky behavior.

The finding is contrary to what some economists have suggested—that financial bubbles are driven by confusion or denial on the part of investors and traders.

"What we find is that the people who are most susceptible to bubbles are not just reckless traders getting caught up in a frenzy," says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at Caltech. "Instead, when there are unusual patterns in trading activity, these people are actually thinking a lot about what it means, and they're deciding to jump in."

Camerer is one of the principal investigators on a new paper describing the study and its results in the September 16 issue of the journal Neuron. The study was led by Benedetto De Martino, senior research fellow at Royal Holloway, University of London, while he was a postdoctoral scholar at Caltech.

An important message from the study, De Martino says, is that it shows "when we interact with complex modern institutions, like financial markets, the same neural computational mechanisms that have been extremely advantageous in our evolutionary history can turn against us, biasing our choices with potentially catastrophic effects." Indeed, theory of mind is typically considered a beneficial skill that can help an individual navigate everything from everyday social situations to emergency scenarios.

The findings center around two regions of the brain. One, called the ventromedial prefrontal cortex (vmPFC), can be thought of as "the brain's accountant" because it encodes value. The other, the dorsomedial prefrontal cortex (dmPFC), is strongly associated with theory of mind.

In the study, the researchers used functional magnetic resonance imaging (fMRI) to monitor blood flow in the brains of student participants as they interacted with replayed financial market experiments. Such blood flow is considered a proxy for brain activity. Each participant was given $60 and then served as an outside observer of a series of six trading sessions involving other traders; each trading session lasted 15 periods, and after each period the dividend for the traded asset decreased by $0.24. At various points during the trial, the students were asked to imagine that they were traders and to decide whether they would want to stick with their current holdings or buy or sell shares at the going price.

In half of the sessions, trading resulted in a bubble market in which the prices ended up significantly higher than the actual, or fundamental, value of the asset being traded. In the three other sessions, prices tracked fairly well with the fundamental value, and never exceeded it.
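The article does not spell out how the fundamental value was computed, but in the classic design of such experiments it is the expected sum of the remaining dividend payments, so it declines linearly to zero as the session ends. A minimal sketch under that assumption—the $0.24 figure comes from the setup described above, while the formula itself is an assumption, not taken from the paper:

```python
def fundamental_value(period, total_periods=15, expected_dividend=0.24):
    """Expected sum of the remaining dividend payments: with an
    expected dividend of $0.24 per period, the value starts at $3.60
    and falls by $0.24 each period, reaching $0.24 in period 15."""
    remaining = total_periods - period + 1  # periods still to pay out
    return round(remaining * expected_dividend, 2)

# A market price far above fundamental_value(period) signals a bubble.
print([fundamental_value(t) for t in (1, 8, 15)])  # [3.6, 1.92, 0.24]
```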

The researchers found that the formation of bubbles was linked to increased activity in the vmPFC, that "accounting" part of the brain that processes value judgments.

Next, they investigated whether the people who were more susceptible to participating in, or "riding," bubbles showed heightened activity in the same brain region. The answer? Yes—those who were willing to participate in the bubble market again displayed more activity in the vmPFC.

To further investigate the theory of mind connection, the researchers asked participants to take the well-known "mind in the eyes" test. The test challenges test takers to choose the word that best describes what various people are thinking or feeling, based solely on pictures of their eyes. The researchers found that study participants who scored highest on the test, and thus discerned the correct feelings most accurately, also showed stronger links between their portfolio values and activity in the dmPFC, one of the brain regions linked to theory of mind activity.

"The way we interpret this is that these people were thinking more about what was going on in the market and wondering why people were behaving the way they were," Camerer explains. "Normally, in everyday social encounters and in specialized professions, this kind of mind reading is useful to the individual. But in these markets, when prices are going crazy, these people think, 'Wow, I think I can figure these markets out. Let me buy and sell.' And that is usually going to contribute to the bubble's momentum and also cost them money."

One of the most innovative parts of the study involved using a new mathematical formula for detecting unusual activity in the trading market. Unlike normal markets in which the mathematical distribution of the arrival of "orders" (offers to buy or sell shares) follows a somewhat steady pattern, bubble markets display restlessness—with flurries of activity followed by lulls. The researchers looked to see if any brain regions showed signs of tracking this unusual distribution of orders during bubble markets. And they found a strong association with the dmPFC and vmPFC. Heightened activity in these prefrontal regions, the team suspects, is a sign that participants are more likely to ride the bubble market, perhaps because they subconsciously believe that there are insiders with extra information operating within the market.
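The paper's actual formula is not reproduced here, but the flurry-and-lull signature can be illustrated with a standard burstiness measure: the index of dispersion of order counts per interval (variance divided by mean), which is near 1 for steady, Poisson-like arrivals and well above 1 for bursty ones. A sketch with simulated order flow (the rates and interval sizes are invented for illustration):

```python
import random

def interval_count(rate, draws=100):
    """Number of orders arriving in one interval, where `rate` is the
    per-draw probability of an order."""
    return sum(random.random() < rate for _ in range(draws))

def dispersion(counts):
    """Index of dispersion (variance / mean): about 1 for steady,
    Poisson-like order arrival, well above 1 for flurry-and-lull flow."""
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var / mean

random.seed(0)
# Steady market: a constant average arrival rate in every interval.
steady = [interval_count(0.1) for _ in range(200)]
# Bubble-like market: each interval is either a lull or a flurry.
bursty = [interval_count(random.choice([0.02, 0.3])) for _ in range(200)]
print(dispersion(steady), dispersion(bursty))  # roughly 1 vs well above 1
```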

Another of the paper's senior authors, Peter Bossaerts, completed the work at Caltech and is now at the University of Utah. He explains: "It's a group illusion. When participants see the inconsistency in order flow, they think that there are people who know better in the marketplace and they make a game out of it. In reality, however, there is nothing to be gained because nobody knows better."

The research could eventually help in the design of better social and financial interventions to avoid the formation of bubbles in financial markets, as well as methods for individual traders and brokers to manage their trading better.

The Neuron paper is titled "In the Mind of the Market: Theory of Mind Biases Value Computation During Financial Bubbles." Along with Camerer, De Martino, and Bossaerts, additional Caltech coauthors are John O'Doherty, professor of psychology, and Debajyoti Ray, a graduate student in Computation and Neural Systems. The work was supported by a Sir Henry Wellcome Postdoctoral Fellowship, the Gordon and Betty Moore Foundation, and the Lipper Family Foundation.

Writer: 
Kimm Fesenmaier

Caltech-led WormBase Project Awarded $14.8 Million by NIH

Over the next five years, WormBase—a Caltech-led, multi-institutional effort to make genetic information about nematodes, or roundworms, freely available to the world—will receive $14.8 million in additional funding from the National Institutes of Health. As many as 1 million nematode species are thought to live on Earth, and many are pests or parasites that ravage crops and spread diseases. They also happen to share many genes that are found in humans. Therefore, the squirmy creatures are intensively researched by labs around the world.

The WormBase project began in 2000 with the original goal of creating an online clearinghouse for data related to the most widely studied nematode, the model organism Caenorhabditis elegans. The project's website (www.wormbase.org) now hosts genomic data for more than 50 nematodes as well as vast amounts of other experimental data. In fact, about 1,200 scientific papers are added to the searchable database every year. And with more than 1,000 laboratories currently registered as users, WormBase has become an invaluable tool for the biomedical and agricultural research communities.

"WormBase has made it much easier for bench researchers to access a lot of information that they need much more rapidly. That accelerates research," says Paul Sternberg, the Thomas Hunt Morgan Professor of Biology at Caltech and leader of the WormBase project. "It has enabled studies that just would not have been possible without all of this information being available in a single place."

The bioinformatics resource serves a number of different communities. One of those is the group of scientists working with less-studied nematode species. "A lot of the research comes from individual researchers studying a specific problem. They generate a lot of facts and observations along the way; that information, if you aggregate it, ends up being quite valuable and extensive," Sternberg explains. "So, WormBase collects those bits of information and stores them in one place. It ends up being more than the sum of the parts."

WormBase also serves basic biomedical researchers who have used the database to investigate everything from cancer genes and axon growth to aging and kidney disease. Finally, WormBase is helpful for those scientists who study disease-causing nematodes and those species that infect and wreak havoc on crops and livestock.

Going forward, the WormBase project hopes to help researchers understand in greater detail the mechanisms through which nematode genes work together in pathways by making certain types of data more accessible and richer. "We have the parts list," Sternberg says. "Now the question is, how do all of the parts really work together to make the intricate mechanisms?"

WormBase is also working with the organizers of similar databases for other model organisms—such as the fruit fly and the mouse—that share many genes with nematodes. They are trying to align the databases, in terms of the formats they use and their interfaces, so that researchers can easily search all of them. So, for example, researchers studying a human gene who want information about that gene's counterparts in the worm or fruit fly would be able to switch between the databases quickly and easily. "We are also sharing the development of text-mining software that allows us to extract information from papers more efficiently," Sternberg says.

WormBase is an international consortium led by Caltech. Current collaborators include the Ontario Institute for Cancer Research in Toronto, Canada, and the European Bioinformatics Institute in Cambridge, England. 

Writer: 
Kimm Fesenmaier

Team Led by Caltech Wins Second $10 Million Award for Research in Molecular Programming

During the past century, programmable technologies evolved from spinning gears and vacuum tubes to transistors and microchips. Now, a group of Caltech researchers and their colleagues at the University of Washington, Harvard University, and UC San Francisco are exploring how biologically important molecules—like DNA, RNA, and proteins—could be the next generation of programmable devices.

Erik Winfree, professor of computer science, computation, and neural systems, and bioengineering, along with collaborators at Caltech and the University of Washington, began the Molecular Programming Project (MPP) in 2008, as part of an NSF Expeditions in Computing award to develop practices for programming biomolecules—much like computer code—to perform designated functions. Over the past five years, the researchers have programmed DNA to carry out a number of tasks, from solving basic math and pattern-recognition problems to more mechanical tasks like programming RNA and DNA to selectively amplify fluorescent signals for biological microscopy. Through these initial experiments, the researchers have shown that it is possible to systematically encode specialized tasks within DNA molecules.

"Computer science gave us this idea that many tasks can actually be done with different types of devices," Winfree says. For example, a 19th-century cash register and a 21st-century computer can both be used to calculate sums, though they perform the same task very differently. At first glance, writing a computer program and programming a DNA molecule may seem like very different endeavors, but "each one provides a systematic way of implementing automated behaviors, and they are both based on similar principles of information technology," Winfree says.

Expanding the team to include five additional faculty who bring expertise in structural and dynamic DNA nanotechnology, synthetic biology, computer-aided design, programming languages, and compilers, Winfree and his colleagues recently received a second Expeditions in Computing award to take their work in molecular programming to the next level: from proof-of-principle demonstrations to putting the technology in the hands of users in biology, chemistry, physics, and materials science.

The researchers aim to use molecular programming to establish general-purpose, reliable, and easy-to-use methods for engineering complex nanoscale systems from biomolecules. In the hands of users, these methods could be used to create novel self-assembling electronic and optical devices, powerful nanoscale tools for the study of biology, and programmable molecular circuits for the diagnosis and treatment of disease. In one application, the researchers hope to program DNA molecules to carry out recognition and logical circuitry for exquisitely targeted drug delivery, thus reducing drug side effects and increasing efficacy.

Today, the largest synthetic molecular programs—human-designed sequences of the A, T, C, and G bases that make up DNA—contain on the order of 60,000 bases. "That's comparable to the amount of RAM memory in my first computer, a 1983 Apple II+," says Winfree. Designed systems in the future will only become more complex, a challenge that MPP researchers aim to tackle by approaching biological systems with something computer scientists call the abstraction hierarchy.

"In some sense computer science is the art of managing complexity, because you design things that have billions of components, but a single person simply cannot understand all the details and interactions," he says. "Abstraction is a way of hiding a component's details while making it easy to incorporate into higher-order components—which, themselves, can also be abstracted. For example, you don't need to know the details of a multiplication circuit in order to use it to make a circuit for factoring." In the molecular world, the task might be different—like transporting a molecular cargo to a designated location—but abstraction is still essential for combining simpler systems into larger ones to perform tasks of greater complexity.

"Over the next several decades, the MPP seeks to develop the principles and practice for a new engineering discipline that will enable the function of molecules to be programmed with the ease and rigor that computers are today, while achieving the sophistication, complexity, and robustness evident in the programmable DNA, RNA, and protein machinery of biology," says Niles Pierce, professor of applied and computational mathematics and bioengineering at Caltech and member of the MPP.

To integrate these fields, the MPP has brought together an interdisciplinary team of computer scientists, chemists, electrical engineers, physicists, roboticists, mathematicians, and bioengineers—all of whom have a strong research interest in the intersection of information, biology, and the molecular world. The team will explore the potential of molecular programming from many perspectives.

"Because of the diverse expertise that is required to work on these challenges, the participating students and faculty come from an unusual array of fields," Pierce says. "It's a lot of fun to be in a room with this group of people to see where the discussions lead."

The 2013 Expeditions award was granted for the proposal "Molecular Programming Architectures, Abstractions, Algorithms, and Applications." Winfree and Pierce are joined on the project by four other collaborators at Caltech: Jehoshua (Shuki) Bruck, Gordon and Betty Moore Professor of Computation and Neural Systems and Electrical Engineering; Richard Murray, Thomas E. and Doris Everhart Professor of Control and Dynamical Systems and Bioengineering; Lulu Qian, assistant professor of bioengineering; and Paul Rothemund, senior research associate in bioengineering, computing and mathematical sciences, and computation and neural systems. Other collaborators include Eric Klavins and Georg Seelig from the University of Washington, Peng Yin and William Shih from Harvard, and Shawn Douglas from UC San Francisco.

Caltech Researchers Synthesize Catalyst Important In Nitrogen Fixation

Inspired by an enzyme in soil microorganisms, researchers develop first synthetic iron-based catalyst for the conversion of nitrogen to ammonia.

As farming strategies have evolved to provide food for the world's growing population, the manufacture of nitrogen fertilizers through the conversion of atmospheric nitrogen to ammonia has taken on increased importance.

The industrial technique used to make these fertilizers employs a chemical reaction that mirrors that of a natural process—nitrogen fixation. Unfortunately, vast amounts of energy, in the form of high heat and pressure, are required to drive the reaction. Now, inspired by the natural processes that take place in nitrogen-fixing microorganisms, researchers at Caltech have synthesized an iron-based catalyst that allows for nitrogen fixation under much milder conditions.

In the early 20th century, scientists discovered a way to artificially produce ammonia for the manufacture of commercial fertilizers, through a nitrogen fixation technique called the Haber-Bosch process. Today, this process is used industrially to produce more than 130 million tons of ammonia annually. Microorganisms in the soil that live near the roots of certain plants can produce a similar amount of ammonia each year—but instead of using high heat and pressure, they benefit from enzyme catalysts, called nitrogenases, that convert nitrogen from the air into ammonia at room temperature and atmospheric pressure.
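Both routes carry out the same net reaction; the Haber-Bosch process drives it with an iron-based heterogeneous catalyst at high temperature and pressure:

```latex
\mathrm{N_2} + 3\,\mathrm{H_2} \rightleftharpoons 2\,\mathrm{NH_3}
```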

In work described in the September 5 issue of Nature, Caltech graduate students John Anderson and Jon Rittle, under the supervision of their research adviser Jonas Peters, Bren Professor of Chemistry and executive officer for chemistry, have developed the first molecular iron complex that catalyzes nitrogen fixation, modeling the natural enzymes found in nitrogen-fixing soil organisms. The research may eventually lead to the development of more environmentally friendly methods of ammonia production.

Natural nitrogenase enzymes, which prime inert atmospheric nitrogen for fixation through the addition of electrons and protons, generally contain two metals, molybdenum and iron. Over decades of research, this duality fueled debate about which metal is actually responsible for nitrogenase's catalytic activity. Because a few research groups had achieved modest success in synthesizing molybdenum-based molecular catalysts, many in the field believed that the debate had been settled. The discovery by Peters' group that synthetic iron complexes are also capable of this type of catalytic activity reopens the discussion.

This finding, along with a wealth of data from structural biologists, biochemists, and spectroscopists, suggests that it may be iron—and not molybdenum—that is the key player in the nitrogen fixation in natural enzymes. The iron catalyst discovered by Peters and his colleagues may also help unravel the mystery of how these enzymes perform this reaction at the molecular level.

"We've pursued this type of synthetic iron catalyst for about a decade, and have banged our heads against plenty of walls in the process. So have a lot of other very talented folks in my field, and some for much longer than a decade," Peters says.

The finding is a first for the field, but Peters says that their current iron-based catalyst has limitations—the Haber-Bosch process is still the industrial standard. "Now that we finally have an example that actually works, everyone wants to know: 'Can it be used to make ammonia more efficiently?' The simple answer, for now, is no. While we're delighted to finally have our hands on an iron fixation catalyst, it's pretty inefficient and dies quickly. But," he adds, "this catalyst is a really important advance for us; there is so much we will now be able to learn from it that we couldn't before."

Funding for the research outlined in the Nature paper, titled "Catalytic conversion of nitrogen to ammonia by an iron model complex," was provided by the National Institutes of Health and the Gordon and Betty Moore Foundation.

Frontpage Title: Researchers Synthesize Important Catalyst for Nitrogen Fixation

Made-to-Order Materials

Caltech engineers focus on the nano to create strong, lightweight materials

The lightweight skeletons of organisms such as sea sponges display a strength that far exceeds that of manmade products constructed from similar materials. Scientists have long suspected that the difference has to do with the hierarchical architecture of the biological materials—the way the silica-based skeletons are built up from different structural elements, some of which are measured on the scale of billionths of meters, or nanometers. Now engineers at the California Institute of Technology (Caltech) have mimicked such a structure by creating nanostructured, hollow ceramic scaffolds, and have found that the small building blocks, or unit cells, do indeed display remarkable strength and resistance to failure despite being more than 85 percent air.

"Inspired, in part, by hard biological materials and by earlier work by Toby Schaedler and a team from HRL Laboratories, Caltech, and UC Irvine on the fabrication of extremely lightweight microtrusses, we designed architectures with building blocks that are less than five microns long, meaning that they are not resolvable by the human eye," says Julia R. Greer, professor of materials science and mechanics at Caltech. "Constructing these architectures out of materials with nanometer dimensions has enabled us to decouple the materials' strength from their density and to fabricate so-called structural metamaterials which are very stiff yet extremely lightweight."

At the nanometer scale, solids have been shown to exhibit mechanical properties that differ substantially from those displayed by the same materials at larger scales. For example, Greer's group has shown previously that at the nanoscale, some metals are about 50 times stronger than usual, and some amorphous materials become ductile rather than brittle. "We are capitalizing on these size effects and using them to make real, three-dimensional structures," Greer says.

In an advance online publication of the journal Nature Materials, Greer and her students describe how the new structures were made and responded to applied forces.

The largest structure the team has fabricated thus far using the new method is a one-millimeter cube. Compression tests on the entire structure indicate that not only the individual unit cells but also the complete architecture can be endowed with unusually high strength, depending on the material. This suggests that the general fabrication technique the researchers developed could be used to produce lightweight, mechanically robust small-scale components such as batteries, interfaces, catalysts, and implantable biomedical devices.

Greer says the work could fundamentally shift the way people think about the creation of materials. "With this approach, we can really start thinking about designing materials backward," she says. "I can start with a property and say that I want something that has this strength or this thermal conductivity, for example. Then I can design the optimal architecture with the optimal material at the relevant size and end up with the material I wanted."

The team first digitally designed a lattice structure featuring repeating octahedral unit cells—a design that mimics the type of periodic lattice structure seen in diatoms. Next, the researchers used a technique called two-photon lithography to turn that design into a three-dimensional polymer lattice. Then they uniformly coated that polymer lattice with thin layers of the ceramic material titanium nitride (TiN) and removed the polymer core, leaving a ceramic nanolattice. The lattice is constructed of hollow struts with walls no thicker than 75 nanometers.

"We are now able to design exactly the structure that we want to replicate and then process it in such a way that it's made out of almost any material class we'd like—for example, metals, ceramics, or semiconductors—at the right dimensions," Greer says.

In a second paper, scheduled for publication in the journal Advanced Engineering Materials, Greer's group demonstrates that similar nanostructured lattices could be made from gold rather than a ceramic. "Basically, once you've created the scaffold, you can use whatever technique will allow you to deposit a uniform layer of material on top of it," Greer says.

In the Nature Materials work, the team tested the individual octahedral cells of the final ceramic lattice and found that they had an unusually high tensile strength. Despite being repeatedly subjected to stress, the lattice cells did not break, whereas a much larger, solid piece of TiN would break at much lower stresses. Typical ceramics fail because of flaws—the imperfections, such as holes and voids, that they contain. "We believe the greater strength of these nanostructured materials comes from the fact that when samples become sufficiently small, their potential flaws also become very small, and the probability of finding a weak flaw within them becomes very low," Greer says. So although structural mechanics would predict that a cellular structure made of TiN would be weak because it has very thin walls, she says, "we can effectively trick this law by reducing the thickness or the size of the material and by tuning its microstructure, or atomic configurations."

Additional coauthors on the Nature Materials paper, "Fabrication and Deformation of Three-Dimensional Hollow Ceramic Nanostructures," are Dongchan Jang, who recently completed a postdoctoral fellowship in Greer's lab, Caltech graduate student Lucas Meza, and Frank Greer, formerly of the Jet Propulsion Laboratory (JPL). The work was supported by funding from the Dow-Resnick Innovation Fund at Caltech, DARPA's Materials with Controlled Microstructural Architecture program, and the Army Research Office through the Institute for Collaborative Biotechnologies at Caltech. Some of the work was carried out at JPL under a contract with NASA, and the Kavli Nanoscience Institute at Caltech provided support and infrastructure.

The lead author on the Advanced Engineering Materials paper, "Design and Fabrication of Hollow Rigid Nanolattices Via Two-Photon Lithography," is Caltech graduate student Lauren Montemayor. Meza is a coauthor. In addition to support from the Dow-Resnick Innovation Fund, this work received funding from an NSF Graduate Research Fellowship.

Writer: Kimm Fesenmaier

A Home for the Microbiome

Caltech biologists identify, for the first time, a mechanism by which beneficial bacteria reside and thrive in the gastrointestinal tract

The human body is full of tiny microorganisms—hundreds to thousands of species of bacteria collectively called the microbiome, which are believed to contribute to a healthy existence. The gastrointestinal (GI) tract—and the colon in particular—is home to the largest concentration and highest diversity of bacterial species. But how do these organisms persist and thrive in a system that is constantly in flux due to foods and fluids moving through it? A team led by California Institute of Technology (Caltech) biologist Sarkis Mazmanian believes it has found the answer, at least in one common group of bacteria: a set of genes that promotes stable microbial colonization of the gut.

A study describing the researchers' findings was published as an advance online publication of the journal Nature on August 18.    

"By understanding how these microbes colonize, we may someday be able to devise ways to correct for abnormal changes in bacterial communities—changes that are thought to be connected to disorders like obesity, inflammatory bowel disease and autism," says Mazmanian, a professor of biology at Caltech whose work explores the link between human gut bacteria and health.

The researchers began their study by running a series of experiments to introduce a genus of microbes called Bacteroides to sterile, or germ-free, mice. Bacteroides, a group of bacteria comprising several dozen species, was chosen because it is one of the most abundant genera in the human microbiome, can be cultured in the lab (unlike most gut bacteria), and can be genetically modified to introduce specific mutations.

"Bacteroides are the only genus in the microbiome that fit these three criteria," Mazmanian says.

Lead author S. Melanie Lee (PhD '13), who was an MD/PhD student in Mazmanian's lab at the time of the research, first added a few different species of the bacteria to one mouse to see if they would compete with each other to colonize the gut. They appeared to peacefully coexist. Then, Lee colonized a mouse with one particular species, Bacteroides fragilis, and inoculated the mouse with the same exact species, to see if they would co-colonize the same host. To the researchers' surprise, the newly introduced bacteria could not maintain residence in the mouse's gut, despite the fact that the animal was already populated by the identical species.

"We know that this environment can house hundreds of species, so why the competition within the same species?" Lee says. "There certainly isn't a lack of space or nutrients, but this was an extremely robust and consistent finding when we tried to essentially 'super-colonize' the mice with one species."

To explain the results, Lee and the team developed what they called the "saturable niche hypothesis." The idea is that by saturating a specific habitat, the organism will effectively exclude others of the same species from occupying that niche. It will not, however, prevent other closely related species from colonizing the gut, because they have their own particular niches. A genetic screen revealed a set of previously uncharacterized genes—a system that the researchers dubbed commensal colonization factors (CCF)—that were both required and sufficient for species-specific colonization by B. fragilis.
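The saturable niche hypothesis can be pictured with a toy occupancy model: each species has its own finite pool of sites, so a same-species challenger finds the pool full while a different species colonizes unhindered. The site counts and strain labels below are invented for illustration.

```python
def colonize(niches, species, strain, count):
    """Fill free sites in a species-specific niche. Strains of the SAME
    species draw on one shared, finite pool; other species have their own."""
    pool = niches.setdefault(species, {"capacity": 100, "occupants": {}})
    free = pool["capacity"] - sum(pool["occupants"].values())
    settled = min(count, free)
    pool["occupants"][strain] = pool["occupants"].get(strain, 0) + settled
    return settled

niches = {}
# A resident B. fragilis strain saturates its species-specific niche...
first = colonize(niches, "B. fragilis", "resident", 150)
# ...so a later strain of the SAME species finds no room...
second = colonize(niches, "B. fragilis", "challenger", 50)
# ...while a DIFFERENT species colonizes its own niche unhindered.
other = colonize(niches, "B. vulgatus", "resident", 80)
print(first, second, other)  # → 100 0 80
```

This mirrors the experimental outcome: super-colonization with the identical species fails even though unrelated species coexist easily.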

But what exactly is the saturable niche? The colon, after all, is filled with a flowing mass of food, fecal matter and bacteria, which doesn't offer much for organisms to grab onto and occupy.

"Melanie hypothesized that this saturable niche was part of the host tissue"—that is, of the gut itself—Mazmanian says. "When she postulated this three to four years ago, it was absolute heresy, because other researchers in the field believed that all bacteria in our intestines lived in the lumen—the center of the gut—and made zero contact with the host…our bodies. The rationale behind this thinking was if bacteria did make contact, it would cause some sort of immune response."

Nonetheless, when the researchers used advanced imaging approaches to survey colonic tissue in mice colonized with B. fragilis, they found a small population of microbes living in minuscule pockets—or crypts—in the colon. Nestled within the crypts, the bacteria are protected from the constant flow of material that passes through the GI tract. To test whether or not the CCF system regulated bacterial colonization within the crypts, the team injected mutant bacteria—without the CCF system—into the colons of sterile mice. Those bacteria were unable to colonize the crypts.

"There is something in that crypt—and we don't know what it is yet—that normal B. fragilis can use to get a foothold via the CCF system," Mazmanian explains. "Finding the crypts is a huge advance in the field because it shows that bacteria do physically contact the host. And during all of the experiments that Melanie did, homeostasis, or a steady state, was maintained. So, contrary to popular belief, there was no evidence of inflammation as a result of the bacteria contacting the host. In fact, we believe these crypts are the permanent home of Bacteroides, and perhaps other classes of microbes."

He says that by pinpointing the CCF system as a mechanism for bacterial colonization and resilience, in addition to the discovery of crypts in the colon that are species specific, the current paper has solved longstanding mysteries in the field about how microbes establish and maintain long-term colonization.

"We've studied only a handful of organisms, and though they are numerically abundant, they are clearly not representative of all the organisms in the gut," Lee says. "A lot of those other bacteria don't have CCF genes, so the question now is: Do those organisms somehow rely on interactions with Bacteroides for their own colonization, or their replication rates, or their localization?"

Suspecting that Bacteroides are keystone species—a necessary factor for building the gut ecosystem—the researchers next plan to investigate whether or not functional abnormalities, such as the inability to adhere to crypts, could affect the entire microbiome and potentially lead to a diseased state in the body.

"This research highlights the notion that we are not alone. We knew that bacteria are in our gut, but this study shows that specific microbes are very intimately associated with our bodies," Mazmanian says. "They are living in very close proximity to our tissues, and we can't ignore microbial contributions to our biology or our health. They are a part of us."

Funding for the research outlined in the Nature paper, titled "Bacterial colonization factors control specificity and stability of the gut microbiota," was provided by the National Institutes of Health and the Crohn's and Colitis Foundation of America. Additional coauthors were Gregory Donaldson and Silva Boyajian from Caltech and Zbigniew Mikulski and Klaus Ley from the La Jolla Institute for Allergy and Immunology in La Jolla, California.

Writer: Katie Neith
News Type: Research News

Caltech Team Produces Squeezed Light Using a Silicon Micromechanical System

One of the many counterintuitive and bizarre insights of quantum mechanics is that even in a vacuum—what many of us think of as an empty void—all is not completely still. Low levels of noise, known as quantum fluctuations, are always present. Always, that is, unless you can pull off a quantum trick. And that's just what a team led by researchers at the California Institute of Technology (Caltech) has done. The group has engineered a miniature silicon system that produces a type of light that is quieter at certain frequencies—meaning it has fewer quantum fluctuations—than what is usually present in a vacuum.

This special type of light with fewer fluctuations is known as squeezed light and is useful for making precise measurements at lower power levels than are required when using normal light. Although other research groups previously have produced squeezed light, the Caltech team's new system, which is miniaturized on a silicon microchip, generates the ultraquiet light in a way that can be more easily adapted to a variety of sensor applications.

"This system should enable a new set of precision microsensors capable of beating standard limits set by quantum mechanics," says Oskar Painter, a professor of applied physics at Caltech and the senior author on a paper that describes the system; the paper appears in the August 8 issue of the journal Nature. "Our experiment brings together, in a tiny microchip package, many aspects of work that has been done in quantum optics and precision measurement over the last 40 years."

The history of squeezed light is closely associated with Caltech. More than 30 years ago, Kip Thorne, Caltech's Richard P. Feynman Professor of Theoretical Physics, Emeritus, and physicist Carlton Caves (PhD '79) theorized that squeezed light would enable scientists to build more sensitive detectors that could make more precise measurements. A decade later, Caltech's Jeff Kimble, the William L. Valentine Professor and professor of physics, and his colleagues conducted some of the first experiments using squeezed light. Since then, the LIGO (Laser Interferometer Gravitational-Wave Observatory) Scientific Collaboration has invested heavily in research on squeezed light because of its potential to enhance the sensitivity of gravitational-wave detectors.

In the past, squeezed light has been made using so-called nonlinear materials, which have unusual optical properties. This latest Caltech work marks the first time that squeezed light has been produced using silicon, a standard material. "We work with a material that's very plain in terms of its optical properties," says Amir Safavi-Naeini (PhD '13), a graduate student in Painter's group and one of three lead authors on the new paper. "We make it special by engineering or punching holes into it, making these mechanical structures that respond to light in a very novel way. Of course, silicon is also a material that is technologically very amenable to fabrication and integration, enabling a great many applications in electronics."

In this new system, a waveguide feeds laser light into a cavity created by two tiny silicon beams. Once there, the light bounces back and forth a bit thanks to the engineered holes, which effectively turn the beams into mirrors. When photons—particles of light—strike the beams, they cause the beams to vibrate. And the particulate nature of the light introduces quantum fluctuations that affect those vibrations.

Typically, such fluctuations mean that in order to get a good reading of a signal, you would have to increase the power of the light to overcome the noise. But increasing the power introduces other problems, such as excess heat in the system.

Ideally, then, any measurements should be made with as low a power as possible. "One way to do that," says Safavi-Naeini, "is to use light that has less noise."

And that's exactly what the new system does; it has been engineered so that the light and beams interact strongly with each other—so strongly, in fact, that the beams impart the quantum fluctuations they experience back on the light. And, as is the case with the noise-canceling technology used, for example, in some headphones, the fluctuations that shake the beams interfere with the fluctuations of the light. They effectively cancel each other out, eliminating the noise in the light.

"This is a demonstration of what quantum mechanics really says: Light is neither a particle nor a wave; you need both explanations to understand this experiment," says Safavi-Naeini. "You need the particle nature of light to explain these quantum fluctuations, and you need the wave nature of light to understand this interference."

In the experiment, a detector measuring the noise in the light as a function of frequency showed that in a frequency range centered around 28 MHz, the system produces light with less noise than what is present in a vacuum—the standard quantum limit. "But one of the interesting things," Safavi-Naeini adds, "is that by carefully designing our structures, we can actually choose the frequency at which we go below the vacuum." Many signals are specific to a particular frequency range—a certain audio band in the case of acoustic signals, or, in the case of LIGO, a frequency intimately related to the dynamics of astrophysical objects such as circling black holes. Because the optical squeezing occurs near the mechanical resonance frequency where an individual device is most sensitive to external forces, this feature would enable the system studied by the Caltech team to be optimized for targeting specific signals.
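The frequency-targeted dip below the vacuum level can be pictured with a simple Lorentzian model. Only the 28 MHz center frequency comes from the article; the line shape, width, and squeezing depth below are illustrative stand-ins, not the paper's measured spectrum.

```python
def noise_psd(freq_mhz, center_mhz=28.0, width_mhz=0.5, depth=0.3):
    """Toy noise spectrum, normalized so the vacuum level is 1.0.
    A Lorentzian dip of the given fractional depth sits at the mechanical
    resonance, where the optomechanical squeezing is strongest."""
    detuning = freq_mhz - center_mhz
    lorentzian = width_mhz ** 2 / (detuning ** 2 + width_mhz ** 2)
    return 1.0 - depth * lorentzian

on_resonance = noise_psd(28.0)   # below the vacuum level: squeezed
far_away = noise_psd(100.0)      # back at the vacuum level
print(f"{on_resonance:.2f} {far_away:.4f}")  # → 0.70 1.0000
```

Shifting `center_mhz` in this picture is the design freedom Safavi-Naeini describes: the structure can be engineered so the sub-vacuum window lands on the signal band of interest.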

"This new way of 'squeezing light' in a silicon micro-device may provide new, significant applications in sensor technology," said Siu Au Lee, program officer at the National Science Foundation, which provided support for the work through the Institute for Quantum Information and Matter, a Physics Frontier Center. "For decades, NSF's Physics Division has been supporting basic research in quantum optics, precision measurements and nanotechnology that laid the foundation for today's accomplishments."

The paper is titled "Squeezed light from a silicon micromechanical resonator." Along with Painter and Safavi-Naeini, additional coauthors on the paper include current and former Painter-group researchers Jeff Hill (PhD '13), Simon Gröblacher (both lead authors on the paper with Safavi-Naeini), and Jasper Chan (PhD '12), as well as Markus Aspelmeyer of the Vienna Center for Quantum Science and Technology and the University of Vienna. The work was also supported by the Gordon and Betty Moore Foundation, by DARPA/MTO ORCHID through a grant from the Air Force Office of Scientific Research, and by the Kavli Nanoscience Institute at Caltech.

Writer: Kimm Fesenmaier

Figuring Out Flow Dynamics

Engineers gain insight into turbulence formation and evolution in fluids

Turbulence is all around us—in the patterns that natural gas makes as it swirls through a transcontinental pipeline or in the drag that occurs as a plane soars through the sky. Reducing such turbulence on, say, an airplane wing would cut down on the amount of power the plane has to put out just to get through the air, thereby saving fuel. But in order to reduce turbulence—a very complicated phenomenon—you need to understand it, a task that has proven to be quite a challenge.

Since 2006, Beverley McKeon, professor of aeronautics and associate director of the Graduate Aerospace Laboratories at the California Institute of Technology (Caltech), and collaborator Ati Sharma, a senior lecturer in aerodynamics and flight mechanics at the University of Southampton in the U.K., have been working together to build models of turbulent flow. Recently, they developed a new and improved way of looking at the composition of turbulence near walls, the type of flow that dominates our everyday life.

Their research could lead to significant fuel savings, as a large amount of energy is consumed by ships and planes, for example, to counteract turbulence-induced drag. Finding a way to reduce that turbulence by 30 percent would save the global economy billions of dollars in fuel costs and associated emissions annually, says McKeon, a coauthor of a study describing the new method published online in the Journal of Fluid Mechanics on July 8.

"This kind of turbulence is responsible for a large amount of the fuel that is burned to move humans, freight, and fluids such as water, oil, and natural gas, around the world," she says. "[Caltech physicist Richard] Feynman described turbulence as 'one of the last unsolved problems of classical physics,' so it is also a major academic challenge."

Wall turbulence develops when fluids—liquid or gas—flow past solid surfaces at anything but the slowest flow rates. Progress in understanding and controlling wall turbulence has been somewhat incremental because of the massive range of scales of motion involved—from the width of a human hair to the height of a multi-floor building in relative terms—says McKeon, who has been studying turbulence for 16 years. Her latest work, however, now provides a way of analyzing a large-scale flow by breaking it down into discrete, more easily analyzed bits. 

McKeon and Sharma devised a new method of looking at wall turbulence by reformulating the equations that govern the motion of fluids—called the Navier-Stokes equations—into an infinite set of smaller, simpler subequations, or "blocks." The blocks can simply be added together to reintroduce complexity and eventually recover the full equations, but the benefit comes in what can be learned without that full complexity. Calling the result from the analysis of each block a "response mode," the researchers have shown that commonly observed features of wall turbulence can be explained by superposing, or adding together, a very small number of these response modes—as few as three.

In 2010, McKeon and Sharma showed that analysis of these blocks can be used to reproduce some of the characteristics of the velocity field, like the tendency of wall turbulence to favor eddies of certain sizes and distributions. Now, the researchers also are using the method to capture coherent vortical structure, caused by the interaction of distinct, horseshoe-shaped spinning motions that occur in turbulent flow. Increasing the number of blocks included in an analysis increases the complexity with which the vortices are woven together, McKeon says. With very few blocks, things look a lot like the results of an extremely expensive, real-flow simulation or a full laboratory experiment, she says, but the mathematics are simple enough to be performed, mode-by-mode, on a laptop computer.
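The idea that a few response modes can capture most of a complex field has a generic low-rank analogue, sketched below. Orthogonal Fourier modes with decaying amplitudes stand in for the actual Navier-Stokes response modes; the amplitudes and mode count are invented for illustration.

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 512, endpoint=False)

# Synthetic "flow" built from 10 orthogonal modes with decaying amplitudes.
amps = [1.0 / k ** 2 for k in range(1, 11)]
field = sum(a * np.sin(k * x) for k, a in enumerate(amps, start=1))

# Superpose only the three largest modes.
approx = sum(a * np.sin(k * x) for k, a in enumerate(amps[:3], start=1))

rel_error = np.linalg.norm(field - approx) / np.linalg.norm(field)
print(f"relative error with 3 of 10 modes: {rel_error:.3f}")  # → 0.082
```

When the mode amplitudes decay quickly, three of ten modes already reproduce the field to within about 8 percent — the same flavor of result as reconstructing wall turbulence from "as few as three" response modes on a laptop.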

"We now have a low-cost way of looking at the 'skeleton' of wall turbulence," says McKeon, explaining that similar previous experiments required the use of a supercomputer. "It was surprising to find that turbulence condenses to these essential building blocks so easily. It's almost like discovering a lens that you can use to focus in on particular patterns in turbulence."

Using this lens helps to reduce the complexity of what the engineers are trying to understand, giving them a template that can be used to try to visually—and mathematically—identify order from flows that may appear to be chaotic, she says. Scientists had proposed the existence of some of the patterns based on observations of real flows; using the new technique, these patterns now can be derived mathematically from the governing equations, allowing researchers to verify previous models of how turbulence works and improve upon those ideas.

Understanding how the formulation can capture the skeleton of turbulence, McKeon says, will allow the researchers to modify turbulence in order to control flow and, for example, reduce drag or noise.

"Imagine being able to shape not just an aircraft wing but the characteristics of the turbulence in the flow over it to optimize aircraft performance," she says. "It opens the doors for entirely new capabilities in vehicle performance that may reduce the consumption of even renewable or non-fossil fuels."

Funding for the research outlined in the Journal of Fluid Mechanics paper, titled "On coherent structure in wall turbulence," was provided by the Air Force Office of Scientific Research. The paper is the subject of a "Focus on Fluids" feature article that will appear in an upcoming print issue of the same journal and was written by Joseph Klewicki of the University of New Hampshire. 

Writer: Katie Neith
