Caltech Team Produces Squeezed Light Using a Silicon Micromechanical System

One of the many counterintuitive and bizarre insights of quantum mechanics is that even in a vacuum—what many of us think of as an empty void—all is not completely still. Low levels of noise, known as quantum fluctuations, are always present. Always, that is, unless you can pull off a quantum trick. And that's just what a team led by researchers at the California Institute of Technology (Caltech) has done. The group has engineered a miniature silicon system that produces a type of light that is quieter at certain frequencies—meaning it has fewer quantum fluctuations—than what is usually present in a vacuum.

This special type of light with fewer fluctuations is known as squeezed light and is useful for making precise measurements at lower power levels than are required when using normal light. Although other research groups previously have produced squeezed light, the Caltech team's new system, which is miniaturized on a silicon microchip, generates the ultraquiet light in a way that can be more easily adapted to a variety of sensor applications.

"This system should enable a new set of precision microsensors capable of beating standard limits set by quantum mechanics," says Oskar Painter, a professor of applied physics at Caltech and the senior author on a paper that describes the system; the paper appears in the August 8 issue of the journal Nature. "Our experiment brings together, in a tiny microchip package, many aspects of work that has been done in quantum optics and precision measurement over the last 40 years."

The history of squeezed light is closely associated with Caltech. More than 30 years ago, Kip Thorne, Caltech's Richard P. Feynman Professor of Theoretical Physics, Emeritus, and physicist Carlton Caves (PhD '79) theorized that squeezed light would enable scientists to build more sensitive detectors that could make more precise measurements. A decade later, Caltech's Jeff Kimble, the William L. Valentine Professor and professor of physics, and his colleagues conducted some of the first experiments using squeezed light. Since then, the LIGO (Laser Interferometer Gravitational-Wave Observatory) Scientific Collaboration has invested heavily in research on squeezed light because of its potential to enhance the sensitivity of gravitational-wave detectors.

In the past, squeezed light has been made using so-called nonlinear materials, which have unusual optical properties. This latest Caltech work marks the first time that squeezed light has been produced using silicon, a standard material. "We work with a material that's very plain in terms of its optical properties," says Amir Safavi-Naeini (PhD '13), a graduate student in Painter's group and one of three lead authors on the new paper. "We make it special by engineering or punching holes into it, making these mechanical structures that respond to light in a very novel way. Of course, silicon is also a material that is technologically very amenable to fabrication and integration, enabling a great many applications in electronics."

In this new system, a waveguide feeds laser light into a cavity created by two tiny silicon beams. Once there, the light bounces back and forth a bit thanks to the engineered holes, which effectively turn the beams into mirrors. When photons—particles of light—strike the beams, they cause the beams to vibrate. And the particulate nature of the light introduces quantum fluctuations that affect those vibrations.

Typically, such fluctuations mean that in order to get a good reading of a signal, you would have to increase the power of the light to overcome the noise. But increasing the power also introduces other problems, such as excess heat in the system.

Ideally, then, any measurements should be made with as low a power as possible. "One way to do that," says Safavi-Naeini, "is to use light that has less noise."

And that's exactly what the new system does; it has been engineered so that the light and beams interact strongly with each other—so strongly, in fact, that the beams impart the quantum fluctuations they experience back on the light. And, as is the case with the noise-canceling technology used, for example, in some headphones, the fluctuations that shake the beams interfere with the fluctuations of the light. They effectively cancel each other out, eliminating the noise in the light.

"This is a demonstration of what quantum mechanics really says: Light is neither a particle nor a wave; you need both explanations to understand this experiment," says Safavi-Naeini. "You need the particle nature of light to explain these quantum fluctuations, and you need the wave nature of light to understand this interference."

In the experiment, a detector measuring the noise in the light as a function of frequency showed that in a frequency range centered around 28 MHz, the system produces light with less noise than what is present in a vacuum—the standard quantum limit. "But one of the interesting things," Safavi-Naeini adds, "is that by carefully designing our structures, we can actually choose the frequency at which we go below the vacuum." Many signals are specific to a particular frequency range—a certain audio band in the case of acoustic signals, or, in the case of LIGO, a frequency intimately related to the dynamics of astrophysical objects such as circling black holes. Because the optical squeezing occurs near the mechanical resonance frequency where an individual device is most sensitive to external forces, this feature would enable the system studied by the Caltech team to be optimized for targeting specific signals.
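To make "quieter than the vacuum at certain frequencies" concrete, the toy calculation below models a detected noise spectrum as the shot-noise level of 1 with a narrow Lorentzian dip centered on a chosen mechanical frequency. The 28 MHz center, dip width, and dip depth used here are illustrative placeholders rather than the team's measured values, and the Lorentzian shape is only a schematic stand-in for the real optomechanical response.

    import numpy as np

    # Toy model of a squeezing measurement: detected noise power relative to the
    # vacuum (shot-noise) level of 1.0, with a Lorentzian dip below 1 near the
    # mechanical resonance. All numbers are illustrative, not measured values.
    f = np.linspace(20e6, 36e6, 1601)        # detection frequencies (Hz)
    f0, width, depth = 28e6, 1e6, 0.05       # dip center, full width, fractional squeezing
    noise = 1.0 - depth * (width / 2) ** 2 / ((f - f0) ** 2 + (width / 2) ** 2)

    band = f[noise < 1.0 - depth / 2]        # frequencies within the half-depth band
    print(f"Squeezing band: {band.min() / 1e6:.1f} to {band.max() / 1e6:.1f} MHz")
    print(f"Peak squeezing: {-10 * np.log10(noise.min()):.2f} dB below shot noise")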

"This new way of 'squeezing light' in a silicon micro-device may provide new, significant applications in sensor technology," said Siu Au Lee, program officer at the National Science Foundation, which provided support for the work through the Institute for Quantum Information and Matter, a Physics Frontier Center. "For decades, NSF's Physics Division has been supporting basic research in quantum optics, precision measurements and nanotechnology that laid the foundation for today's accomplishments."

The paper is titled "Squeezed light from a silicon micromechanical resonator." Along with Painter and Safavi-Naeini, additional coauthors on the paper include current and former Painter-group researchers Jeff Hill (PhD '13), Simon Gröblacher (both lead authors on the paper with Safavi-Naeini), and Jasper Chan (PhD '12), as well as Markus Aspelmeyer of the Vienna Center for Quantum Science and Technology and the University of Vienna. The work was also supported by the Gordon and Betty Moore Foundation, by DARPA/MTO ORCHID through a grant from the Air Force Office of Scientific Research, and by the Kavli Nanoscience Institute at Caltech.

Writer: Kimm Fesenmaier

Figuring Out Flow Dynamics

Engineers gain insight into turbulence formation and evolution in fluids

Turbulence is all around us—in the patterns that natural gas makes as it swirls through a transcontinental pipeline or in the drag that occurs as a plane soars through the sky. Reducing such turbulence on, say, an airplane wing would cut down on the amount of power the plane has to put out just to get through the air, thereby saving fuel. But in order to reduce turbulence—a very complicated phenomenon—you need to understand it, a task that has proven to be quite a challenge.

Since 2006, Beverley McKeon, professor of aeronautics and associate director of the Graduate Aerospace Laboratories at the California Institute of Technology (Caltech), and collaborator Ati Sharma, a senior lecturer in aerodynamics and flight mechanics at the University of Southampton in the U.K., have been working together to build models of turbulent flow. Recently, they developed a new and improved way of looking at the composition of turbulence near walls, the type of flow that dominates our everyday life.

Their research could lead to significant fuel savings, as a large amount of energy is consumed by ships and planes, for example, to counteract turbulence-induced drag. Finding a way to reduce that turbulence by 30 percent would save the global economy billions of dollars in fuel costs and associated emissions annually, says McKeon, a coauthor of a study describing the new method published online in the Journal of Fluid Mechanics on July 8.

"This kind of turbulence is responsible for a large amount of the fuel that is burned to move humans, freight, and fluids such as water, oil, and natural gas, around the world," she says. "[Caltech physicist Richard] Feynman described turbulence as 'one of the last unsolved problems of classical physics,' so it is also a major academic challenge."

Wall turbulence develops when fluids—liquid or gas—flow past solid surfaces at anything but the slowest flow rates. Progress in understanding and controlling wall turbulence has been somewhat incremental because of the massive range of scales of motion involved—from the width of a human hair to the height of a multi-floor building in relative terms—says McKeon, who has been studying turbulence for 16 years. Her latest work, however, now provides a way of analyzing a large-scale flow by breaking it down into discrete, more easily analyzed bits. 

McKeon and Sharma devised a new method of looking at wall turbulence by reformulating the equations that govern the motion of fluids—called the Navier-Stokes equations—into an infinite set of smaller, simpler subequations, or "blocks," with the characteristic that they can be simply added together to introduce more complexity and eventually get back to the full equations. But the benefit comes in what can be learned without needing the complexity of the full equations. Calling the results from analysis of each one of those blocks a "response mode," the researchers have shown that commonly observed features of wall turbulence can be explained by superposing, or adding together, a very small number of these response modes, even as few as three. 
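The superposition idea can be illustrated with a toy calculation: build up a "velocity field" by adding together a handful of simple wave-like modes of different scales. The mode shapes and weights below are invented for illustration and are far simpler than the Navier-Stokes response modes McKeon and Sharma actually compute.

    import numpy as np

    # Toy illustration of mode superposition: sum a few simple wave-like "modes"
    # of different scales into one field u(x, y). The shapes and amplitudes are
    # invented; real response modes come from the Navier-Stokes-based analysis.
    x = np.linspace(0.0, 2.0 * np.pi, 200)   # streamwise direction
    y = np.linspace(0.0, 1.0, 100)           # wall-normal direction (wall at y = 0)
    X, Y = np.meshgrid(x, y)

    modes = [
        (1.00, 1, 1),   # (amplitude, streamwise wavenumber, wall-normal wavenumber)
        (0.40, 4, 2),
        (0.15, 10, 3),
    ]
    u = sum(a * np.sin(kx * X) * np.sin(np.pi * ky * Y) for a, kx, ky in modes)

    print("Field built from", len(modes), "modes; peak |u| =", round(float(np.abs(u).max()), 3))

Adding more modes layers in finer-scale structure, which mirrors the way including more "blocks" weaves the vortices together more intricately.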

In 2010, McKeon and Sharma showed that analysis of these blocks can be used to reproduce some of the characteristics of the velocity field, like the tendency of wall turbulence to favor eddies of certain sizes and distributions. Now, the researchers also are using the method to capture coherent vortical structure, caused by the interaction of distinct, horseshoe-shaped spinning motions that occur in turbulent flow. Increasing the number of blocks included in an analysis increases the complexity with which the vortices are woven together, McKeon says. With very few blocks, things look a lot like the results of an extremely expensive, real-flow simulation or a full laboratory experiment, she says, but the mathematics are simple enough to be performed, mode-by-mode, on a laptop computer.

"We now have a low-cost way of looking at the 'skeleton' of wall turbulence," says McKeon, explaining that similar previous experiments required the use of a supercomputer. "It was surprising to find that turbulence condenses to these essential building blocks so easily. It's almost like discovering a lens that you can use to focus in on particular patterns in turbulence."

Using this lens helps to reduce the complexity of what the engineers are trying to understand, giving them a template that can be used to try to visually—and mathematically—identify order from flows that may appear to be chaotic, she says. Scientists had proposed the existence of some of the patterns based on observations of real flows; using the new technique, these patterns now can be derived mathematically from the governing equations, allowing researchers to verify previous models of how turbulence works and improve upon those ideas.

Understanding how the formulation can capture the skeleton of turbulence, McKeon says, will allow the researchers to modify turbulence in order to control flow and, for example, reduce drag or noise.

"Imagine being able to shape not just an aircraft wing but the characteristics of the turbulence in the flow over it to optimize aircraft performance," she says. "It opens the doors for entirely new capabilities in vehicle performance that may reduce the consumption of even renewable or non-fossil fuels."

Funding for the research outlined in the Journal of Fluid Mechanics paper, titled "On coherent structure in wall turbulence," was provided by the Air Force Office of Scientific Research. The paper is the subject of a "Focus on Fluids" feature article that will appear in an upcoming print issue of the same journal and was written by Joseph Klewicki of the University of New Hampshire. 

Writer: Katie Neith

Pushing Microscopy Beyond Standard Limits

Caltech engineers show how to make cost-effective, ultra-high-performance microscopes

Engineers at the California Institute of Technology (Caltech) have devised a method to convert a relatively inexpensive conventional microscope into a billion-pixel imaging system that significantly outperforms the best available standard microscope. Such a system could greatly improve the efficiency of digital pathology, in which specialists need to review large numbers of tissue samples. By making it possible to produce robust microscopes at low cost, the approach also has the potential to bring high-performance microscopy capabilities to medical clinics in developing countries.

"In my view, what we've come up with is very exciting because it changes the way we tackle high-performance microscopy," says Changhuei Yang, professor of electrical engineering, bioengineering and medical engineering at Caltech.  

Yang is senior author on a paper that describes the new imaging strategy, which appears in the July 28 early online version of the journal Nature Photonics.

Until now, the physical limitations of microscope objectives—their optical lenses—have posed a challenge to improving conventional microscopes. Microscope makers tackle these limitations by using ever more complicated stacks of lens elements in microscope objectives to mitigate optical aberrations. Even with these efforts, the physical limitations have forced researchers to choose between high resolution and a small field of view on the one hand, or low resolution and a large field of view on the other. That has meant that scientists have either been able to see a lot of detail very clearly but only in a small area, or they have gotten a coarser view of a much larger area.

"We found a way to actually have the best of both worlds," says Guoan Zheng, lead author on the new paper and the initiator of this new microscopy approach from Yang's lab. "We used a computational approach to bypass the limitations of the optics. The optical performance of the objective lens is rendered almost irrelevant, as we can improve the resolution and correct for aberrations computationally."

Indeed, using the new approach, the researchers were able to improve the resolution of a conventional 2X objective lens to the level of a 20X objective lens. Therefore, the new system combines the field-of-view advantage of a 2X lens with the resolution advantage of a 20X lens. The final images produced by the new system contain 100 times more information than those produced by conventional microscope platforms. And building upon a conventional microscope, the new system costs only about $200 to implement.

"One big advantage of this new approach is the hardware compatibility," Zheng says, "You only need to add an LED array to an existing microscope. No other hardware modification is needed. The rest of the job is done by the computer."  

The new system acquires about 150 low-resolution images of a sample. Each image corresponds to one LED element in the LED array, so in the various images, light illuminates the sample from different, known directions. A novel computational approach, termed Fourier ptychographic microscopy (FPM), is then used to stitch together these low-resolution images to form the high-resolution intensity and phase information of the sample—a much more complete picture of the sample's entire light field.
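In rough outline, that stitching is an alternating-projection loop between the image and Fourier domains: for each LED, pull out the region of a high-resolution spectrum estimate that the corresponding illumination angle maps into the lens's passband, force the resulting low-resolution image to match the measured intensity, and write the updated region back. The sketch below shows that basic update under simplifying assumptions (a square pupil, known region centers, no aberration correction); the function name and arguments are invented here, and this is a schematic of the general Fourier ptychography idea rather than the authors' implementation.

    import numpy as np

    def fpm_reconstruct(measured, centers, n_hi, n_lo, n_iter=10):
        """Toy Fourier-ptychography loop (square pupil, no aberration correction).
        `measured` is a list of low-res intensity images; `centers` gives the
        (row, col) pixel in the high-res spectrum that each LED's illumination
        angle maps to. Returns a complex high-resolution object estimate."""
        spectrum = np.zeros((n_hi, n_hi), dtype=complex)   # centered (fftshifted) spectrum
        spectrum[n_hi // 2, n_hi // 2] = 1.0               # flat (constant-field) initial guess
        scale = (n_lo / n_hi) ** 2                         # energy normalization for cropping
        for _ in range(n_iter):
            for img, (r, c) in zip(measured, centers):
                r0, c0 = r - n_lo // 2, c - n_lo // 2
                patch = spectrum[r0:r0 + n_lo, c0:c0 + n_lo] * scale
                lowres = np.fft.ifft2(np.fft.ifftshift(patch))
                # Keep the current phase estimate; replace amplitude with the measurement.
                lowres = np.sqrt(img) * np.exp(1j * np.angle(lowres))
                spectrum[r0:r0 + n_lo, c0:c0 + n_lo] = np.fft.fftshift(np.fft.fft2(lowres)) / scale
        return np.fft.ifft2(np.fft.ifftshift(spectrum))

A realistic run would feed in the roughly 150 measured images, with neighboring Fourier regions overlapping substantially so the phase estimate can converge.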

Yang explains that when we look at light from an object, we are only able to sense variations in intensity. But light varies in terms of both its intensity and its phase, which is related to the angle at which light is traveling.

"What this project has developed is a means of taking low-resolution images and managing to tease out both the intensity and the phase of the light field of the target sample," Yang says. "Using that information, you can actually correct for optical aberration issues that otherwise confound your ability to resolve objects well."

The very large field of view that the new system can image could be particularly useful for digital pathology applications, where the typical process of using a microscope to scan the entirety of a sample can take tens of minutes. Using FPM, a microscope does not need to scan over the various parts of a sample—the whole thing can be imaged all at once. Furthermore, because the system acquires a complete set of data about the light field, it can computationally correct errors—such as out-of-focus images—so samples do not need to be rescanned.

"It will take the same data and allow you to perform refocusing computationally," Yang says.

The researchers say that the new method could have wide applications not only in digital pathology but also in everything from hematology to wafer inspection to forensic photography. Zheng says the strategy could also be extended to other imaging methodologies, such as X-ray imaging and electron microscopy.

The paper is titled "Wide-field, high-resolution Fourier ptychographic microscopy." Along with Yang and Zheng, Caltech graduate student Roarke Horstmeyer is also a coauthor. The work was supported by a grant from the National Institutes of Health.

Writer: Kimm Fesenmaier

Seeing Snow in Space

Caltech helps capture the first image of a frosty planetary-disk region

Although it might seem counterintuitive, if you get far enough away from a smoldering young star, you can actually find snow lines—frosty regions where gases are able to freeze and coat dust grains. Astronomers believe that these snow lines are critical to the process of planet formation.

Now an international team of researchers, including Caltech's Geoffrey Blake, has used the Atacama Large Millimeter/submillimeter Array (ALMA) to capture the first image of a snow line around a Sun-like star. The findings appear in the current issue of Science Express.

"This first direct imaging of such internal chemical structures in an analog of the young solar nebula was made possible by the extraordinary sensitivity and resolution of the not-yet-completed ALMA and builds on decades of pioneering research in millimeter-wave interferometry at the Caltech Owens Valley Radio Observatory, by universities now part of the Combined Array for Research in Millimeter-wave Astronomy, and by the Harvard-Smithsonian Submillimeter Array," says Blake, a professor of cosmochemistry and planetary science and professor of chemistry at Caltech. "The role of these facilities, in research, in technology development, and in education, along the road to ALMA cannot be overstated."

Since different gases freeze at different distances from the star, snow lines are thought to exist as concentric rings of grains encased in the various frozen gases—a ring of grains coated with water ice, a ring of grains coated with carbon dioxide, and so on. They might speed up planet formation by providing a source of solid material and by coating and protecting dust grains that would normally collide with one another and break apart.

Earlier this year, Blake and his group used spectrometers onboard the Spitzer Space Telescope and Herschel Space Observatory to constrain the location of the water snow line in a star known as TW Hydrae. The star is of particular interest because it is the nearest example of a gas- and dust-rich protoplanetary disk that may show similarities to our own solar system at an age of only 10 million years.

Snow lines have escaped direct imaging up until this point because of the obscuring effect of the hot gases that exist above and below them. But thanks to work at the Harvard-Smithsonian Submillimeter Array and at Caltech, the team had a good idea of where to begin looking. Additionally, the lead authors of the new paper, Chunhua "Charlie" Qi (PhD '01), now of the Harvard-Smithsonian Center for Astrophysics, and Karin Öberg (BS '05), currently at Harvard University, figured out a novel way to trace the presence of frozen carbon monoxide—a trick that enabled them to use ALMA to chemically highlight TW Hydrae's carbon monoxide snow line.

"The images from ALMA spectacularly confirm the presence of snow lines in disks," Blake says. "We are eagerly looking forward to additional studies with the full ALMA telescope—especially those targeting less volatile species such as water and organics that are critical to habitability."

The paper is titled "Imaging of the CO snow line in a solar nebula analog." A full press release about the work can be found here.

Writer: Kimm Fesenmaier

Evidence for a Martian Ocean

Researchers at the California Institute of Technology (Caltech) have discovered evidence for an ancient delta on Mars where a river might once have emptied into a vast ocean.

This ocean, if it existed, could have covered much of Mars's northern hemisphere—stretching over as much as a third of the planet.

"Scientists have long hypothesized that the northern lowlands of Mars are a dried-up ocean bottom, but no one yet has found the smoking gun," says Mike Lamb, an assistant professor of geology at Caltech and a coauthor of the paper describing the results. The paper was published online in the July 12 issue of the Journal of Geophysical Research.

Although the new findings are far from proof of the existence of an ancient ocean, they provide some of the strongest support yet, says Roman DiBiase, a postdoctoral scholar at Caltech and lead author of the paper.

Most of the northern hemisphere of Mars is flat and at a lower elevation than the southern hemisphere, and thus appears similar to the ocean basins found on Earth. The border between the lowlands and the highlands would have been the coastline for the hypothetical ocean.

The Caltech team used new high-resolution images from the Mars Reconnaissance Orbiter (MRO) to study a 100-square-kilometer area that sits right on this possible former coastline. Previous satellite images have shown that this area—part of a larger region called Aeolis Dorsa, which is about 1,000 kilometers away from Gale Crater where the Curiosity rover is now roaming—is covered in ridge-like features called inverted channels.

These inverted channels form when coarse materials like large gravel and cobbles are carried along rivers and deposited at their bottoms, building up over time. After the river dries up, the finer material—such as smaller grains of clay, silt, and sand—around the river erodes away, leaving behind the coarser stuff. This remaining sediment appears as today's ridge-like features, tracing the former river system.

When looked at from above, the inverted channels appear to fan out, a configuration that suggests one of three possible origins: the channels could have once been a drainage system in which streams and creeks flowed down a mountain and converged to form a larger river; the water could have flowed in the other direction, creating an alluvial fan, in which a single river channel branches into multiple smaller streams and creeks; or the channels are actually part of a delta, which is similar to an alluvial fan except that the smaller streams and creeks empty into a larger body of water such as an ocean.

To figure out which of these scenarios was most likely, the researchers turned to satellite images taken by the HiRISE camera on MRO. By taking pictures from different points in its orbit, the spacecraft was able to make stereo images that have allowed scientists to determine the topography of the martian surface. The HiRISE camera can pick out features as tiny as 25 centimeters long on the surface and the topographic data can distinguish changes in elevation at a resolution of 1 meter.

Using this data, the Caltech researchers analyzed the stratigraphic layers of the inverted channels, piecing together the history of how sediments were deposited along these ancient rivers and streams. The team was able to determine the slopes of the channels back when water was still coursing through them. Such slope measurements can reveal the direction of water flow—in this case, showing that the water was spreading out instead of converging, meaning the channels were part of an alluvial fan or a delta.

But the researchers also found evidence for an abrupt increase in slope of the sedimentary beds near the downstream end of the channels. That sort of steep slope is most common when a stream empties into a large body of water—suggesting that the channels are part of a delta and not an alluvial fan.

Scientists have discovered martian deltas before, but most are inside a geological boundary, like a crater. Water therefore would have most likely flowed into a lake enclosed by such a boundary and so did not provide evidence for an ocean.

But the newly discovered delta is not inside a crater or other confining boundary, suggesting that the water likely emptied into a large body of water like an ocean. "This is probably one of the most convincing pieces of evidence of a delta in an unconfined region—and a delta points to the existence of a large body of water in the northern hemisphere of Mars," DiBiase says. This large body of water could be the ocean that has been hypothesized to have covered a third of the planet. At the very least, the researchers say, the water would have covered the entire Aeolis Dorsa region, which spans about 100,000 square kilometers.

Of course, there are still other possible explanations. It is plausible, for instance, that at one time there was a confining boundary—such as a large crater—that was later erased, Lamb adds. But that would require a rather substantial geological process and would mean that the martian surface was more geologically active than has been previously thought.

The next step, the researchers say, is to continue exploring the boundary between the southern highlands and northern lowlands—the hypothetical ocean coastline—and analyze other sedimentary deposits to see if they yield more evidence for an ocean. 

"In our work and that of others—including the Curiosity rover—scientists are finding a rich sedimentary record on Mars that is revealing its past environments, which include rain, flowing water, rivers, deltas, and potentially oceans," Lamb says. "Both the ancient environments on Mars and the planet's sedimentary archive of these environments are turning out to be surprisingly Earth-like."

The title of the Journal of Geophysical Research paper is "Deltaic deposits at Aeolis Dorsa: Sedimentary evidence for a standing body of water on the northern plains of Mars." In addition to DiBiase and Lamb, the other authors of the paper are graduate students Ajay Limaye and Joel Scheingross, and Woodward Fischer, assistant professor of geobiology. This research was supported by the National Science Foundation, NASA, and Caltech.

Writer: Marcus Woo

New Research Sheds Light on M.O. of Unusual RNA Molecules

The genes that code for proteins—more than 20,000 in total—make up only about 1 percent of the complete human genome. That entire thing—not just the genes, but also genetic junk and all the rest—is coiled and folded up in any number of ways within the nucleus of each of our cells. Think, then, of the challenge that a protein or other molecule, like RNA, faces when searching through that material to locate a target gene.

Now a team of researchers, led by newly arrived biologist Mitchell Guttman of the California Institute of Technology (Caltech) and Kathrin Plath of UCLA, has figured out how some RNA molecules take advantage of their position within the three-dimensional mishmash of genomic material to home in on targets. The research appears in the current issue of Science Express.

The findings suggest a unique role for a class of RNAs, called lncRNAs, which Guttman and his colleagues at the Broad Institute of MIT and Harvard first characterized in 2009. Until then, these lncRNAs—short for long, noncoding RNAs and pronounced "link RNAs"—had been largely overlooked because they lie in between the genes that code for proteins. Guttman and others have since shown that lncRNAs scaffold, or bring together and organize, key proteins involved in the packaging of genetic information to regulate gene expression—controlling cell fate in some stem cells, for example.

In the new work, the researchers found that lncRNAs can easily locate and bind to nearby genes. Then, with the help of proteins that reorganize genetic material, the molecules can pull in additional related genes and move to new sites, building up a "compartment" where many genes can be regulated all at once.

"You can now think about these lncRNAs as a way to bring together genes that are needed for common function into a single physical region and then regulate them as a set, rather than individually," Guttman says. "They are not just scaffolds of proteins but actual organizers of genes."

The new work focused on Xist, a lncRNA molecule that has long been known to be involved in turning off one of the two X chromosomes in female mammals (something that must happen in order for the genome to function properly). Quite a bit has been uncovered about how Xist achieves this silencing act. We know, for example, that it binds to the X chromosome; that it recruits a chromatin regulator to help it organize and modify the structure of the chromatin; and that certain distinct regions of the RNA are necessary to do all of this work. Despite this knowledge, it had been unknown at the molecular level how Xist actually finds its targets and spreads across the X chromosome.

To gain insight into that process, Guttman and his colleagues at the Broad Institute developed a method called RNA Antisense Purification (RAP) that, by sequencing DNA at high resolution, gave them a way to map out exactly where different lncRNAs go. Then, working with Plath's group at UCLA, they used their method to watch in high resolution as Xist was activated in undifferentiated mouse stem cells, and the process of X-chromosome silencing proceeded.

"That's where this got really surprising," Guttman says. "It wasn't that somehow this RNA just went everywhere, searching for its target. There was some method to its madness. It was clear that this RNA actually used its positional information to find things that were very far away from it in genome space, but all of those genes that it went to were really close to it in three-dimensional space."

Before Xist is activated, X-chromosome genes are all spread out. But, the researchers found, once Xist is turned on, it quickly pulls in genes, forming a cloud. "And it's not just that the expression levels of Xist get higher and higher," Guttman says. "It's that Xist brings in all of these related genes into a physical nuclear structure. All of these genes then occupy a single territory."

The researchers found that a specific region of Xist known as the A-repeat domain—a segment already known to be vital for the lncRNA's ability to silence X-chromosome genes—is also needed to pull in all the genes that Xist must silence. When the researchers deleted the domain, the X chromosome did not become inactivated, because the silencing compartment did not form.

One of the most exciting aspects of the new research, Guttman says, is that it has implications beyond just explaining how Xist works. "In our paper, we talk a lot about Xist, but these results are likely to be general to other lncRNAs," he says. He adds that the work provides one of the first direct pieces of evidence to explain what makes lncRNAs special. "LncRNAs, unlike proteins, really can use their genomic information—their context, their location—to act, to bring together targets," he says. "That makes them quite unique."  

The new paper is titled "The Xist lncRNA exploits three-dimensional genome architecture to spread across the X-chromosome." Along with Guttman and Plath, additional coauthors are Jesse M. Engreitz, Patrick McDonel, Alexander Shishkin, Klara Sirokman, Christine Surka, Sabah Kadri, Jeffrey Xing, Alon Goren, and Eric Lander of the Broad Institute of Harvard and MIT, as well as Amy Pandya-Jones of UCLA. The work was funded by an NIH Director's Early Independence Award, the National Human Genome Research Institute Centers of Excellence in Genomic Sciences, the California Institute for Regenerative Medicine, and funds from the Broad Institute and from UCLA's Eli and Edythe Broad Center for Regenerative Medicine and Stem Cell Research.

Writer: Kimm Fesenmaier

Psychology Influences Markets

When it comes to economics versus psychology, score one for psychology.

Economists argue that markets usually reflect rational behavior—that is, the dominant players in a market, such as the hedge-fund managers who make billions of dollars' worth of trades, almost always make well-informed and objective decisions. Psychologists, on the other hand, say that markets are not immune from human irrationality, whether that irrationality is due to optimism, fear, greed, or other forces.

Now, a new analysis published the week of July 1 in the online issue of the Proceedings of the National Academy of Sciences (PNAS) supports the latter case, showing that markets are indeed susceptible to psychological phenomena. "There's this tug-of-war between economics and psychology, and in this round, psychology wins," says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at the California Institute of Technology (Caltech) and the corresponding author of the paper.

Indeed, it is difficult to claim that markets are immune to apparent irrationality in human behavior. "The recent financial crisis really has shaken a lot of people's faith," Camerer says. Despite the faith of many that markets would organize allocations of capital in ways that are efficient, he notes, the government still had to bail out banks, and millions of people lost their homes.

In their analysis, the researchers studied an effect called partition dependence, in which breaking down—or partitioning—the possible outcomes of an event in great detail makes people think that those outcomes are more likely to happen. The reason, psychologists say, is that providing specific scenarios makes them more explicit in people's minds. "Whatever we're thinking about seems more likely," Camerer explains.

For example, if you are asked to predict the next presidential election, you may say that a Democrat has a 50/50 chance of winning and a Republican has a 50/50 chance of winning. But if you are asked about the odds that a particular candidate from each party might win—for example, Hillary Clinton versus Chris Christie—you are likely to envision one of them in the White House, causing you to overestimate his or her odds.

The researchers looked for this bias in a variety of prediction markets, in which people bet on future events. In these markets, participants buy and sell claims on specific outcomes, and the prices of those claims—as set by the market—reflect people's beliefs about how likely it is that each of those outcomes will happen. Say, for example, that the price for a claim that the Miami Heat will win 16 games during the NBA playoffs is $6.50 for a $10 return. That means that, in the collective judgment of the traders, Miami has a 65 percent chance of winning 16 games.
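In other words, the implied probability is simply the claim's price divided by its payout. A minimal sketch of that conversion, using the hypothetical numbers above (the helper name implied_probability is invented here for illustration):

    def implied_probability(price, payout):
        # Probability implied by a claim bought at `price` that pays `payout` if the outcome occurs.
        return price / payout

    print(implied_probability(6.50, 10.00))  # 0.65, i.e., a 65 percent chance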

The researchers created two prediction markets via laboratory experiments and studied two others in the real world. In one lab experiment, which took place in 2006, volunteers traded claims on how many games an NBA team would win during the 2006 playoffs and how many goals a team would score in the 2006 World Cup. The volunteers traded claims on 16 teams each for the NBA playoffs and the World Cup.

In the basketball case, one group of volunteers was asked to bet on whether the Miami Heat would win 4–7 playoff games, 8–11 games, or some other range. Another group was given a range of 4–11 games, which combined the two intervals offered to the first group. Then, the volunteers traded claims on each of the intervals within their respective groups. As with all prediction markets, the price of a traded claim reflected the traders' estimations of whether the total number of games won by the Heat would fall within a particular range.

Economic theory says that the first group's perceived probability of the Heat winning 4–7 games and its perceived probability of winning 8–11 games should add up to a total close to the second group's perceived probability of the team winning 4–11 games. But when they added the numbers up, the researchers found instead that the first group judged the combined likelihood of the team winning either 4–7 or 8–11 games to be higher than the second group's estimate for winning 4–11 games. All of this suggests that framing the possible outcomes in terms of more specific intervals caused people to think that those outcomes were more likely.
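To see how that comparison works with concrete numbers, here is the same check with made-up implied probabilities (placeholders, not the study's data). Under standard probability theory the two fine-grained claims should sum to roughly the coarse claim; under partition dependence they sum to more:

    # Hypothetical implied probabilities, for illustration only (not the study's data).
    p_4_to_7 = 0.40    # group 1's claim: "Heat win 4-7 playoff games"
    p_8_to_11 = 0.35   # group 1's claim: "Heat win 8-11 playoff games"
    p_4_to_11 = 0.60   # group 2's claim: the combined "4-11 games" interval

    fine_grained_sum = p_4_to_7 + p_8_to_11
    print(round(fine_grained_sum, 2))       # 0.75
    print(fine_grained_sum > p_4_to_11)     # True: the partitioned outcomes are judged more likely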

The researchers observed similar results in a second, similar lab experiment, and in two studies of natural markets—one involving a series of 153 prediction markets run by Deutsche Bank and Goldman Sachs, and another involving long-shot horses in horse races.

People tend to bet more money on a long-shot horse, because of its higher potential payoff, and they also tend to overestimate the chance that such a horse will win. Statistically, however, a horse's chance of winning a particular race is the same regardless of how many other horses it's racing against—a horse that habitually wins just five percent of the time will continue to do so whether it is racing against fields of 5 or of 11. But when the researchers looked at horse-race data from 1992 through 2001—a total of 6.3 million starts—they found that bettors were subject to the partition bias, believing that long-shot horses had higher odds of winning when they were racing against fewer horses.

While partition dependence has been looked at in the past in specific lab experiments, it hadn't been studied in prediction markets, Camerer says. What makes this particular analysis powerful is that the researchers observed evidence for this phenomenon in a wide range of studies—short, well-controlled laboratory experiments; markets involving intelligent, well-informed traders at major financial institutions; and nine years of horse-racing data.

The title of the PNAS paper is "How psychological framing affects economic market prices in the lab and field." In addition to Camerer, the other authors are Ulrich Sonnemann and Thomas Langer at the University of Münster, Germany, and Craig Fox at UCLA. Their research was supported by the German Research Foundation, the National Science Foundation, the Gordon and Betty Moore Foundation, and the Human Frontier Science Program.

Writer: Marcus Woo

A Stepping-Stone for Oxygen on Earth

Caltech researchers find evidence of an early manganese-oxidizing photosystem

For most terrestrial life on Earth, oxygen is necessary for survival. But the planet's atmosphere did not always contain this life-sustaining substance, and one of science's greatest mysteries is how and when oxygenic photosynthesis—the process responsible for producing oxygen on Earth through the splitting of water molecules—first began. Now, a team led by geobiologists at the California Institute of Technology (Caltech) has found evidence of a precursor photosystem involving manganese that predates cyanobacteria, the first group of organisms to release oxygen into the environment via photosynthesis.  

The findings, outlined in the June 24 early edition of the Proceedings of the National Academy of Sciences (PNAS), strongly support the idea that manganese oxidation—which, despite the name, is a chemical reaction that does not have to involve oxygen—provided an evolutionary stepping-stone for the development of water-oxidizing photosynthesis in cyanobacteria.

"Water-oxidizing or water-splitting photosynthesis was invented by cyanobacteria approximately 2.4 billion years ago and then borrowed by other groups of organisms thereafter," explains Woodward Fischer, assistant professor of geobiology at Caltech and a coauthor of the study. "Algae borrowed this photosynthetic system from cyanobacteria, and plants are just a group of algae that took photosynthesis on land, so we think with this finding we're looking at the inception of the molecular machinery that would give rise to oxygen."

Photosynthesis is the process by which energy from the sun is used by plants and other organisms to split water and carbon dioxide molecules to make carbohydrates and oxygen. Manganese is required for water splitting to work, so when scientists began to wonder what evolutionary steps may have led up to an oxygenated atmosphere on Earth, they started to look for evidence of manganese-oxidizing photosynthesis prior to cyanobacteria. Since oxidation simply involves the transfer of electrons to increase the charge on an atom—and this can be accomplished using light or O2—it could have occurred before the rise of oxygen on this planet.

"Manganese plays an essential role in modern biological water splitting as a necessary catalyst in the process, so manganese-oxidizing photosynthesis makes sense as a potential transitional photosystem," says Jena Johnson, a graduate student in Fischer's laboratory at Caltech and lead author of the study.

To test the hypothesis that manganese-based photosynthesis occurred prior to the evolution of oxygenic cyanobacteria, the researchers examined drill cores (newly obtained by the Agouron Institute) from 2.415-billion-year-old South African marine sedimentary rocks with large deposits of manganese.

Manganese is soluble in seawater. Indeed, if there are no strong oxidants around to accept electrons from the manganese, it will remain aqueous, Fischer explains, but the second it is oxidized, or loses electrons, manganese precipitates, forming a solid that can become concentrated within seafloor sediments.
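For readers who want the chemistry spelled out, the oxidation step Fischer describes can be written as a textbook half-reaction in which dissolved Mn(II) gives up two electrons and precipitates as an oxide (shown here for MnO2 as a representative product; the particular mineral phases preserved in the rocks may differ):

    \mathrm{Mn^{2+} + 2\,H_2O \;\longrightarrow\; MnO_2(s) + 4\,H^{+} + 2\,e^{-}}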

"Just the observation of these large enrichments—16 percent manganese in some samples—provided a strong implication that the manganese had been oxidized, but this required confirmation," he says.

To prove that the manganese was originally part of the South African rock and not deposited there later by hydrothermal fluids or some other phenomena, Johnson and colleagues developed and employed techniques that allowed the team to assess the abundance and oxidation state of manganese-bearing minerals at a very tiny scale of 2 microns.

"And it's warranted—these rocks are complicated at a micron scale!" Fischer says. "And yet, the rocks occupy hundreds of meters of stratigraphy across hundreds of square kilometers of ocean basin, so you need to be able to work between many scales—very detailed ones, but also across the whole deposit to understand the ancient environmental processes at work."

Using these multiscale approaches, Johnson and colleagues demonstrated that the manganese was original to the rocks and first deposited in sediments as manganese oxides, and that manganese oxidation occurred over a broad swath of the ancient marine basin during the entire timescale captured by the drill cores.

"It's really amazing to be able to use X-ray techniques to look back into the rock record and use the chemical observations on the microscale to shed light on some of the fundamental processes and mechanisms that occurred billions of years ago," says Samuel Webb, coauthor on the paper and beam line scientist at the SLAC National Accelerator Laboratory at Stanford University, where many of the study's experiments took place. "Questions regarding the evolution of the photosynthetic pathway and the subsequent rise of oxygen in the atmosphere are critical for understanding not only the history of our own planet, but also the basics of how biology has perfected the process of photosynthesis."

Once the team confirmed that the manganese had been deposited as an oxide phase when the rock was first forming, they checked to see if these manganese oxides were actually formed before water-splitting photosynthesis or if they formed after as a result of reactions with oxygen. They used two different techniques to check whether oxygen was present. It was not—proving that water-splitting photosynthesis had not yet evolved at that point in time. The manganese in the deposits had indeed been oxidized and deposited before the appearance of water-splitting cyanobacteria. This implies, the researchers say, that manganese-oxidizing photosynthesis was a stepping-stone for oxygen-producing, water-splitting photosynthesis.

"I think that there will be a number of additional experiments that people will now attempt to try and reverse engineer a manganese photosynthetic photosystem or cell," Fischer says. "Once you know that this happened, it all of a sudden gives you reason to take more seriously an experimental program aimed at asking, 'Can we make a photosystem that's able to oxidize manganese but doesn't then go on to split water? How does it behave, and what is its chemistry?' Even though we know what modern water splitting is and what it looks like, we still don't know exactly how it works. There is still a major discovery to be made to find out exactly how the catalysis works, and now knowing where this machinery comes from may open new perspectives into its function—an understanding that could help target technologies for energy production from artificial photosynthesis. "

Next up in Fischer's lab, Johnson plans to work with others to try to mutate a cyanobacterium to "go backwards" and perform manganese-oxidizing photosynthesis. The team also plans to investigate a set of rocks from western Australia that are similar in age to the samples used in the current study and may also contain beds of manganese. If their current study results are truly an indication of manganese-oxidizing photosynthesis, they say, there should be evidence of the same processes in other parts of the world.

"Oxygen is the backdrop on which this story is playing out on, but really, this is a tale of the evolution of this very intense metabolism that happened once—an evolutionary singularity that transformed the planet," Fischer says. "We've provided insight into how the evolution of one of these remarkable molecular machines led up to the oxidation of our planet's atmosphere, and now we're going to follow up on all angles of our findings."

Funding for the research outlined in the PNAS paper, titled "Manganese-oxidizing photosynthesis before the rise of cyanobacteria," was provided by the Agouron Institute, NASA's Exobiology Branch, the David and Lucile Packard Foundation, and the National Science Foundation Graduate Research Fellowship program. Joseph Kirschvink, Nico and Marilyn Van Wingen Professor of Geobiology at Caltech, also contributed to the study along with Katherine Thomas and Shuhei Ono from the Massachusetts Institute of Technology.

Writer: Katie Neith

Beauty and the Brain: Electrical Stimulation of the Brain Makes You Perceive Faces as More Attractive

Findings may lead to promising ways to treat and study neuropsychiatric disorders

Beauty is in the eye of the beholder, and—as researchers have now shown—in the brain as well.

The researchers, led by scientists at the California Institute of Technology (Caltech), have used a well-known, noninvasive technique to electrically stimulate a specific region deep inside the brain previously thought to be inaccessible. The stimulation, the scientists say, caused volunteers to judge faces as more attractive than before their brains were stimulated.

Being able to effect such behavioral changes means that this electrical stimulation tool could be used to noninvasively manipulate deep regions of the brain—and, therefore, that it could serve as a new approach to study and treat a variety of deep-brain neuropsychiatric disorders, such as Parkinson's disease and schizophrenia, the researchers say.

"This is very exciting because the primary means of inducing these kinds of deep-brain changes to date has been by administering drug treatments," says Vikram Chib, a postdoctoral scholar who led the study, which is being published in the June 11 issue of the journal Translational Psychiatry. "But the problem with drugs is that they're not location-specific—they act on the entire brain." Thus, drugs may carry unwanted side effects or, occasionally, won't work for certain patients—who then may need invasive treatments involving the implantation of electrodes into the brain.

So Chib and his colleagues turned to a technique called transcranial direct-current stimulation (tDCS), which, Chib notes, is cheap, simple, and safe. In this method, an anode and a cathode are placed at two different locations on the scalp. A weak electrical current—which can be powered by a nine-volt battery—runs from the cathode, through the brain, and to the anode. The electrical current is a mere 2 milliamps—10,000 times less than the 20 amps typically available from wall sockets. "All you feel is a little bit of tingling, and some people don't even feel that," he says.

"There have been many studies employing tDCS to affect behavior or change local neural activity," says Shinsuke Shimojo, the Gertrude Baltimore Professor of Experimental Psychology and a coauthor of the paper. For example, the technique has been used to treat depression and to help stroke patients rehabilitate their motor skills. "However, to our knowledge, virtually none of the previous studies actually examined and correlated both behavior and neural activity," he says. These studies also targeted the surface areas of the brain—not much more than a centimeter deep—which were thought to be the physical limit of how far tDCS could reach, Chib adds.

The researchers hypothesized that they could exploit known neural connections and use tDCS to stimulate deeper regions of the brain. In particular, they wanted to access the ventral midbrain—the center of the brain's reward-processing network, and about as deep as you can go. It is thought to be the source of dopamine, a chemical whose deficiency has been linked to many neuropsychiatric disorders.

The ventral midbrain is part of a neural circuit that includes the dorsolateral prefrontal cortex (DLPFC), which is located just above the temples, and the ventromedial prefrontal cortex (VMPFC), which is behind the forehead. Decreasing activity in the DLPFC boosts activity in the VMPFC, which in turn bumps up activity in the ventral midbrain. To manipulate the ventral midbrain, therefore, the researchers decided to try using tDCS to deactivate the DLPFC and activate the VMPFC.

To test their hypothesis, the researchers asked volunteers to judge the attractiveness of groups of faces both before and after the volunteers' brains had been stimulated with tDCS. Judging facial attractiveness is one of the simplest, most primal tasks that can activate the brain's reward network, and difficulty in evaluating faces and recognizing facial emotions is a common symptom of neuropsychiatric disorders. The study participants rated the faces while inside a functional magnetic resonance imaging (fMRI) scanner, which allowed the researchers to evaluate any changes in brain activity caused by the stimulation.

A total of 99 volunteers participated in the tDCS experiment and were divided into six stimulation groups. In the main stimulation group, composed of 19 subjects, the DLPFC was deactivated and the VMPFC activated with a stimulation configuration that the researchers theorized would ultimately activate the ventral midbrain. The other groups were used to test different stimulation configurations. For example, in one group, the placement of the cathode and anode were switched so that the DLPFC was activated and the VMPFC was deactivated—the opposite of the main group. Another was a "sham" group, in which the electrodes were placed on volunteers' heads, but no current was run.

Those in the main group rated the faces presented after stimulation as more attractive than those they saw before stimulation. There were no differences in the ratings from the control groups. This change in ratings in the main group suggests that tDCS is indeed able to activate the ventral midbrain, and that the resulting changes in brain activity in this deep-brain region are associated with changes in the evaluation of attractiveness.

In addition, the fMRI scans revealed that tDCS strengthened the correlation between VMPFC activity and ventral midbrain activity. In other words, stimulation appeared to enhance the neural connectivity between the two brain areas. And for those who showed the strongest connectivity, tDCS led to the biggest change in attractiveness ratings. Taken together, the researchers say these results show that tDCS is causing those shifts in perception by manipulating the ventral midbrain via the DLPFC and VMPFC.

"The fact that we haven't had a way to noninvasively manipulate a functional circuit in the brain has been a fundamental bottleneck in human behavioral neuroscience," Shimojo says. This new work, he adds, represents a big first step in removing that bottleneck.

Using tDCS to study and treat neuropsychiatric disorders hinges on the assumption that the technique directly influences dopamine production in the ventral midbrain, Chib explains. But because fMRI can't directly measure dopamine, this study was unable to make that determination. The next step, then, is to use methods that can—such as positron emission tomography (PET) scans.

More work also needs to be done to see how tDCS may be used for treating disorders and to precisely determine the duration of the stimulation effects—as a rule of thumb, the influence of tDCS lasts for twice the exposure time, Chib says. Future studies will also be needed to see what other behaviors this tDCS method can influence. Ultimately, clinical tests will be needed for medical applications.

In addition to Chib and Shimojo, the other authors of the paper are Kyongsik Yun, a former postdoctoral scholar at Caltech who is now at the Korea Advanced Institute of Science and Technology (KAIST), and Hidehiko Takahashi of the Kyoto University Graduate School of Medicine. The title of the Translational Psychiatry paper is "Noninvasive remote activation of the ventral midbrain by transcranial direct current stimulation of prefrontal cortex." This work was funded by the Exploratory Research for Advanced Technology (ERATO) and CREST programs of the Japan Science and Technology Agency (JST); the Caltech-Tamagawa gCOE (Global Center of Excellence) program; and a Japan-U.S. Brain Research Cooperative Program grant.

Writer: Marcus Woo

Keeping Stem Cells Strong

Caltech biologists show that an RNA molecule protects stem cells during inflammation

When infections occur in the body, stem cells in the blood often jump into action by multiplying and differentiating into mature immune cells that can fight off illness. But repeated infections and inflammation can deplete these cell populations, potentially leading to the development of serious blood conditions such as cancer. Now, a team of researchers led by biologists at the California Institute of Technology (Caltech) has found that, in mouse models, the molecule microRNA-146a (miR-146a) acts as a critical regulator and protector of blood-forming stem cells (called hematopoietic stem cells, or HSCs) during chronic inflammation, suggesting that a deficiency of miR-146a may be one important cause of blood cancers and bone marrow failure.

The team came to this conclusion by developing a mouse model that lacks miR-146a. RNA is a polymer structured like DNA, the chemical that makes up our genes. MicroRNAs, as the name implies, are a class of very short RNAs that can interfere with or regulate the activities of particular genes. When subjected to a state of chronic inflammation, mice lacking miR-146a showed a decline in the overall number and quality of their HSCs; normal mice producing the molecule, in contrast, were better able to maintain their levels of HSCs despite long-term inflammation. The researchers' findings are outlined in the May 21 issue of the new journal eLife.

"This mouse with genetic deletion of miR-146a is a wonderful model with which to understand chronic-inflammation-driven tumor formation and hematopoietic stem cell biology during chronic inflammation," says Jimmy Zhao, the lead author of the study and an MD/PhD student in the Caltech laboratory of David Baltimore, the Robert Andrews Millikan Professor of Biology. "It was surprising that a single microRNA plays such a crucial role. Deleting it produced a profound and dramatic pathology, which clearly highlights the critical and indispensable function of miR-146a in guarding the quality and longevity of HSCs."

The study findings provide, for the first time, a detailed molecular connection between chronic inflammation and both bone marrow failure and diseases of the blood. These findings could lead to the discovery and development of anti-inflammatory molecules that could be used as therapeutics for blood diseases. In fact, the researchers believe that miR-146a itself may ultimately become a very effective anti-inflammatory molecule, once RNA molecules or mimetics can be delivered more efficiently to the cells of interest.

The new mouse model, Zhao says, also mimics important aspects of human myelodysplastic syndrome (MDS)—a form of pre-leukemia that often causes severe anemia, can require frequent blood transfusions, and usually leads to acute myeloid leukemia. Further study of the model could lead to a better understanding of the condition and therefore potential new treatments for MDS.

"This study speaks to the importance of keeping chronic inflammation in check and provides a good rationale for broad use of safer and more effective anti-inflammatory molecules," says Baltimore, who is a coauthor of the study. "If we can understand what cell types and proteins are critically important in chronic-inflammation-driven tumor formation and stem cell exhaustion, we can potentially design better and safer drugs to intervene."

Funding for the research outlined in the eLife paper, titled "MicroRNA-146a acts as a guardian of the quality and longevity of hematopoietic stem cells in mice," was provided by the National Institute of Allergy and Infectious Diseases; the National Heart, Lung, and Blood Institute; and the National Cancer Institute. Yvette Garcia-Flores, the lead technician in Baltimore's lab, also contributed to the study along with Dinesh Rao from UCLA and Ryan O'Connell from the University of Utah. eLife, a new open-access, high-impact journal, is backed by three of the world's leading funding agencies: the Howard Hughes Medical Institute, the Max Planck Society, and the Wellcome Trust.

Writer: Katie Neith
