Focusing on Faces

Researchers find that neurons in the amygdala of autistic individuals show reduced sensitivity to the eye region of others' faces

Difficulties in social interaction are considered to be one of the behavioral hallmarks of autism spectrum disorders (ASDs). Previous studies have shown these difficulties to be related to differences in how the brains of autistic individuals process sensory information about faces. Now, a group of researchers led by California Institute of Technology (Caltech) neuroscientist Ralph Adolphs has made the first recordings of the firings of single neurons in the brains of autistic individuals, and has found specific neurons in a region called the amygdala that show reduced processing of the eye region of faces. Furthermore, the study found that these same neurons responded more to mouths than did the neurons seen in the control-group individuals.

"We found that single brain cells in the amygdala of people with autism respond differently to faces in a way that explains many prior behavioral observations," says Adolphs, Bren Professor of Psychology and Neuroscience and professor of biology at Caltech and coauthor of a study in the November 20 issue of Neuron that outlines the team's findings. "We believe this shows that abnormal functioning in the amygdala is a reason that people with autism process faces abnormally."

The amygdala has long been known to be important for the processing of emotional reactions. To make recordings from this part of the brain, Adolphs and lead author Ueli Rutishauser, assistant professor in the departments of neurosurgery and neurology at Cedars-Sinai Medical Center and visiting associate in biology at Caltech, teamed up with Adam Mamelak, professor of neurosurgery and director of functional neurosurgery at Cedars-Sinai, and neurosurgeon Ian Ross at Huntington Memorial Hospital in Pasadena, California, to recruit patients with epilepsy who had electrodes implanted in their medial temporal lobes—the area of the brain where the amygdala is located—to help identify the origin of their seizures. Epileptic seizures are caused by a burst of abnormal electrical activity in the brain, which the electrodes are designed to detect. Because epilepsy and ASD sometimes co-occur, the researchers were able to identify two epilepsy patients who also had a diagnosis of ASD.

By using the implanted electrodes to record the firings of individual neurons, the researchers were able to observe activity as participants looked at images of different facial regions, and then correlate the neuronal responses with the pictures. In the control group of epilepsy patients without autism, the neurons responded most strongly to the eye region of the face, whereas in the two ASD patients, the neurons responded most strongly to the mouth region. Moreover, the effect was present in only a specific subset of the neurons. In contrast, a different set of neurons showed the same response in both groups when whole faces were shown.

"It was surprising to find such clear abnormalities at the level of single cells," explains Rutishauser. "We, like many others, had thought that the neurological abnormalities that contribute to autism were spread throughout the brain, and that it would be difficult to find highly specific correlates. Not only did we find highly specific abnormalities in single-cell responses, but only a certain subset of cells responded that way, while another set showed typical responses to faces. This specificity of these cell populations was surprising and is, in a way, very good news, because it suggests the existence of specific mechanisms for autism that we can potentially trace back to their genetic and environmental causes, and that one could imagine manipulating for targeted treatment."

"We can now ask how these cells change their responses with treatments, how they correspond to similar cell populations in animal models of autism, and what genes this particular population of cells expresses," adds Adolphs.

To validate their results, the researchers hope to identify and test additional subjects, which is a challenge because it is very hard to find people with autism who also have epilepsy and who have been implanted with electrodes in the amygdala for single-cell recordings, says Adolphs.

"At the same time, we should think about how to change the responses of these neurons, and see if those modifications correlate with behavioral changes," he says.

Funding for the research outlined in the Neuron paper, titled "Single-neuron correlates of abnormal face processing in autism," was provided by the Simons Foundation, the Gordon and Betty Moore Foundation, the Cedars-Sinai Medical Center, Autism Speaks, and the National Institute of Mental Health. Additional coauthors were Caltech postdoctoral scholar Oana Tudusciuc and graduate student Shuo Wang.

Katie Neith
Exclude from News Hub: 
News Type: 
Research News

SlipChip Counts Molecules with Chemistry and a Cell Phone

In developing nations, rural areas, and even one's own home, limited access to expensive equipment and trained medical professionals can impede the diagnosis and treatment of disease. Many qualitative tests that provide a simple "yes" or "no" answer (like an at-home pregnancy test) have been optimized for use in these resource-limited settings. But few quantitative tests—those able to measure the precise concentration of biomolecules, not just their presence or absence—can be done outside of a laboratory or clinical setting. By leveraging their discovery of the robustness of "digital," or single-molecule quantitative assays, researchers at the California Institute of Technology (Caltech) have demonstrated a method for using a lab-on-a-chip device and a cell phone to determine a concentration of molecules, such as HIV RNA molecules, in a sample. This digital approach can consistently provide accurate quantitative information despite changes in timing, temperature, and lighting conditions, a capability not previously possible using traditional measurements.

In a study published on November 7 in the journal Analytical Chemistry, researchers in the laboratory of Rustem Ismagilov, Ethel Wilson Bowles and Robert Bowles Professor of Chemistry and Chemical Engineering, used HIV as the context for testing the robustness of digital assays. In order to assess the progression of HIV and recommend appropriate therapies, doctors must know the concentration of HIV RNA in a patient's bloodstream, called the viral load. The problem is that the viral load tests used in the United States, such as those that rely on amplification of RNA via the polymerase chain reaction (PCR), require bulky and expensive equipment, trained personnel, and access to infrastructure such as electricity, all of which are often not available in resource-limited settings. Furthermore, because it is difficult to control the environment in these settings, viral load tests must be "robust," or resilient to changes such as temperature and humidity fluctuations.

Many traditional approaches for measuring viral load involve converting a small quantity of RNA into DNA, which is then multiplied through DNA amplification—allowing researchers to see how much DNA is present in real time after each round of amplification by monitoring the varying intensity of a fluorescent dye marking the DNA. These experiments—known as "kinetic" assays—result in a readout reflecting changes in intensity over time, called an amplification curve. To find the concentration of RNA in the original bulk sample, the amplification curve is then compared with standard curves representing known concentrations of RNA. Because assays such as those for HIV require many rounds of DNA amplification to produce a sufficiently bright fluorescent signal, small errors introduced by changes in environmental conditions can compound exponentially—meaning that these kinetic measurements are not robust enough to withstand changing conditions.
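To illustrate why such errors compound, here is a minimal numerical sketch; the cycle counts and efficiencies are invented for illustration and are not taken from the study:

```python
# Sketch: why kinetic (real-time) amplification is fragile.
# Assumption (not from the article): ideal amplification doubles the
# DNA each cycle; a small environment-induced drop in per-cycle
# efficiency compounds across every cycle.

def amplified(n_cycles, efficiency):
    """Fold-amplification after n cycles with per-cycle efficiency (0..1)."""
    return (1.0 + efficiency) ** n_cycles

ideal = amplified(30, 1.00)      # perfect doubling for 30 cycles
perturbed = amplified(30, 0.95)  # just 5% lower efficiency per cycle

# The tiny per-cycle error grows into a roughly 2x signal error.
print(perturbed / ideal)
```

A 5 percent per-cycle deviation, negligible in any single round, leaves the final fluorescent signal at less than half its expected value after 30 rounds, which is why an uncontrolled temperature shift can wreck a kinetic quantification.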

In this new study, the researchers hypothesized that they could use a digital amplification approach to create a robust quantitative technique. In digital amplification, a sample is split into enough small volumes such that each well contains either a single target molecule or no molecule at all. Ismagilov and his colleagues used a microfluidic device they previously invented, called SlipChip, to compartmentalize single molecules from a sample containing HIV RNA. SlipChip is made up of two credit card-sized plates stacked atop one another; the sample is first added to the interconnected channels of the SlipChip, and with a single "slip" of the top chip, the channels turn into individual wells.

In lieu of PCR, the researchers used a different amplification chemistry on this chip called digital reverse transcription-loop-mediated amplification (dRT-LAMP), which produces a bright fluorescent signal in the presence of a target molecule during the amplification process. The dRT-LAMP technique eliminates the need for continuous tracking of the intensity of fluorescence; instead, just one end-point readout measurement is used. The resulting patchwork of "positive" or "negative" wells on the device, in combination with statistical analysis, enables single molecules to be counted.

"In each well, you are performing a qualitative experiment; the result is like a pregnancy test: either yes or no, positive or negative, for the presence of an HIV RNA molecule," says David Selck, a graduate student in Ismagilov's lab and a first author on the study. "But by doing a couple of thousand qualitative experiments, you end up getting a numerical, quantitative result: the concentration of HIV RNA molecules in the sample. By calculating the concentration from the number of wells that contain fluorescence—and therefore HIV—you're leveraging the robustness of many qualitative 'yes or no' experiments to fulfill the need for a quantitative, numerical result," he says.

When the researchers compared quantification results from dRT-LAMP to those obtained by the real-time, kinetic version of this chemistry, RT-LAMP, they found that the digital format provided accurate results despite changes in temperature and time, while the kinetic format could not. This finding adds to a body of research that the laboratory has been developing on the robustness of converting analog signals (i.e., a readout reflecting a changing concentration over time) into a series of positive or negative digital signals. Another recent paper, published in the Journal of the American Chemical Society, explored a variation on this analog-to-digital conversion.

Ismagilov's group also tested a way to take an image of the fluorescence pattern in the wells of the SlipChip and, from that image, determine the viral load—without the use of expensive microscopes or trained staff. They turned to a nearly ubiquitous 21st-century technology: the smartphone.

The researchers placed the SlipChip in a makeshift darkroom (a shoebox with a hole in the top) and then photographed its wells using a smartphone outfitted with a special filter attachment—so that the smartphone flash would be able to "excite" the fluorescent DNA dye, and the smartphone camera could capture an image of the fluorescence. The resulting images were uploaded to Microsoft SkyDrive, a cloud-based server, where custom software—designed by the researchers—determined the viral load concentration and sent the results back in an email. These capabilities allow the digital approach to perform reliably with automated processing, regardless of how poor the imaging conditions may be. As an example of its simplicity, a 5-year-old child was able to use this cell phone imaging method to obtain quantitative results using strands of RNA extracted from a noninfectious virus (a video of this demonstration is available on the Ismagilov lab's YouTube channel).
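The automated positive/negative call can be pictured as a simple intensity threshold. This is a hypothetical sketch: the article does not describe the custom software's actual algorithm, and the well intensities below are invented:

```python
import numpy as np

# Hypothetical illustration of classifying wells from a fluorescence
# image: each well is summarized by one intensity value, and wells
# brighter than a threshold are called positive.

def call_wells(intensities, threshold=None):
    """Return a boolean array: True for positive (fluorescent) wells."""
    intensities = np.asarray(intensities, dtype=float)
    if threshold is None:
        # simple midpoint between the dim and bright populations
        threshold = (intensities.min() + intensities.max()) / 2.0
    return intensities > threshold

wells = [12, 15, 11, 240, 13, 235, 14, 250]  # made-up per-well intensities
print(call_wells(wells).sum())  # number of wells called positive
```

Because each call is binary, modest errors in exposure or lighting shift the measured intensities without flipping many calls, which is the robustness the researchers exploited for cell-phone imaging.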

"We were surprised that this cell phone method worked, because both cell phone imaging and automated processing are error prone," Ismagilov says. "Because digital assays involve simply distinguishing positives from negatives, we found that even these error-prone approaches can be used to count single molecules reliably."

The fact that this method is robust not only to changes in time and temperature but also is amenable to cell phone imaging and automated processing makes it a promising technology for limited-resource settings. "We believe that our findings of the robustness of digital amplification could signal a major paradigm shift in how quantitative measurements are obtained at home, in the field, and in developing countries," Ismagilov says.

The researchers stress that there is still room for improvement, however. "While in this study we were examining robustness and used purified RNA, the next generation of devices will isolate HIV RNA molecules directly from patients' blood," says Bing Sun, a graduate student in Ismagilov's lab and a first author on the study. "We will also adapt the devices for other viruses, such as hepatitis C. By combining these improvements with the cell phone imaging method, we plan to create something that could actually be used in the real world," Sun adds.

The paper is titled "Increased Robustness of Single-Molecule Counting with Microfluidics, Digital Isothermal Amplification, and a Mobile Phone versus Real-Time Kinetic Measurements." In addition to Selck, Sun, and Ismagilov, the paper is coauthored by Mikhail A. Karymov, an associate scientist at Caltech. The work was funded by the Defense Advanced Research Projects Agency award number HR0011-11-2-0006, and by the National Institutes of Health award numbers R01EB012946 and 5DP1OD003584. Microfluidic technologies developed by Ismagilov's group have been licensed to Emerald BioStructures, Randance Technologies, and SlipChip LLC.


From One Collapsing Star, Two Black Holes Form and Fuse

Black holes—massive objects in space with gravitational forces so strong that not even light can escape them—come in a variety of sizes. On the smaller end of the scale are the stellar-mass black holes that are formed during the deaths of stars. At the larger end are supermassive black holes, which contain up to one billion times the mass of our sun. Over billions of years, small black holes can slowly grow into the supermassive variety by taking on mass from their surroundings and also by merging with other black holes. But this slow process cannot explain how supermassive black holes came to exist in the early universe—such black holes would have had to form less than one billion years after the Big Bang.

Now new findings by researchers at the California Institute of Technology (Caltech) may help to test a model that solves this problem.

Certain models of supermassive black hole growth invoke the presence of "seed" black holes that result from the deaths of very early stars. These seed black holes gain mass and increase in size by picking up the materials around them—a process called accretion—or by merging with other black holes. "But in these previous models, there was simply not enough time for any black hole to reach a supermassive scale so soon after the birth of the universe," says Christian Reisswig, NASA Einstein Postdoctoral Fellow in Astrophysics at Caltech and the lead author of the study. "The growth of black holes to supermassive scales in the young universe seems only possible if the 'seed' mass of the collapsing object was already sufficiently large," he says.

To investigate the origins of young supermassive black holes, Reisswig, in collaboration with Christian Ott, assistant professor of theoretical astrophysics, and their colleagues turned to a model involving supermassive stars. These giant, rather exotic stars are hypothesized to have existed for just a brief time in the early universe. Unlike ordinary stars, supermassive stars are stabilized against gravity mostly by their own photon radiation. In a very massive star, photon radiation—the outward flux of photons that is generated due to the star's very high interior temperatures—pushes gas from the star outward in opposition to the gravitational force that pulls the gas back in. When the two forces are equal, this balance is called hydrostatic equilibrium.
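The balance described above can be written compactly in its standard textbook form; the relation below is general stellar-structure theory, not an equation quoted from the paper:

```latex
\frac{dP}{dr} = -\frac{G\,m(r)\,\rho(r)}{r^{2}},
\qquad
P \approx P_{\mathrm{rad}} = \frac{a\,T^{4}}{3},
```

where $m(r)$ is the mass enclosed within radius $r$, $\rho$ is the gas density, $T$ is the temperature, and $a$ is the radiation constant. In a supermassive star the radiation term dominates the total pressure, which is why cooling (a falling $T$) steadily erodes the support against gravity.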

During its life, a supermassive star slowly cools due to energy loss through the emission of photon radiation. As the star cools, it becomes more compact, and its central density slowly increases. This process lasts for a couple of million years until the star has reached sufficient compactness for gravitational instability to set in and for the star to start collapsing gravitationally, Reisswig says.

Previous studies predicted that when supermassive stars collapse, they maintain a spherical shape that possibly becomes flattened due to rapid rotation. This shape is called an axisymmetric configuration. Incorporating the fact that very rapidly spinning stars are prone to tiny perturbations, Reisswig and his colleagues predicted that these perturbations could cause the stars to deviate into non-axisymmetric shapes during the collapse. Such initially tiny perturbations would grow rapidly, ultimately causing the gas inside the collapsing star to clump and to form high-density fragments.

These fragments would orbit the center of the star and become increasingly dense as they picked up matter during the collapse; they would also increase in temperature. And then, Reisswig says, "an interesting effect kicks in." At sufficiently high temperatures, there would be enough energy available to create electrons and their antiparticles, or positrons, in what is known as electron-positron pair production. The creation of electron-positron pairs would cause a loss of pressure, further accelerating the collapse; as a result, the two orbiting fragments would ultimately become so dense that a black hole could form at each clump. The pair of black holes might then spiral around one another before merging to become one large black hole. "This is a new finding," Reisswig says. "Nobody has ever predicted that a single collapsing star could produce a pair of black holes that then merge."

Reisswig and his colleagues used supercomputers to simulate a supermassive star that is on the verge of collapse. The simulation was visualized with a video made by combining millions of points representing numerical data about density, gravitational fields, and other properties of the gases that make up the collapsing stars.

Although the study involved computer simulations and is thus purely theoretical, in practice, the formation and merger of pairs of black holes can give rise to tremendously powerful gravitational radiation—ripples in the fabric of space and time, traveling at the speed of light—that is likely to be visible at the edge of our universe, Reisswig says. Ground-based observatories such as the Laser Interferometer Gravitational-Wave Observatory (LIGO), comanaged by Caltech, are searching for signs of this gravitational radiation, which was first predicted by Albert Einstein in his general theory of relativity; future space-borne gravitational-wave observatories, Reisswig says, will be necessary to detect the types of gravitational waves that would confirm these recent findings.

Ott says that these findings will have important implications for cosmology. "The emitted gravitational-wave signal and its potential detection will inform researchers about the formation process of the first supermassive black holes in the still very young universe, and may settle some—and raise new—important questions on the history of our universe," he says.

These findings were published in Physical Review Letters the week of October 11 in a paper titled "Formation and Coalescence of Cosmological Supermassive-Black-Hole Binaries in Supermassive-Star Collapse." Caltech coauthors on the study include Ernazar Abdikamalov, Roland Haas, and Philipp Mösta. Another coauthor on the study, Erik Schnetter, is at the Perimeter Institute for Theoretical Physics in Canada. The work was funded by the National Science Foundation, NASA, the Alfred P. Sloan Foundation, and the Sherman Fairchild Foundation.


Sky Survey Captures Key Details of Cosmic Explosions

Caltech astronomers report on unique results from the intermediate Palomar Transient Factory

Developed to help scientists learn more about the complex nature of celestial objects in the universe, astronomical surveys have been cataloguing the night sky since the beginning of the 20th century. The intermediate Palomar Transient Factory (iPTF)—led by the California Institute of Technology (Caltech)—started searching the skies for certain types of stars and related phenomena in February. Since its inception, iPTF has been extremely successful in the early discovery and rapid follow-up studies of transients—astronomical objects whose brightness changes over timescales ranging from hours to days—and two recent papers by iPTF astronomers describe first-time detections: one, the progenitor of a rare type of supernova in a nearby galaxy; the other, the afterglow of a gamma-ray burst in July.

The iPTF builds on the legacy of the Caltech-led Palomar Transient Factory (PTF), designed in 2008 to systematically chart the transient sky by using a robotic observing system mounted on the 48-inch Samuel Oschin Telescope on Palomar Mountain near San Diego, California. This state-of-the-art, robotic telescope scans the sky rapidly over a thousand square degrees each night to search for transients.

Supernovae—massive exploding stars at the end of their life span—make up one important type of transient. Since PTF's commissioning four years ago, its scorecard stands at over 2,000 spectroscopically classified supernovae. The unique feature of iPTF is brand-new technology that is geared toward fully automated, rapid response and follow-up within hours of discovery of a new supernova.

The first paper, "Discovery, Progenitor and Early Evolution of a Stripped Envelope Supernova iPTF13bvn," appears in the September 20 issue of Astrophysical Journal Letters and describes the detection of a so-called Type Ib supernova. Type Ib supernovae are rare explosions where the progenitor star lacks an outer layer of hydrogen, the most abundant element in the universe, hence the "stripped envelope" moniker. It has proven difficult to pin down which kinds of stars give rise to Type Ib supernovae. One of the most promising ideas, says graduate student and lead author Yi Cao, is that they originate from Wolf-Rayet stars. These objects are 10 times more massive and thousands of times brighter than the sun and have lost their hydrogen envelope by means of very strong stellar winds. Until recently, no solid evidence existed to support this theory. Cao and colleagues believe that a young supernova that they discovered, iPTF13bvn, occurred at a location formerly occupied by a likely Wolf-Rayet star.

Supernova iPTF13bvn was spotted on June 16, less than a day after the onset of its explosion. With the aid of the adaptive optics system used by the 10-meter Keck telescopes in Hawaii—which reduces the blurring effects of Earth's atmosphere—the team obtained a high-resolution image of this supernova to determine its precise position. Then they compared the Keck image to a series of pictures of the same galaxy (NGC 5806) taken by the Hubble Space Telescope in 2005, and found one starlike source spatially coincident with the supernova. Its intrinsic brightness, color, and size—as well as its mass-loss history, inferred from supernova radio emissions—were characteristic of a Wolf-Rayet star.

"All evidence is consistent with the theoretical expectation that the progenitor of this Type Ib supernova is a Wolf-Rayet star," says Cao. "Our next step is to check for the disappearance of this progenitor star after the supernova fades away. We expect that it will have been destroyed in the supernova explosion."

Though Wolf-Rayet progenitors have long been predicted for Type Ib supernovae, the new work represents the first time researchers have been able to fill the gap between theory and observation, according to study coauthor and Caltech alumna Mansi Kasliwal (PhD '11). "This is a big step in our understanding of the evolution of massive stars and their relation to supernovae," she says.

The second paper, "Discovery and Redshift of an Optical Afterglow in 71 degrees squared: iPTF13bxl and GRB 130702A," appears in the October 20 issue of Astrophysical Journal Letters. Lead author Leo Singer, a Caltech graduate student, likens finding and characterizing the afterglow of a long gamma-ray burst (GRB) to digging a needle out of a haystack.

Long GRBs, which are the brightest known electromagnetic events in the universe, are also connected with the deaths of rapidly spinning, massive stars. Although such GRBs initially are detected by their high-energy radiation—GRB 130702A, for example, was first located by NASA's Fermi Gamma-ray Space Telescope—an X-ray or visible-light afterglow must also be found to narrow down a GRB's position enough so that its location can be pinpointed to one particular galaxy and to determine if it is associated with a supernova.

After Fermi's initial detection of GRB 130702A, iPTF was able to narrow down the GRB's location by scanning an area of the sky over 360 times larger than the face of the moon and sifting through hundreds of images using sophisticated machine-learning software; this search also revealed the visible-light counterpart of the burst, designated iPTF13bxl. This is the first time that a GRB's position has been determined precisely using optical telescopes alone.

After making the initial correlation between the GRB and the afterglow, Singer and colleagues corroborated their results and gained additional information using a host of other instruments, including optical, X-ray, and radio telescopes. In addition, ground-based telescopes around the world monitored the afterglow for days as it faded away, and recorded the emergence of a supernova five days later.

According to Singer, GRB 130702A/iPTF13bxl turned out to be special in many ways.

"First, by measuring its redshift, we learned that it was pretty nearby as far as GRBs go," he says. "It was pretty wimpy compared to most GRBs, liberating only about a thousandth as much energy as the most energetic ones. But we did see it eventually turn into a supernova. Typically we only detect supernovae in connection with nearby, subluminous GRBs, so we can't be certain that cosmologically distant GRBs are caused by the same kinds of explosions."

"The first results from iPTF bode well for the discovery of many more supernovae in their infancy and many more afterglows from the Fermi satellite", says Shrinivas Kulkarni, the John D. and Catherine T. MacArthur Professor of Astronomy and Planetary Science at Caltech and principal investigator for both the PTF and iPTF.

The iPTF project is a scientific collaboration between Caltech; Los Alamos National Laboratory; the University of Wisconsin, Milwaukee; the Oskar Klein Centre in Sweden; the Weizmann Institute of Science in Israel; the TANGO Program of the University System of Taiwan; and the Kavli Institute for the Physics and Mathematics of the Universe in Japan.


Katie Neith

Look Out Above! Experiment Explores Innate Visual Behavior in Mice

When you're a tiny mouse in the wild, spotting aerial predators—like hawks and owls—is essential to your survival. But once you see an owl, how is this visual cue processed into a behavior that helps you to avoid an attack? Using an experimental video technique, researchers at the California Institute of Technology (Caltech) have now developed a simple new stimulus that reliably triggers the mouse's defensive behaviors. This new stimulus allows the researchers to narrow down the cell types in the retina that could aid in the detection of aerial predators.

"The mouse has recently become a very popular model for the study of vision," says biology graduate student Melis Yilmaz, who is also first author of the study, which will be published online in the journal Current Biology on October 10. "Our lab and other labs have done a lot of physiological, anatomical, and histological studies in the mouse retina"—a layer of light-sensitive cells in the eye that relay image information to the brain—"but the missing piece was mouse behavior: What do mice do with their vision?"

Yilmaz, under the supervision of Markus Meister, Lawrence A. Hanson, Jr. Professor of Biology, studied the behavior of 40 mice, placed one-by-one in a tiny room called a behavioral arena. After placing each mouse alone in the arena and letting it explore the new environment for a few minutes, Yilmaz played videos of different visual stimuli on a computer monitor mounted on the ceiling, the screen facing down onto the arena. The researchers then watched a video feed of the mouse's behavior, obtained with a camera located on one of the walls of the arena.

Surprisingly, all of the mice responded to one specific visual stimulus: an expanding black disk, which is meant to imitate the appearance of an approaching aerial predator.

A quarter of the mice responded to the looming disk by completely freezing in place, not moving a muscle or twitching a whisker or tail until the disk disappeared. "When I first saw this behavior, my first thought was that the video recording had stopped," Yilmaz says.

Video: Example of a mouse "freezing" upon viewing the looming disk stimulus. Results of this study are published in the paper "Rapid Innate Defensive Responses of Mice to Looming Visual Stimuli," October 10, 2013, in Current Biology. Credit: Melis Yilmaz and Markus Meister/California Institute of Technology


A far more common reaction to the looming disk—seen in around 75 percent of the mice—was to flee for the cover of a tent-like nest in one corner of the arena.

Video: Example of a mouse fleeing upon viewing the looming disk stimulus. Results of this study are published in the paper "Rapid Innate Defensive Responses of Mice to Looming Visual Stimuli," October 10, 2013, in Current Biology. Credit: Melis Yilmaz and Markus Meister/California Institute of Technology


"For each mouse, this was the very first time that the animal was put into this arena, and it was the very first time that it saw that stimulus, and yet it has this sort of immediate reflex-like response…beginning to flee in less than a quarter of a second," Meister says. "What's attractive about this behavior is that it's incredibly robust, so we can rely on it, and it's quite specific to this particular visual stimulus. If the same disk was presented on a monitor at the bottom of the arena, the animals don't respond to that at all. And a looming white disk is also much less effective," he adds.

Although their study wasn't designed to evaluate the purpose of the two responses, Yilmaz and Meister suspect that, in the wild, different environmental conditions could lead to different visual behaviors.

"If you were out in nature, maybe freezing is a good reaction to a predatory bird that is very far away because it would allow you to blend into the surroundings," Meister says. This would confound the bird's visual system, which uses movement to track targets. Furthermore, he adds, "If the bird is within hearing distance, freezing so completely would help it avoid making a rustling noise."

The behaviors these researchers observed in this experiment are not uncommon among other animals in the wild, as Meister discovered one evening after giving a presentation about the fleeing and freezing results. "When I came home that evening, my son said, 'Papi, you won't believe what happened when we were at the park today. This squirrel was running across a wall, and suddenly it just froze! And then some guy yelled, 'Hey look!' and there was a hawk circling around.' So he had just that day seen it in real life," Meister says.

Freezing might be the best game plan for an animal trying to avoid predators that are far away, but, Meister says, when the threat is closer "and there is a protective place nearby, then escape might be a better strategy."

When Yilmaz and Meister began connecting these specific behavioral observations with other information about the mouse visual system, they were able to make predictions about the types of neurons and circuits involved in this rapid response. "We tested four different speeds of the expanding disk video, and we found that only one of those speeds caused this behavior robustly," Meister says. "That also gives us clues about what types of cells in the retina might be involved, because we know that one type responds to high-speed motion and one type responds to low-speed motions. The cells that detect low-speed motion are probably not involved in this behavior."

"It's really striking to me to watch the animal completely ignore one stimulus—like an expanding white disk—whereas they have such a robust reaction to the other type of stimulus," Yilmaz says. Her next experiments will be focused on manipulating these candidate cell types to pinpoint exactly which types of neurons and circuits are involved in this visual behavior.

In addition to its specific implications for visual behaviors, the work also helps to validate the mouse model for the study of visual processing, Meister says. Mice used in research have been bred for dozens of generations in laboratories—where they never would have seen an aerial predator—and yet the instinctual behavior still exists. "Lab mice never had to learn that a dark object from above was bad news," he says. "In fact, in our experiments, there was never any kind of punishment or ill effect from a visual display, so they didn't have any chance to learn the meaning. We believe it's kind of built into their genetic constitution."

Although humans don't have to escape the threat of predatory birds, Meister says that the results from this research could eventually provide information about human visual behaviors. "The mouse and human retinas are really very similar, so many of the circuits that are important for the mouse have analogous circuits in the human retina," he says. "Humans also react instinctively to approaching objects, but, obviously, we don't freeze. So, how did nature change a circuit that helps one animal escape from predators so that it serves a different function in another animal?"

This work was published in a paper titled "Rapid Innate Defensive Responses of Mice to Looming Visual Stimuli." The research was funded by the National Institutes of Health.


Scientists Find a Martian Igneous Rock that is Surprisingly Earth-like

During the nearly 14 months that it has spent on the red planet, Curiosity, the Mars Science Laboratory (MSL) rover, has scooped soil, drilled rocks, and analyzed samples by exposing them to laser beams, X-rays, and alpha particles using the most sophisticated suite of scientific instruments ever deployed on another planet. One result of this effort was evidence reported last March that ancient Mars could have supported microbial life.

But Curiosity is far more than a one-trick rover, and in a paper published today in the journal Science, a team of MSL scientists reports its analysis of a surprisingly Earth-like martian rock that offers new insight into the history of Mars's interior and suggests parts of the red planet may be more like our own than we ever knew.  

The paper—whose lead author is Edward Stolper, Caltech's William E. Leonhard Professor of Geology, provost, and interim president—is one of five appearing in the journal with results from the analysis of data and observations obtained during Curiosity's first 100 martian days (sols). The other papers include an evaluation of fine- and coarse-grained soil samples and detailed analyses of the composition and formation process of a windblown drift of sand and dust.

"The results presented go beyond the question of habitability," says John Grotzinger, MSL project scientist and Caltech's Fletcher Jones Professor of Geology. "Mars Science Laboratory also has a major mission objective to explore and characterize the geological environment at all scales and also the atmosphere. In doing this we learn about the fundamental physical and chemical properties that distinguish the terrestrial planets from each other and also what they share in common."

The paper by Stolper and his colleagues—including Caltech senior research scientist Michael Baker and graduate student Megan Newcombe—examines in detail a 50-centimeter-tall pyramid-shaped rock named "Jake_M" (after MSL surface operations systems chief engineer Jacob "Jake" Matijevic, who passed away two weeks after Curiosity's landing).

The rock was encountered by Curiosity a few weeks after it landed, during its slow drive across Gale Crater on the way toward the crater's central peak, Mount Sharp. Visual inspection of the dark gray rock suggested that it was probably a fine-grained basaltic igneous rock formed by the crystallization of magma near the planet's surface. The absence of obvious mineral grains on its essentially dust-free surface further suggested that it would have a relatively uniform (i.e., homogeneous) chemical composition.

For that reason, MSL's scientists decided it would be a good test case for comparing the results obtained by two of the rover's scientific instruments, the Alpha Particle X-ray Spectrometer (APXS) and ChemCam, both of which are used to measure the chemical compositions of rocks, sediments, and minerals.

The APXS analyses, however, produced some unanticipated results. Far from being similar in its chemical composition to the many martian igneous rocks analyzed by the Spirit and Opportunity rovers on the surface of Mars or to martian meteorites found on Earth, Jake_M is highly enriched in sodium and potassium, making it chemically alkaline.

Although Jake_M is very different from known martian rocks, Stolper and colleagues realized that it is very similar in its chemical composition to a relatively rare type of terrestrial igneous rock, known as a mugearite, which is typically found on ocean islands and in continental rift zones.

"We realized right away that although nothing like it had ever been found on Mars, Jake_M is similar in composition to terrestrial mugearites, which although uncommon are very well known to igneous petrologists who study volcanic rocks on Earth," Stolper says. "In fact, if this rock were found on Earth, we would be hard pressed, based on its elemental composition, to tell it was not an Earth rock." However, he notes, "such rocks are so uncommon on Earth that it would be highly unlikely that, if you landed a spacecraft on Earth in a random location, the first rock you encountered within a few hundred meters of your landing site would be an alkaline rock like Jake_M."

On both Earth and Mars, basaltic liquids form by partial melting of rocks deep inside the planet. By analogy with terrestrial mugearites, Jake_M probably evolved from such a partial melt that cooled as it ascended toward the surface from the martian interior; as it cooled, crystals formed, and the chemical composition of the remaining liquid changed (just as, in the making of rock candy, a sugar-water solution becomes less sweet as it cools and sugar crystallizes from it).

"The minerals that crystallize have different elemental compositions than the melt and are either more dense or less dense than the liquid and thus tend to physically separate, that is, to settle to the bottom of the magma chamber or float to the top, causing the chemical composition of the remaining liquid to change," Baker explains.  

The MSL team then modeled the conditions required to produce a residual liquid similar in composition to Jake_M by crystallization of plausible partial melts. From those results, they inferred that the cooling and crystallization that eventually produced Jake_M probably occurred at pressures of several kilobars, the equivalent of the pressure at a depth of a few tens of kilometers beneath the martian surface. The modeling also suggested—particularly by analogy with terrestrial mugearites—that the martian magmas were relatively rich in dissolved water.
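The stated correspondence between "several kilobars" and "a few tens of kilometers" can be sanity-checked with the lithostatic pressure relation P = ρgh. This is a rough illustrative calculation, not one from the paper; the crustal density below is an assumed value for basaltic rock.

```python
# Rough check that "several kilobars" corresponds to a few tens of
# kilometers of depth on Mars, via the lithostatic relation P = rho*g*h.
# The density is an assumed basaltic value, not a figure from the paper.

MARS_GRAVITY = 3.71          # m/s^2, surface gravity of Mars
CRUST_DENSITY = 2900.0       # kg/m^3, assumed crustal density
PA_PER_KBAR = 1.0e8          # 1 kilobar = 10^8 pascals

def lithostatic_pressure_kbar(depth_km: float) -> float:
    """Pressure (kbar) at the given depth (km) below the martian surface."""
    depth_m = depth_km * 1000.0
    return CRUST_DENSITY * MARS_GRAVITY * depth_m / PA_PER_KBAR

for depth in (10, 30, 50):
    print(f"{depth:3d} km -> {lithostatic_pressure_kbar(depth):.1f} kbar")
```

At 30 km the estimate comes out near 3 kbar, consistent with the range the team inferred.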

According to Stolper, Baker, and their colleagues, Jake_M probably originated via the melting of a relatively alkali- and water-rich martian mantle that was different from the sources of other known martian basalts. Because the primitive martian mantle is believed to have been as much as two times richer in sodium and potassium than Earth's mantle, the researchers say that, in hindsight, it might not be surprising if alkaline magmas, which are so uncommon on Earth, are more common on Mars.

Moreover, Stolper adds, "there are many hypotheses for the origin of alkaline magmas on Earth that are similar to Jake_M. Perhaps the most plausible is that regions deep in the mantle become enriched in alkalis by a process known as metasomatism, in which the chemical compositions of rocks are altered by the flow of water- and carbon-dioxide-rich fluids. The existence of Jake_M may be evidence that such processes also occur in the interior of Mars."

Intriguingly, the potassium-rich nature of many of the sedimentary rocks that have been analyzed by the MSL mission may turn out to reflect the presence of such a region enriched in alkalis in the mantle underlying Gale Crater.

However, he says, "with only one rock having this odd chemical composition, we don't want to get carried away. Is it a one-off, or is it a representative of an important class of igneous rocks from the Gale Crater region? Determining the answer to this will be an important goal for the ongoing MSL mission."

"The paper by Stolper et al. shows that the internal composition of Mars is more similar to Earth than we had thought and illustrates how even a single rock can provide insight into the evolution of the planet as a whole," Grotzinger says.

The work in the paper, "The Petrochemistry of Jake_M: A Martian Mugearite," was supported by grants from the National Science Foundation, the National Aeronautics and Space Administration, the Canadian Space Agency, and the Centre National d'Études Spatiales.


Spirals of Light May Lead to Better Electronics

A group of researchers at the California Institute of Technology (Caltech) has created the optical equivalent of a tuning fork—a device that can help steady the electrical currents needed to power high-end electronics and stabilize the signals of high-quality lasers. The work marks the first time that such a device has been miniaturized to fit on a chip and may pave the way to improvements in high-speed communications, navigation, and remote sensing.

"When you're tuning a piano, a tuning fork gives a standardized pitch, or reference sound frequency; in optical resonators the 'pitch' corresponds to the color, or wavelength, of the light. Our device provides a consistent light frequency that improves both optical and electronic devices when it is used as a reference," says Kerry Vahala, Ted and Ginger Jenkins Professor of Information Science and Technology and Applied Physics. Vahala is also executive officer for applied physics and materials science and an author on the study describing this new work, published in the journal Nature Communications.

A good tuning fork controls the release of its acoustical energy, ringing just one pitch at a particular sound frequency for a long time; this sustaining property is called the quality factor. Vahala and his colleagues transferred this concept to their optical resonator, focusing on the optical quality factor and other elements that affect frequency stability.
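The quality factor sets how long energy "rings" in the resonator: the photon storage time is τ = Q/ω, where ω is the optical angular frequency. The sketch below uses a generic telecom-band wavelength and an assumed Q value for illustration, not measurements from this study.

```python
import math

# Relation between optical quality factor Q and photon storage (ring-down)
# time: tau = Q / omega, with omega = 2*pi*c / wavelength.
# The Q and wavelength are generic illustrative values, not from the study.

C = 2.998e8                 # speed of light, m/s
wavelength = 1550e-9        # m, a typical telecom-band wavelength (assumed)
Q = 1e8                     # assumed optical quality factor

omega = 2 * math.pi * C / wavelength    # optical angular frequency, rad/s
tau = Q / omega                         # photon lifetime in the resonator, s

print(f"Photon lifetime: {tau * 1e9:.1f} ns")
```

Even a very high Q corresponds to storage times of only tens of nanoseconds at optical frequencies, which is why every extra factor in stability matters.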

The researchers were able to stabilize the light's frequency by developing a silica glass chip resonator with a specially designed path for the photons in the shape of what is called an Archimedean spiral. "Using this shape allows the longest path in the smallest area on a chip. We knew that if we made the photons travel a longer path, the whole device would become more stable," says Hansuek Lee, a senior researcher in Vahala's lab and lead author on the paper.

Frequency instability stems from energy surges within the optical resonator—which are unavoidable due to the laws of thermodynamics. Because the new resonator has a longer path, the energy changes are diluted, so the power surges are dampened—greatly improving the consistency and quality of the resonator's reference signal, which, in turn, improves the quality of the electronic or optical device.

In the new design, photons enter an outer ring of the spiraled resonator through a tiny light-dispensing optical fiber; they then travel around four interwoven Archimedean spirals, ultimately closing the path after covering more than a meter in an area about the size of a quarter—a journey 100 times longer than achieved in previous designs. The resonator is paired with a special guide for the light that loses 100 times less energy than the average chip-based device.
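To get a feel for the geometry, a short numeric sketch integrates the arc length of an Archimedean spiral r = bθ and shows that a meter-plus path can indeed fit in a quarter-sized footprint. The turn count and outer radius below are illustrative assumptions, not the device's actual layout.

```python
import math

# Arc length of an Archimedean spiral r = b*theta, integrated numerically,
# to illustrate how over a meter of path fits in a quarter-sized area.
# Turn count and radius are assumed values, not the device's real layout.

def spiral_length(b: float, theta_max: float, steps: int = 100_000) -> float:
    """Arc length of r = b*theta from 0 to theta_max (same units as b)."""
    # ds = sqrt(r^2 + (dr/dtheta)^2) dtheta, with dr/dtheta = b
    total = 0.0
    dtheta = theta_max / steps
    for i in range(steps):
        theta = (i + 0.5) * dtheta          # midpoint rule
        r = b * theta
        total += math.sqrt(r * r + b * b) * dtheta
    return total

turns = 30                                  # assumed number of windings
outer_radius_mm = 12.0                      # roughly a U.S. quarter's radius
theta_max = 2 * math.pi * turns
b = outer_radius_mm / theta_max             # mm of radius gained per radian

length_mm = spiral_length(b, theta_max)
print(f"Spiral path length: {length_mm / 1000:.2f} m")
```

With these assumed dimensions the spiral packs roughly 1.1 m of path into a 24 mm-diameter footprint.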

In addition to its use as a frequency reference for lasers, a reference cavity could one day play a role equivalent to that of the ubiquitous quartz crystal in electronics. Most electronics systems use a device called an oscillator to provide power at very precise frequencies. In the past several years, optical-based oscillators—which require optical reference cavities—have become better than electronic oscillators at delivering stable microwave and radio frequencies. While these optical oscillators are currently too large for use in small electronics, there is an effort under way to miniaturize their key subcomponents—like Vahala's chip-based reference cavity.

"A miniaturized optical oscillator will represent a shift in the traditional roles of photonics and electronics. Currently, electronics perform signal processing while photonics rule in transporting information from one place to another over fiber-optic cable. Eventually, oscillators in high-performance electronics systems, while outwardly appearing to be electronic devices, will internally be purely optical," Vahala says.

"The technology that Kerry and his group have introduced opens a new avenue to move precision optical frequency sources out of the lab and onto a compact, robust and integrable silicon-based platform," says Scott Diddams, physicist and project leader at the National Institute of Standards and Technology, recent Moore Distinguished Scholar at Caltech and a coauthor on the study. "It opens up many new and unexplored options for building systems that could have greater impact to 'real-world' applications," Diddams says.

The paper, titled "Spiral resonators for on-chip laser frequency stabilization," was published online in Nature Communications on September 17. Other Caltech coauthors on the study include graduate students Myoung Gyun Suh and Tong Chen (PhD '13), and postdoctoral scholar Jiang Li (PhD '13). The project was in collaboration with Caltech startup company hQphotonics. This work was funded by the Defense Advanced Research Projects Agency; Caltech's Kavli Nanoscience Institute; and the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center with support of the Gordon and Betty Moore Foundation.


New Gut Bacterium Discovered in Termite's Digestion of Wood

Caltech researchers find new species of microbe responsible for acetogenesis, an important process in termite nutrition.

When termites munch on wood, the small bits are delivered to feed a community of unique microbes living in their guts, and in a complex process involving multiple steps, these microbes turn the hard, fibrous material into a nutritious meal for the termite host. One key step uses hydrogen to convert carbon dioxide into organic carbon—a process called acetogenesis—but little is known about which gut bacteria play specific roles in the process. Utilizing a variety of experimental techniques, researchers from the California Institute of Technology (Caltech) have now discovered a previously unidentified bacterium—living on the surface of a larger microorganism in the termite gut—that may be responsible for most gut acetogenesis.

"In the termite gut, you have several hundred different species of microbes that live within a millimeter of one another. We know certain microbes are present in the gut, and we know microbes are responsible for certain functions, but until now, we didn't have a good way of knowing which microbes are doing what," says Jared Leadbetter, professor of environmental microbiology at Caltech, in whose laboratory much of the research was performed. He is also an author of a paper about the work published the week of September 16 in the online issue of the Proceedings of the National Academy of Sciences (PNAS).

Acetogenesis is the production of acetate (a source of nutrition for termites) from the carbon dioxide and hydrogen generated by gut protozoa as they break down decaying wood. In their study of "who is doing what and where," Leadbetter and his colleagues searched the entire pool of termite gut microbes to identify specific genes from organisms responsible for acetogenesis.

The researchers began by sifting through the microbes' RNA—genetic information that can provide a snapshot of the genes active at a certain point in time. Using RNA from the total pool of termite gut microbes, they searched for actively transcribed formate dehydrogenase (FDH) genes, known to encode a protein necessary for acetogenesis. Next, using a method called multiplex microfluidic digital polymerase chain reaction (digital PCR), the researchers sequestered the previously unstudied individual microbes into tiny compartments to identify the actual microbial species carrying each of the FDH genes. Some of the FDH genes were found in types of bacteria known as spirochetes—a previously predicted source of acetogenesis. Yet it appeared that these spirochetes alone could not account for all of the acetate produced in the termite gut.

Initially, the Caltech researchers were unable to identify the microorganism expressing the single most active FDH gene in the gut. However, the first authors on the study, Adam Rosenthal, a postdoctoral scholar in biology at Caltech, and Xinning Zhang (PhD '10, Environmental Science and Engineering), noticed that this gene was more abundant in the portion of the gut extract containing wood chunks and larger microbes, like protozoans. After analyzing the chunkier gut extract, they discovered that the single most active FDH gene was encoded by a previously unstudied species from a group of microbes known as the deltaproteobacteria. This was the first evidence that a substantial amount of acetate in the gut may be produced by a non-spirochete.

Because the genes from this deltaproteobacterium were found in the chunky particulate matter of the termite gut, the researchers thought that perhaps the newly identified microbe attaches to the surface of one of the chunks. To test this hypothesis, the researchers used a color-coded visualization method called hybridization chain reaction-fluorescent in situ hybridization, or HCR-FISH.

The technique—developed in the laboratory of Niles Pierce, professor of applied and computational mathematics and bioengineering at Caltech, and a coauthor on the PNAS study—allowed the researchers to simultaneously "paint" cells expressing both the active FDH gene and a gene identifying the deltaproteobacterium, using a different fluorescent color for each. "The microfluidics experiment suggested that the two colors should be expressed in the same location and in the same tiny cell," Leadbetter says. And, indeed, they were. "Through this approach, we were able to actually see where the new deltaproteobacterium resided. As it turns out, the cells live on the surface of a very particular hydrogen-producing protozoan."

This association between the two organisms makes sense based on what is known about the complex food web of the termite gut, Leadbetter says. "Here you have a large eukaryotic single cell—a protozoan—which is making hydrogen as it degrades wood, and you have these much smaller hydrogen-consuming deltaproteobacteria attached to its surface," he says. "So, this new acetogenic bacterium is snuggled up to its source of hydrogen just as close as it can get."

This intimate relationship, Leadbetter says, might never have been discovered relying on phylogenetic inference—the standard method for matching a function to a specific organism. "Using phylogenetic inference, we say, 'We know a lot about this hypothetical organism's relatives, so without ever seeing the organism, we're going to make guesses about who it is related to,'" he says. "But with the techniques in this study, we found that our initial prediction was wrong. Importantly, we have been able to determine the specific organism responsible and a location of the mystery organism, both of which appear to be extremely important in the consumption of hydrogen and turning it into a product the insect can use." These results not only identify a new source for acetogenesis in the termite gut—they also reveal the limitations of making predictions based exclusively on phylogenetic relationships.

Other Caltech coauthors on the paper titled "Localizing transcripts to single cells suggests an important role of uncultured deltaproteobacteria in the termite gut hydrogen economy," are graduate student Kaitlyn S. Lucey (environmental science and engineering), Elizabeth A. Ottesen (PhD '08, biology), graduate student Vikas Trivedi (bioengineering), and research scientist Harry M. T. Choi (PhD '10, bioengineering). This work was funded by the U.S. Department of Energy, the National Science Foundation, the National Institutes of Health, the Programmable Molecular Technology Center within the Beckman Institute at Caltech, a Donna and Benjamin M. Rosen Center Bioengineering scholarship, and the Center for Environmental Microbial Interactions at Caltech.


What Causes Some to Participate in Bubble Markets?

Caltech research shows neural underpinnings of financially risky behavior

During financial bubbles, such as the one that centered around the U.S. housing market and triggered the Great Recession, some investors react differently than others. Some rush in, trying to "time" the market's rise and fall, while others play it safe and bow out. Ever wonder what accounts for such differences? New neuroeconomic research at the California Institute of Technology (Caltech) has found that the investors most likely to take a risk and fuel bubble markets are those with good "theory of mind" skills—those who are good at "putting themselves in others' shoes." They think the most about the motives behind prices and what other people in the market are likely to do next, but during bubble markets, that actually becomes risky behavior.

The finding is contrary to what some economists have suggested—that financial bubbles are driven by confusion or denial on the part of investors and traders.

"What we find is that the people who are most susceptible to bubbles are not just reckless traders getting caught up in a frenzy," says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at Caltech. "Instead, when there are unusual patterns in trading activity, these people are actually thinking a lot about what it means, and they're deciding to jump in."

Camerer is one of the principal investigators on a new paper describing the study and its results in the September 16 issue of the journal Neuron. The study was led by Benedetto De Martino, senior research fellow at Royal Holloway, University of London, while he was a postdoctoral scholar at Caltech.

An important message from the study, De Martino says, is that it shows "when we interact with complex modern institutions, like financial markets, the same neural computational mechanisms that have been extremely advantageous in our evolutionary history can turn against us, biasing our choices with potentially catastrophic effects." Indeed, theory of mind is typically considered a beneficial skill that can help an individual navigate everything from everyday social situations to emergency scenarios.

The findings center around two regions of the brain. One, called the ventromedial prefrontal cortex (vmPFC), can be thought of as "the brain's accountant" because it encodes value. The other, the dorsomedial prefrontal cortex (dmPFC), is strongly associated with theory of mind.

In the study, the researchers used functional magnetic resonance imaging (fMRI) to monitor blood flow in the brains of student participants as they interacted with replayed financial market experiments. Such blood flow is considered a proxy for brain activity. Each participant was given $60 and then served as an outside observer of a series of six trading sessions involving other traders; each trading session lasted 15 periods, and after each period the dividend for the traded asset decreased by $0.24. At various points during the trial, the students were asked to imagine that they were traders and to decide whether they would want to stick with their current holdings or buy or sell shares at the going price.

In half of the sessions, trading resulted in a bubble market in which prices ended up significantly higher than the actual, or fundamental, value of the asset being traded. In the other three sessions, prices tracked the fundamental value fairly closely and never exceeded it.
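The arithmetic behind the fundamental value can be sketched briefly. In experimental asset markets of this kind, an asset that pays out over 15 periods and loses $0.24 of value after each period has a fundamental value that declines linearly; the $3.60 starting value below is an inference from those two numbers, not a figure stated in the article.

```python
# Sketch of the declining fundamental value in the trading sessions.
# The article gives 15 periods and a $0.24 drop per period; the $3.60
# starting value is inferred (15 x $0.24), not stated in the article.

PERIODS = 15
DROP_PER_PERIOD = 0.24

def fundamental_value(period: int) -> float:
    """Fundamental value at the start of a given period (1-indexed)."""
    remaining = PERIODS - (period - 1)
    return round(remaining * DROP_PER_PERIOD, 2)

values = [fundamental_value(p) for p in range(1, PERIODS + 1)]
print(values)  # declines linearly from 3.60 down to 0.24
```

A bubble session is then simply one in which traded prices climb well above this declining schedule.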

The researchers found that the formation of bubbles was linked to increased activity in the vmPFC, that "accounting" part of the brain that processes value judgments.

Next, they investigated whether the people who were more susceptible to participating in, or "riding," bubbles showed heightened activity in the same brain region. The answer? Yes—those who were willing to participate in the bubble market again displayed more activity in the vmPFC.

To further investigate the theory of mind connection, the researchers asked participants to take the well-known "mind in the eyes" test. The test challenges test takers to choose the word that best describes what various people are thinking or feeling, based solely on pictures of their eyes. The researchers found that study participants who scored highest on the test, and thus discerned the correct feelings most accurately, also showed stronger links between their portfolio values and activity in the dmPFC, one of the brain regions linked to theory of mind activity.

"The way we interpret this is that these people were thinking more about what was going on in the market and wondering why people were behaving the way they were," Camerer explains. "Normally, in everyday social encounters and in specialized professions, this kind of mind reading is useful to the individual. But in these markets, when prices are going crazy, these people think, 'Wow, I think I can figure these markets out. Let me buy and sell.' And that is usually going to contribute to the bubble's momentum and also cost them money."

One of the most innovative parts of the study involved using a new mathematical formula for detecting unusual activity in the trading market. Unlike normal markets in which the mathematical distribution of the arrival of "orders" (offers to buy or sell shares) follows a somewhat steady pattern, bubble markets display restlessness—with flurries of activity followed by lulls. The researchers looked to see if any brain regions showed signs of tracking this unusual distribution of orders during bubble markets. And they found a strong association with the dmPFC and vmPFC. Heightened activity in these prefrontal regions, the team suspects, is a sign that participants are more likely to ride the bubble market, perhaps because they subconsciously believe that there are insiders with extra information operating within the market.

Another of the paper's senior authors, Peter Bossaerts, completed the work at Caltech and is now at the University of Utah. He explains: "It's a group illusion. When participants see the inconsistency in order flow, they think that there are people who know better in the marketplace and they make a game out of it. In reality, however, there is nothing to be gained because nobody knows better."

The research could eventually help in the design of better social and financial interventions to avoid the formation of bubbles in financial markets, as well as methods for individual traders and brokers to manage their trading better.

The Neuron paper is titled "In the Mind of the Market: Theory of Mind Biases Value Computation During Financial Bubbles." Along with Camerer, De Martino, and Bossaerts, additional Caltech coauthors are John O'Doherty, professor of psychology, and Debajyoti Ray, a graduate student in Computation and Neural Systems. The work was supported by a Sir Henry Wellcome Postdoctoral Fellowship, the Betty and Gordon Moore Foundation, and the Lipper Family Foundation.

Kimm Fesenmaier