A Detailed Look at HIV in Action

Researchers gain a better understanding of the virus through electron microscopy

The human intestinal tract, or gut, is best known for its role in digestion. But this collection of organs also plays a prominent role in the immune system. In fact, it is one of the first parts of the body that is attacked in the early stages of an HIV infection. Knowing how the virus infects cells and accumulates in this area is critical to developing new therapies for the over 33 million people worldwide living with HIV. Researchers at the California Institute of Technology (Caltech) are the first to have used high-resolution electron microscopy to look at HIV infection within the actual tissue of an infected organism, providing perhaps the most detailed characterization yet of HIV infection in the gut.

The team's findings are described in the January 30 issue of PLOS Pathogens.

"Looking at a real infection within real tissue is a big advance," says Mark Ladinsky, an electron microscope scientist at Caltech and lead author of the paper. "With something like HIV, it's usually very difficult and dangerous to do because the virus is an infectious agent. We used an animal model implanted with human tissue so we can study the actual virus under, essentially, its normal circumstances."

Ladinsky worked with Pamela Bjorkman, Max Delbrück Professor of Biology at Caltech, to take three-dimensional images of normal cells along with HIV-infected tissues from the gut of a mouse model engineered to have a human immune system. The team used a technique called electron tomography, in which a tissue sample is embedded in plastic and placed under a high-powered microscope. Then the sample is tilted incrementally through a course of 120 degrees, and pictures are taken of it at one-degree intervals. All of the images are then very carefully aligned with one another and, through a process called back projection, turned into a 3-D reconstruction that allows different places within the volume to be viewed one pixel at a time.
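The back-projection step described above can be sketched in a few lines of Python. This is a toy illustration only, not the lab's actual reconstruction pipeline: the function name, array shapes, and the simple linear-interpolation smearing are assumptions made for the example. It reconstructs a single 2-D slice from a stack of 1-D tilt projections by smearing each projection back across the image along its tilt direction and averaging.

```python
import numpy as np

def back_project(sinogram, angles_deg):
    """Reconstruct a 2-D slice from 1-D tilt projections by simple back projection.

    sinogram: array of shape (n_angles, n_detectors), one projection per tilt.
    angles_deg: tilt angles in degrees (e.g. -60 to +60 in one-degree steps).
    """
    n_angles, n_det = sinogram.shape
    recon = np.zeros((n_det, n_det))
    c = (n_det - 1) / 2.0                       # centre of the reconstruction grid
    ys, xs = np.mgrid[0:n_det, 0:n_det] - c     # pixel coordinates about the centre
    for proj, theta in zip(sinogram, np.deg2rad(angles_deg)):
        # Position of each image pixel along the tilted detector axis
        t = xs * np.cos(theta) + ys * np.sin(theta) + c
        # Smear the 1-D projection back across the image at this angle
        # (np.interp clamps at the detector edges -- a simplification here)
        recon += np.interp(t.ravel(), np.arange(n_det), proj).reshape(n_det, n_det)
    return recon / n_angles
```

Real electron tomography adds alignment of the tilt series and weighting filters before this step, but the core idea is the same: features consistent across many tilt angles reinforce one another, while inconsistent smears average away.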

"Most prior electron microscopy studies of HIV have focused on the virus itself or on infection of laboratory-grown cell cultures," says Bjorkman, who is also an investigator with the Howard Hughes Medical Institute. "Ours is the first major electron microscopy study to look at HIV interacting with other cells in the actual gut tissue of an infected animal model."

By procuring such detailed images, Ladinsky and Bjorkman were able to confirm several observations of HIV made in prior in vitro studies, including structural details of the virus as it buds off of infected cells and moves into the surrounding tissue. The team also described several novel observations: the existence of "pools" of HIV between cells; evidence that HIV can infect new cells either by direct contact or via free virus in the same tissue; and the presence of those viral pools deep in the gut.

"The study suggests that an infected cell releases newly formed viruses in a semisynchronous wave pattern," explains Ladinsky. "It doesn't look like one virus buds off and then another in a random way. Rather, it appears that groups of virus bud off from a given cell within a certain time frame and then, a little while later, another group does the same, and then another, and so on."

The team came to this conclusion by identifying single infected cells using electron microscopy. Then they looked for HIV particles at different distances from the original cell and saw that the groups of particles were more mature as their distance from the infected cell increased.

"This finding showed that indeed these cells were producing waves of virus rather than individual ones, which was a neat observation," says Ladinsky.

In addition to producing waves of virus, infected cells are also thought to spread HIV through direct contact with their neighbors. Bjorkman and Ladinsky were able to visualize this phenomenon, known as a virological synapse, using electron microscopy.

"We were able to see one cell producing a viral bud that is contacting the cell next to it, suggesting that it's about to infect directly," Ladinsky says. "The space between those two cells represents the virological synapse."

Finally, the team found pools of HIV accumulating between cells where there was no indication of a virological synapse. This suggested that a virological synapse, which may be protected from some of the body's immune defenses, is not the only way in which HIV can infect new cells. The finding of HIV transfer via free pools of virus offers hope that treatment with protein-based drugs, such as antibodies, could be an effective means of augmenting or replacing current treatment regimens that use small-molecule antiretroviral drugs.

"We saw these pools of virus in places where we had not initially expected to see them, down deep in the intestine," he explains. "Most of the immune cells in the gut are found higher up, so finding large amounts of the virus in the crypt regions was surprising."

The team will continue their efforts to look at HIV and related viruses under natural conditions using additional animal models, and potentially people.

"The end goal is to look at a native infection in human tissue to get a real picture of how it's working inside the body, and hopefully make a positive difference in fighting this epidemic," says Bjorkman.

Additional authors on the PLOS Pathogens paper, "Electron Tomography of HIV-1 Infection in Gut-Associated Lymphoid Tissue," are Collin Kieffer, a postdoctoral scholar in biology at Caltech; Gregory Olson and Douglas S. Kwon from the Ragon Institute of Massachusetts General Hospital (MGH), MIT, and Harvard; and Maud Deruaz, Vladimir Vrbanac, and Andrew M. Tager from MGH and Harvard Medical School. The work was supported by the Center for the Structural Biology of Cellular Host Elements in Egress, Trafficking and Assembly of HIV (CHEETAH).

 

Writer: Katie Neith
News Type: Research News

Worry on the Brain

Caltech researchers pinpoint neural circuitry that promotes stress-induced anxiety

According to the National Institute of Mental Health, over 18 percent of American adults suffer from anxiety disorders, characterized as excessive worry or tension that often leads to other physical symptoms. Previous studies of anxiety in the brain have focused on the amygdala, an area known to play a role in fear. But a team of researchers led by biologists at the California Institute of Technology (Caltech) had a hunch that understanding a different brain area, the lateral septum (LS), could provide more clues into how the brain processes anxiety. Their instincts paid off—using mouse models, the team has found a neural circuit that connects the LS with other brain structures in a manner that directly influences anxiety.

"Our study has identified a new neural circuit that plays a causal role in promoting anxiety states," says David Anderson, the Seymour Benzer Professor of Biology at Caltech, and corresponding author of the study. "Part of the reason we lack more effective and specific drugs for anxiety is that we don't know enough about how the brain processes anxiety. This study opens up a new line of investigation into the brain circuitry that controls anxiety."

The team's findings are described in the January 30 issue of the journal Cell.

Led by Todd Anthony, a senior research fellow at Caltech, the researchers decided to investigate the so-called septohippocampal axis because previous studies had implicated this circuit in anxiety, and had also shown that neurons in a structure located within this axis—the LS—lit up, or were activated, when anxious behavior was induced by stress in mouse models. But does the fact that the LS is active in response to stressors mean that this structure promotes anxiety, or does it mean that this structure acts to limit anxiety responses following stress? The prevailing view in the field was that the nerve pathways that connect the LS with different brain regions function as a brake on anxiety, to dampen a response to stressors. But the team's experiments showed that the exact opposite was true in their system.

In the new study, the team used optogenetics—a technique that uses light to control neural activity—to artificially activate a set of specific, genetically identified neurons in the LS of mice. During this activation, the mice became more anxious. Moreover, the researchers found that even a brief, transient activation of those neurons could produce a state of anxiety lasting for at least half an hour. This indicates that not only are these cells involved in the initial activation of an anxious state, but also that an anxious state persists even after the neurons are no longer being activated.

"The counterintuitive feature of these neurons is that even though activating them causes more anxiety, the neurons are actually inhibitory neurons, meaning that we would expect them to shut off other neurons in the brain," says Anderson, who is also an investigator with the Howard Hughes Medical Institute (HHMI).

So, if these neurons are shutting off other neurons in the brain, then how can they increase anxiety? The team hypothesized that the process might involve a double-inhibitory mechanism: two negatives make a positive. When they took a closer look at exactly where the LS neurons were making connections in the brain, they saw that they were inhibiting other neurons in a nearby area called the hypothalamus. Importantly, most of those hypothalamic neurons were, themselves, inhibitory neurons. Moreover, those hypothalamic inhibitory neurons, in turn, connected with a third brain structure called the paraventricular nucleus, or PVN. The PVN is well known to control the release of hormones like cortisol in response to stress and has been implicated in anxiety.

This anatomical circuit seemed to provide a potential double-inhibitory pathway through which activation of the inhibitory LS neurons could lead to an increase in stress and anxiety. The team reasoned that if this hypothesis were true, then artificial activation of LS neurons would be expected to cause an increase in stress hormone levels, as if the animal were stressed. Indeed, optogenetic activation of the LS neurons increased the level of circulating stress hormones, consistent with the idea that the PVN was being activated. Moreover, inhibition of LS projections to the hypothalamus actually reduced the rise in cortisol when the animals were exposed to stress. Together these results strongly supported the double-negative hypothesis.

"The most surprising part of these findings is that the outputs from the LS, which were believed primarily to act as a brake on anxiety, actually increase anxiety," says Anderson.

Knowing the sign—positive or negative—of the effect of these cells on anxiety, he says, is a critical first step to understanding what kind of drug one might want to develop to manipulate these cells or their molecular constituents. If the cells had been found to inhibit anxiety, as originally thought, then one would want to find drugs that activate these LS neurons, to reduce anxiety. However, since the group found that these neurons instead promote anxiety, then to reduce anxiety a drug would have to inhibit these neurons.

"We are still probably a decade away from translating this very basic research into any kind of therapy for humans, but we hope that the information that this type of study yields about the brain will put the field and medicine in a much better position to develop new, rational therapies for psychiatric disorders," says Anderson. "There have been very few new psychiatric drugs developed in the last 40 to 50 years, and that's because we know so little about the brain circuitry that controls the emotions that go wrong in a psychiatric disorder like depression or anxiety."

The team will continue to map out this area of the brain in greater detail to understand more about its role in controlling stress-induced anxiety.

"There is no shortage of new questions that have been raised by these findings," Anderson says. "It may seem like all that we've done here is dissect a tiny little piece of brain circuitry, but it's a foothold onto a very big mountain. You have to start climbing someplace."

Additional authors on the Cell paper, "Control of Stress-Induced Persistent Anxiety by an Extra-Amygdala Septohypothalamic Circuit," are Walter Lerchner from the National Institutes of Health (NIH), Nick Dee and Amy Bernard from the Allen Institute for Brain Science, and Nathaniel Heintz from The Rockefeller University and HHMI. The work was supported by NIH, HHMI, and the Beckman Institute at Caltech.

Writer: Katie Neith

From Rivers to Landslides: Charting the Slopes of Sediment Transport

In the Earth Surface Dynamics Lab at the California Institute of Technology (Caltech), the behavior of rivers is modeled through the use of artificial rivers—flumes—through which water can be pumped at varying rates over a variety of carefully graded sediments while drag force and acceleration are measured. The largest flume is a 12-meter tilting version that can model many river conditions; another models the languid process of a nearly flat river bed forming a delta as it reaches a pool. Additional flumes are constructed in the lab on an as-needed basis, as in a recent study testing sediment transport in very steep channels.

One such newly constructed flume demonstrates that the slope of streambeds has dramatic and unexpected effects on sediment transport. Logic would suggest that steeper streambeds should allow for easy sediment transport since, as the angle of the slope increases, gravity should assist with moving water and sediment downstream. But experimental data from the flume lab show that gravity does not facilitate sediment transport in the expected manner. Furthermore, in very steep streambeds with a 22-degree or higher slope, sediment motion begins not with grains skipping and bouncing along the bottom of the streambed, but rather with a complete bed failure in which all the sediment is abruptly sent hurtling downstream as a debris flow.

"Most previous work was done on low-gradient channels with a gentle slope," says Michael P. Lamb, assistant professor of geology at Caltech. "These are the rivers, like the Mississippi, where people live and pilot boats, and where we worry about flooding. Low-gradient channels have been studied by civil engineers for hundreds of years." Much less attention has been paid to steeper mountain channels, in part because they are more difficult to study. "Counterintuitively, in steep channels sediment rarely moves, and when it does it is extremely dangerous to measure since it typically includes boulders and large cobbles," explains Lamb.

And so Lamb, along with Caltech graduate student Jeff Prancevic and staff scientist Brian Fuller, set out to model the behavior of steep channels on an artificial watercourse—a flume—that they created for just this purpose. They intentionally removed key variables that occur in nature, such as unevenness in grain size and in the streambed itself (in steep channels there are often varying slopes with waterfalls and pools), so that they could concentrate solely on the effect of bed slope on sediment transport. They created a uniform layer of gravel on the bed of the flume and then began running water down it in increasing quantities, measuring how much water was required to initiate sediment motion. Gradually they tilted the flume to steeper angles, continuing to observe when and how sediment moved as water was added to the system.

Based on studies of sediment motion in low-gradient channels, geologists have long assumed that there is a linear relation between a watercourse's slope and the stress placed by water and gravity on the streambed. That is, as the angle of the streambed increases, the quantity of water required to move sediment should decrease in a simple 1-to-1 ratio. Lamb and Prancevic's flume experiments did indeed show that steeper slopes require less water to move sediment than flatter streambeds. But contrary to earlier predictions, one cannot simply raise the slope by, say, 2 percent while decreasing the water depth by 2 percent and see the same pattern of sediment transport. Instead, as the flume tilted upward in these experiments, a proportionately greater amount of water was needed to initiate sediment motion. By the time the flume was tilted to a slope of 20 degrees, five times the depth of water as previously predicted was needed to move the gravel downstream.
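The expectation the geologists started from can be written down in one line: for steady flow, the bed shear stress scales as the product of water depth and slope, and grains move once that stress crosses a critical threshold, so the depth needed to move sediment should fall in direct proportion as the slope rises. A back-of-the-envelope Python sketch makes the departure concrete. The grain size, densities, the critical Shields number of 0.045, and the fivefold correction on steep beds are all illustrative assumptions for this example, not the study's fitted values.

```python
import math

RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81   # water density, quartz density (kg/m^3), gravity
D = 0.02                                  # grain diameter (m), illustrative

def critical_depth(slope_deg, shields_crit):
    """Flow depth at which bed stress (rho_w * g * h * S) first moves grains of size D."""
    S = math.tan(math.radians(slope_deg))
    tau_crit = shields_crit * (RHO_S - RHO_W) * G * D   # critical bed shear stress
    return tau_crit / (RHO_W * G * S)

# Classic assumption: one critical Shields number (~0.045) at every slope,
# so the required depth drops in direct proportion as the bed steepens.
h_gentle = critical_depth(2.0, 0.045)
h_steep_predicted = critical_depth(20.0, 0.045)

# The flume data instead imply a much larger threshold on steep beds: raising
# the Shields number ~5x (illustrative) reproduces the observation that about
# five times the predicted depth of water was needed at a 20-degree slope.
h_steep_observed = critical_depth(20.0, 5 * 0.045)
```

The point of the sketch is the ratio, not the absolute numbers: under the classic constant-threshold assumption, slope and required depth trade off one for one, whereas the experiments show the threshold itself climbing with slope.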

At one level, this experimental data squares with field observations. "If you go out to the Mississippi," says Lamb, "sand is moving almost all the time along the bed of the river. But in mountain channels, the sediment that makes up the bed of the river very rarely moves except during extreme flood events. This sediment is inherently more stable, which is the opposite of what you might expect." The explanation for why this is the case seems to lie with the uneven terrain and shallow waters common to streams in steep mountain terrain.

Experiments with the tilting flume also allowed Lamb and Prancevic to simulate important transitions in sediment transport: from no motion at all, to normal fluvial conditions in which sediment rolls along the streambed, to bed failure, in which the entire sediment bed gives way in a debris flow, stripping the channel down to bedrock. The researchers found that with lower slopes, as the water discharge was increased, individual grains of sediment began to break free and tumble along the flume bed; this pattern is common to the sediment-movement processes of low-gradient riverbeds. As the slope increased, the sediment became more stable, requiring proportionately more water to begin sediment transport. Eventually, the slope reached a transition zone where regular river processes were completely absent. In these steeply sloped flumes, the first sediment motion that occurred represented a complete bed failure, in which all of the grains slid down the channel en masse. "This suggests that there's a certain slope, around 22 degrees in our experiments, where sediment is the most stable, but these channel slopes are also potentially the most dangerous because here the sediment bed can fail catastrophically in rare, large-magnitude flood events," Lamb explains.

Researchers previously believed that debris flows in mountain terrain primarily derived from rainfall-triggered landslides flowing into watercourses from surrounding hillsides. However, the flume-lab experiments suggest that a debris flow can occur in a steep river channel in the absence of such a landslide, simply as a result of increased water discharge over the streambed.

"Understanding when and how sediment first moves at different channel slopes can be used to predict the occurrence of debris flows which affect people and infrastructure," Lamb says. There are other, wide-ranging implications. For example, some fish, like salmon, build their nests only in gravel of a certain size, he notes, and so, "as rivers are increasingly being restored for fish habitat, it is important to know what slopes and flow depths will preserve a particular size of gravel on the riverbed." In addition, he adds, "a better understanding of sediment transport can be used to reconstruct environments of Earth's past or on other planets, such as Mars, through observations of previously moved sediment, now preserved in deposits."

The paper, "Incipient sediment motion across the river to debris-flow transition," appears in the journal Geology. Funding was provided by the National Science Foundation, the Terrestrial Hazard Observation and Reporting Center at Caltech, and the Keck Institute for Space Studies.

Writer: Cynthia Eller

Galaxies on FIRE: Star Feedback Results in Less Massive Galaxies

For decades, astrophysicists have encountered a puzzling contradiction: although many cosmological models—simulations of how matter is distributed in our universe—predict that the majority of the "normal" matter should end up in stars at the centers of galaxies, these stars actually account for less than 10 percent of the matter in the universe. A new set of simulations offers insight into this mismatch between the models and reality: the energy released by individual stars within galaxies can have a substantial effect on where matter is located in the universe.

The Feedback in Realistic Environments, or FIRE, project is the culmination of a multiyear, multiuniversity effort that—for the first time—simulates the evolution of galaxies from shortly after the Big Bang through today. FIRE is the first simulation to factor in the realistic effects of stars on their galaxies, and its results suggest that the radiation from stars is powerful enough to push matter out of galaxies. This push is enough to account for the "missing" galactic mass in previous calculations, says Philip Hopkins, assistant professor of theoretical astrophysics at the California Institute of Technology (Caltech) and lead author of a paper resulting from the project.

"People have guessed for a long time that the 'missing physics' in these models was what we call feedback from stars," Hopkins says. "When stars form, they should have a dramatic impact on the galaxies in which they arise, through the radiation they emit, the winds they blow off of their surfaces, and their explosions as supernovae. Previously, it has not been possible to directly follow any of these processes within a galaxy, so the earlier models simply estimated—indirectly—the impact of these effects."

By incorporating the data of individual stars into whole-galaxy models, Hopkins and his colleagues can look at the actual effects of star feedback—how radiation from stars "pushes" on galactic matter—in each of the galaxies they study. With new and improved computer codes, Hopkins and his colleagues can now focus their model on specific galaxies, using what are called zoom-in simulations. "Zoom-in simulations allow you to 'cut out' and study just the region of the universe—a few million light-years across, for example—around what's going to become the galaxy you care about," he says. "It would be crazy expensive to run simulations of the entire universe—about 50 billion light-years across—all at once, so you just pick one galaxy at a time, and you concentrate all of your resolution there."

A zoomed-in view of evolving stars within galaxies allows the researchers to see the radiation from stars and supernovae explosions blowing large amounts of material out of those galaxies. When they calculate the amount of matter lost from the galaxies during these events, that feedback from stars in the simulation accurately accounts for the low masses that have been actually observed in real galaxies. "The big thing that we are able to explain is that real galaxies are much less massive than they would be if these feedback processes weren't operating," he says. "So if you care about the structure of a galaxy, you really need to care about star formation and supernovae—and the effect of their feedback on the galaxy."

But once stars push this matter out of the galaxy, where does it go?

That's a good question, Hopkins says—and one that the researchers hope to answer by combining their simulations with new observations in the coming months.

"Stars and supernovae seem to produce these galactic superwinds that blow material out into what we call the circum- and intergalactic medium—the space around and between galaxies. It's really timely for us because there are a lot of new observations of the gas in this intergalactic medium right now, many of them coming from Caltech," Hopkins says. "For example, people have recently found that there are more heavy elements floating around a couple hundred thousand light-years away from a galaxy than are actually inside the galaxy itself. You can track the lost matter by finding these heavy elements; we know they are only made in the fusion in stars, so they had to be inside a galaxy at some point. This fits in with our picture and we can now actually start to map out where this stuff is going."

Although the FIRE simulations can accurately account for the low mass of small- to average-size galaxies, the physics included, as in previous models, can't explain all of the missing mass in very large galaxies—like those larger than our Milky Way. Hopkins and his colleagues have hypothesized that black holes at the centers of these large galaxies might release enough energy to push out the rest of the matter not blown out by stars. "The next step for the simulations is accounting for the energy from black holes that we've mostly ignored for now," he says.

The information provided by the FIRE simulations shows that feedback from stars can alter the growth and history of galaxies in a much more dramatic way than anyone had previously anticipated, Hopkins says. "We've just begun to explore these new surprises, but we hope that these new tools will enable us to study a whole host of open questions in the field."

These results were submitted to the Monthly Notices of the Royal Astronomical Society on November 8, 2013 in a paper titled "Galaxies on FIRE (Feedback In Realistic Environments): Stellar Feedback Explains Cosmologically Inefficient Star Formation." In addition to Hopkins, other authors on the paper include Dušan Kereš, UC San Diego; José Oñorbe and James S. Bullock, UC Irvine; Claude-André Faucher-Giguère, Northwestern University; Eliot Quataert, UC Berkeley; and Norman Murray, the Canadian Institute for Theoretical Astrophysics. Hopkins's work was funded by the National Science Foundation and a NASA Einstein Postdoctoral Fellowship, as well as the Gordon and Betty Moore Foundation.


Fighting Flies

Caltech biologists identify sex-specific brain cells in male flies that promote aggression

A group of fruit flies invading your kitchen probably appears to be vying as one for a sweet treat. But a closer look would likely reveal that the male flies in the group are putting up more of a fight, particularly if ripe fruit or female flies are present. According to the latest studies from the fly laboratory of California Institute of Technology (Caltech) biologist David Anderson, male Drosophila, commonly known as fruit flies, fight more than their female counterparts because they have special cells in their brains that promote fighting. These cells appear to be absent in the brains of female fruit flies.

"The sex-specific cells that we identified exert their effects on fighting by releasing a particular type of neuropeptide, or hormone, that has also been implicated in aggression in mammals including mouse and rat," says Anderson, the Seymour Benzer Professor of Biology at Caltech, and corresponding author of the study. "In addition, there are some recent papers implicating increased levels of this hormone in people with personality disorders that lead to higher levels of aggression."

The team's findings are outlined in the January 16 issue of the journal Cell.

At first glance, a fruit fly may seem nothing like a human being. But look much closer, at a genetic level, and you will find that many of the genes seen in these flies are also present—and play similar roles—in humans. However, while such conservation holds for genes involved in basic cellular functions and in development, whether it was also true for genes controlling complex social behaviors like aggression was far from clear.

"Our studies are the first, to our knowledge, to identify a gene that plays a conserved role in aggression all the way from flies to humans," explains Anderson, who is also a Howard Hughes Medical Institute investigator. If that is true for one such gene, it is also is likely true for others, Anderson says. "Our study validates using fruit flies as a model to discover new genes that may also control aggression in humans."

The less complex nervous system of the fruit fly makes it easier to study than people or even mice, another genetic model organism. For this particular study, the research team created a small library of fly lines; in each line, a different set of specific neurons was genetically labeled and could be artificially activated, with each neuron type secreting a different neuropeptide. Forty such lines were tested for their ability to increase aggression when their labeled neurons were activated. The line that produced the most dramatic increase in aggression had neurons expressing a particular neuropeptide called tachykinin, or Tk.

Next, Anderson and his colleagues used a set of genetic tools to identify exactly which neurons were responsible for the effect on aggression and to see if the gene that encodes for Tk also controls aggressive behavior by acting in that cell.

"We had to winnow away the different cells to find exactly which ones were involved in aggression—that's how we discovered that within this line, there was a male-specific set of neurons that was responsible for increased aggressive behavior," explains Kenta Asahina, a postdoctoral scholar in Anderson's lab and lead author of the study. Male-specific neurons controlling courtship behavior had previously been identified in flies, but this was the first time a male-specific neuron was found that specifically controls aggression. Having identified that neuron, the team was then able to modify its gene expression. Says Asahina, "We found that if you overproduce the gene in that cell and then stimulate the cell, you get an even stronger effect to promote aggression than if you stimulate the cell without overproducing the gene."

In fact, combining cell activation and the overproduction of the neuropeptide, which is released when the cell is activated, caused the flies to attack targets they normally would not. For example, when the researchers eliminated cues that normally promote aggression in a target fly — such as pheromones — the flies containing the hyperactivated "aggression" neurons attacked those targets despite the absence of the cues.

Moreover, this combined activation of the cell and the gene produced such a strong effect that the researchers were even able to get a fly to attack an inanimate object—a fly-sized magnet—when it was moved around in an arena.

Such behavior had never been observed previously. "A normal fly will chase the magnet, but will never attack the magnet," Asahina explains. "By over-activating these neurons, we are able to get the fly to attack an object that displays none of the normal signals that are required to elicit aggression from another fly."

"These results suggest that what these neurons are doing is promoting a state of aggressive arousal in the fly," Anderson says. "This elevated level of aggressiveness drives the fly to attack targets it would normally ignore. I wouldn't anthropomorphize the fly and say that it has increased 'anger,' but activating these neurons greatly lowers its threshold for attack."

The finding that these neurons are present in the brains of male but not female flies indicates that this sex difference in aggressive behavior is genetically based. At the same time, Asahina stresses, finding a gene that influences aggression does not mean that aggression is controlled only by genes and always genetically programmed.

"This is a very important distinction, because when people hear about a gene implicated in behavior, they automatically think it means that the behavior is genetically determined. But that is not necessarily the case," he says. "The key point here is that we can say something about how the gene acts to influence this behavior—that is, is by functioning as a chemical messenger in cells that control this behavior in the brain. We've been able to study the problem of aggressive behavior at two levels, the cell level and the gene level, and to link those studies together by genetic experiments."

This research, Anderson says, has given his team a beachhead into the circuitry in the fly brain that controls aggression, a behavior that they will continue to try to decode.

"We have to use this point of entry to discover the larger circuit in which those cells function," Anderson says. "If aggression is like a car, and if more aggression is like a car going faster, we want to know if what we're doing when we trigger these cells is stepping on the gas or taking the foot off the brake. And we want to know where and how that's happening in the brain. That's going to take a lot of work."

Additional Caltech authors on the Cell paper, "Male-specific Tachykinin-expressing neurons control sex differences in levels of aggressiveness in Drosophila," are Kiichi Watanabe, Brian J. Duistermars, Eric Hoopfer, Carlos Roberto González, Eyrún Arna Eyjólfsdóttir, and Pietro Perona. Their work was supported by the National Institutes of Health, a grant from the Gordon and Betty Moore Foundation, the Japan Society for the Promotion of Science, and the Howard Hughes Medical Institute.

Writer: 
Katie Neith
News Type: 
Research News

Bacterial "Syringe" Necessary for Marine Animal Development

If you've ever slipped on a slimy wet rock at the beach, you have bacteria to thank. Those bacteria, nestled in a supportive extracellular matrix, form bacterial biofilms—often slimy substances that cling to wet surfaces. For some marine organisms—like corals, sea urchins, and tubeworms—these biofilms serve a vital purpose, flagging suitable homes for such organisms and actually aiding the transformation of larvae to adults.

A new study at the California Institute of Technology (Caltech) is the first to describe a mechanism for this phenomenon, providing one explanation for the relationship between bacterial biofilms and the metamorphosis of marine invertebrates. The results were published online in the January 9 issue of Science Express.

The study focused on a marine invertebrate that has become a nuisance to the shipping industry since its arrival in U.S. waters during the last half century: the tubeworm Hydroides elegans. The larvae of the invasive pest swim free in the ocean until they come into contact with a biofilm-covered surface, such as a rock or a buoy—or the hull of a ship. After the tubeworm larvae come in contact with the biofilm, they develop into adult worms that anchor to the surface, creating hard, mineralized "tubes" around their bodies. These tubes, which often cover the bottoms of ships, create extra drag in the water, dramatically increasing the ship's fuel consumption.

The tubeworms' unwanted and destructive presence on ships, called biofouling, is a "really bad problem," says Dianne Newman, a professor of biology and geobiology and Howard Hughes Medical Institute (HHMI) investigator at Caltech. "For example, biofouling costs the U.S. Navy millions of dollars every year in excess fuel costs," says Newman, who is also a coauthor of the study. And although researchers have known for decades that biofilms are necessary for tubeworm development, says Nicholas Shikuma, one of the two first authors on the study and a postdoctoral scholar in Newman's laboratory, "there was no mechanistic explanation for how bacteria can actually induce that process to happen. We wanted to provide that explanation."

Shikuma began by investigating Pseudoalteromonas luteoviolacea, a bacterial species known to induce metamorphosis in the tubeworm and other marine invertebrates. In earlier work, Michael G. Hadfield of the University of Hawai'i at Mānoa, a coauthor of the Science Express paper, had identified a group of P. luteoviolacea genes that were necessary for tubeworm metamorphosis. Near those genes, Shikuma found a set of genes that produced a structure similar to the tail of bacteriophage viruses.

The tails of these phage viruses contain three main components: a projectile tube, a contractile sheath that deploys the tube, and an anchoring baseplate. The phage uses these tail components together as a syringe, injecting its genetic material into host bacterial cells, infecting—and ultimately killing—them. To determine whether the phage tail-like structures in P. luteoviolacea played a role in tubeworm metamorphosis, the researchers systematically deleted the genes encoding each of these three components.

Electron microscope images of the bacteria confirmed that syringe-like structures were present in "normal" P. luteoviolacea cells but were absent in cells in which the genes encoding the three structural components had been deleted; these genes are known as metamorphosis-associated contractile structure (mac) genes. The researchers also discovered that the bacterial cells lacking mac genes were unable to induce metamorphosis in tubeworm larvae. Previously, the syringe-like structures had been found in other species of bacteria, but in these species, the tails were deployed to kill other bacteria or insects. The new study provides the first evidence of such structures benefitting another organism, Shikuma says.

In order to view the three-dimensional arrangement of these unique structures within intact bacteria, the researchers collaborated with the laboratory of Grant Jensen, professor of biology and HHMI investigator at Caltech. Utilizing a technique called electron cryotomography, the researchers flash-froze the bacterial cells at very low temperatures. This allowed them to view the cells and their internal structures in their natural, "near-native" states.

Using this visualization technique, Martin Pilhofer, a postdoctoral scholar in Jensen's lab and the paper's other first author, discovered something unique about the phage tail-like structures within P. luteoviolacea: instead of existing as individual appendages, the structures were linked together to create a spiny array. "In these arrays, about 100 tails are stuck together in a hexagonal lattice to form a complex with a porcupine-like appearance," Shikuma says. "They're all facing outward, poised to fire," he adds. "We believe this is the first observation of arrays of phage tail-like structures."

Initially, the array is compacted within each bacterium; however, the cells eventually burst—killing the microbes—and the array unfolds. The researchers hypothesize that, at this point, the individual spines of the array fire outward into the tubeworm larva. Following this assault, the larvae begin their developmental transition to adulthood.

"It was a tremendous surprise that the agent that drives metamorphosis is such an elaborate, well-organized injection machine," says coauthor Jensen. "Who would have guessed that the signal is delivered by an apparatus that is almost as large as the bacterial cell itself? It is simply a marvelous structure, synthesized in a 'loaded' but tightly collapsed state within the cell, which then expands like an umbrella, opening up into a much larger web of syringes that are ready to inject," he says.

Although the study confirms that the phage tail-like structures can cause tubeworm metamorphosis, the nature of the interaction between the tail and the tubeworm is still unknown, Shikuma says. "Our next step is to determine whether metamorphosis is caused by an injection into the tubeworm larva tissue, and, then, if the mechanical action is the trigger, or if the bacterium is injecting a chemical morphogen," he says. He and his colleagues would also like to determine if mac genes and the tail-like structures they encode might influence other marine invertebrates, such as corals and sea urchins, that also rely on P. luteoviolacea biofilms for metamorphosis.

Understanding this process might one day help reduce the financial losses from P. luteoviolacea biofilm fouling on ship hulls, for example. While applications are a long way off, Newman says, it is also interesting to speculate on the possibility of leveraging metamorphosis induction in beneficial marine invertebrates to improve yields in aquaculture and promote coral reef growth.

The study, the researchers emphasize, is an example of the collaborative research that is nurtured at Caltech. For his part, Shikuma was inspired to utilize electron cryotomography after hearing a talk by Martin Pilhofer at the Center for Environmental Microbiology Interactions (CEMI) at Caltech. "Martin gave a presentation on another type of phage tail-like structures in the monthly CEMI seminar. I saw his talk and I thought that the mac genes I was working with might somehow be related," Shikuma says. Their subsequent collaboration, Newman says, made the current study possible.

The paper is titled "Marine tubeworm metamorphosis induced by arrays of bacterial phage tail-like structures." Gregor L. Weiss, a Summer Undergraduate Research Fellowship student in Jensen's laboratory at Caltech, was an additional coauthor on the study. The published work was funded by a Caltech Division of Biology Postdoctoral Fellowship (to N. Shikuma), the Caltech CEMI, the Howard Hughes Medical Institute, the Office of Naval Research, the National Institutes of Health, and the Gordon and Betty Moore Foundation.

Assessing Others: Evaluating the Expertise of Humans and Computer Algorithms

How do we come to recognize expertise in another person and integrate new information with our prior assessments of that person's ability? The brain mechanisms underlying these sorts of evaluations—which are relevant to decisions ranging from whom to hire and whom to marry to whom to elect to Congress—are the subject of a new study by a team of neuroscientists at the California Institute of Technology (Caltech).

In the study, published in the journal Neuron, Antonio Rangel, Bing Professor of Neuroscience, Behavioral Biology, and Economics, and his associates used functional magnetic resonance imaging (fMRI) to monitor the brain activity of volunteers as they moved through a particular task. Specifically, the subjects were asked to observe the shifting value of a hypothetical financial asset and make predictions about whether it would go up or down. Simultaneously, the subjects interacted with an "expert" who was also making predictions.

Half the time, subjects were shown a photo of a person on their computer screen and told that they were observing that person's predictions. The other half of the time, the subjects were told they were observing predictions from a computer algorithm, and instead of a face, an abstract logo appeared on their screen. However, in every case, the subjects were interacting with a computer algorithm—one programmed to make correct predictions 30, 40, 60, or 70 percent of the time.

Subjects' trust in the expertise of agents, whether "human" or not, was measured by how often the subjects bet on the agents' predictions, as well as by how those bets changed over time as the subjects observed more of the agents' predictions and their accuracy.

This trust, the researchers found, turned out to be strongly linked to the accuracy of the subjects' own predictions of the ups and downs of the asset's value.

"We often speculate on what we would do in a similar situation when we are observing others—what would I do if I were in their shoes?" explains Erie D. Boorman, formerly a postdoctoral fellow at Caltech and now a Sir Henry Wellcome Research Fellow at the Centre for FMRI of the Brain at the University of Oxford, and lead author on the study. "A growing literature suggests that we do this automatically, perhaps even unconsciously."

Indeed, the researchers found that subjects increasingly sided with both "human" agents and computer algorithms when the agents' predictions matched their own. Yet this effect was stronger for "human" agents than for algorithms.

This asymmetry—between the value placed by the subjects on (presumably) human agents and on computer algorithms—was present both when the agents were right and when they were wrong, but it depended on whether or not the agents' predictions matched the subjects'. When the agents were correct, subjects were more inclined to trust the human than the algorithm in the future when their predictions matched the subjects' predictions. When the agents were wrong, human experts were easily and often "forgiven" for their blunders when the subject made the same error. But this "benefit of the doubt" vote, as Boorman calls it, did not extend to computer algorithms. In fact, when computer algorithms made inaccurate predictions, the subjects appeared to dismiss the value of the algorithm's future predictions, regardless of whether or not the subject agreed with its predictions.

Since the sequence of predictions offered by "human" and algorithm agents was perfectly matched across different test subjects, this finding shows that the mere suggestion that we are observing a human or a computer leads to key differences in how and what we learn about them.

A major motivation for this study was to tease out the difference between two types of learning: what Rangel calls "reward learning" and "attribute learning." "Computationally," says Boorman, "these kinds of learning can be described in a very similar way: We have a prediction, and when we observe an outcome, we can update that prediction."
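Boorman's "predict, observe, update" description can be made concrete in many ways. One minimal sketch—a generic illustration, not the model actually used in the study—is a beta-Bernoulli update, in which a belief about an agent's accuracy is revised each time one of its predictions turns out to be right or wrong:

```python
# Beta-Bernoulli belief updating about an agent's accuracy.
# A generic illustration only, not the study's actual model.

def update(alpha: float, beta: float, correct: bool) -> tuple[float, float]:
    """Revise a Beta(alpha, beta) belief after observing one prediction."""
    return (alpha + 1, beta) if correct else (alpha, beta + 1)

alpha, beta = 1.0, 1.0                 # uniform prior over accuracy
for outcome in [True, True, False, True]:
    alpha, beta = update(alpha, beta, outcome)

estimate = alpha / (alpha + beta)      # posterior mean accuracy
print(f"estimated accuracy: {estimate:.2f}")  # estimated accuracy: 0.67
```

After three correct predictions and one error, the estimate moves from the 0.50 prior toward 0.67; the asymmetries the study reports would correspond to updating more (or less) strongly depending on whether the agent is believed to be human.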

Reward learning, in which test subjects are given money or other valued goods in response to their own successful predictions, has been studied extensively. Social learning—specifically about the attributes of others (or so-called attribute learning)—is a newer topic of interest for neuroscientists. In reward learning, the subject learns how much reward they can obtain, whereas in attribute learning, the subject learns about some characteristic of other people.

This self/other distinction shows up in the subjects' brain activity, as measured by fMRI during the task. Reward learning, says Boorman, "has been closely correlated with the firing rate of neurons that release dopamine"—a neurotransmitter involved in reward-motivated behavior—and brain regions to which they project, such as the striatum and ventromedial prefrontal cortex. Boorman and colleagues replicated previous studies in showing that this reward system made and updated predictions about subjects' own financial reward. Yet during attribute learning, another network in the brain—consisting of the medial prefrontal cortex, anterior cingulate gyrus, and temporal parietal junction, which are thought to be a critical part of the mentalizing network that allows us to understand the state of mind of others—also made and updated predictions, but about the expertise of the people and algorithms rather than about the subjects' own profit.

The differences in fMRI activity between assessments of human and nonhuman agents were subtler. "The same brain regions were involved in assessing both human and nonhuman agents," says Boorman, "but they were used differently."

"Specifically, two brain regions in the prefrontal cortex—the lateral orbitofrontal cortex and medial prefrontal cortex—were used to update subjects' beliefs about the expertise of both humans and algorithms," Boorman explains. "These regions show what we call a 'belief update signal.'" When the agents were correct, this update signal was stronger if subjects had agreed with a "human" agent than with an algorithm; when the agents were incorrect, it was stronger if subjects had disagreed with an algorithm than with a "human" agent. This finding shows that these brain regions are active when assigning credit or blame to others.

"The kind of learning strategies people use to judge others based on their performance has important implications when it comes to electing leaders, assessing students, choosing role models, judging defendants, and so on," Boorman notes. Knowing how this process happens in the brain, says Rangel, "may help us understand to what extent individual differences in our ability to assess the competency of others can be traced back to the functioning of specific brain regions."

The study, "The Behavioral and Neural Mechanisms Underlying the Tracking of Expertise," was also coauthored by John P. O'Doherty, professor of psychology and director of the Caltech Brain Imaging Center, and Ralph Adolphs, Bren Professor of Psychology and Neuroscience and professor of biology. The research was supported by the National Science Foundation, the National Institutes of Health, the Gordon and Betty Moore Foundation, the Lipper Foundation, and the Wellcome Trust.

Writer: 
Cynthia Eller
News Type: 
Research News

Megafloods: What They Leave Behind

South-central Idaho and the surface of Mars have an interesting geological feature in common: amphitheater-headed canyons. These U-shaped canyons with tall vertical headwalls are found near the Snake River in Idaho as well as on the surface of Mars, according to photographs taken by satellites. Various explanations for how these canyons formed have been offered—some for Mars, some for Idaho, some for both—but in a paper published the week of December 16 in the online issue of Proceedings of the National Academy of Sciences, Caltech professor of geology Michael P. Lamb; Benjamin Mackey, formerly a postdoctoral fellow at Caltech; and W. M. Keck Foundation Professor of Geochemistry Kenneth A. Farley make the case that all of these canyons were carved by enormous floods.

Canyons in Malad Gorge State Park, Idaho, are carved into a relatively flat plain composed of a type of volcanic rock known as basalt. The basalt originated from a hotspot, located in what is now Yellowstone Park, which has been active for the last few million years. Two canyons in Malad Gorge, Woody's Cove and Stubby Canyon, are characterized by tall vertical headwalls, roughly 150 feet high, that curve around to form an amphitheater. Other amphitheater-headed canyons can be found nearby, outside the Gorge—Box Canyon, Blue Lakes Canyon, and Devil's Corral—and also elsewhere on Earth, such as in Iceland.

To figure out how they formed, Lamb and Mackey conducted field surveys and collected rock samples from Woody's Cove, Stubby Canyon, and a third canyon in Malad Gorge, known as Pointed Canyon. As its name indicates, Pointed Canyon ends not in an amphitheater but in a point, as it progressively narrows in the upstream direction toward the plateau at an average 7 percent grade. Through Pointed Canyon flows the Wood River, a tributary of the larger Snake River, which in turn empties into the Columbia River on its way to the Pacific Ocean.

Geologists have a good understanding of how the rocks in Woody's Cove and Stubby Canyon achieved their characteristic appearance. The lava flows that hardened into basalt were initially laid down in layers, some more than six feet thick. As the lava cooled, it contracted and cracked, just as mud does when it dries. This produced vertical cracks across the entire layer of lava-turned-basalt. As each additional sheet of lava covered the same land, it too cooled and cracked vertically, leaving a wall that, when exposed, looks like stacks of tall blocks, slightly offset from one another with each additional layer. This type of structure is called columnar basalt.

While the formation of columnar basalt is well understood, it is not clear how, at Woody's Cove and Stubby Canyon, the vertical walls became exposed or how they took on their curved shapes. The conventional explanation is that the canyons were formed via a process called "groundwater sapping," in which springs at the bottom of the canyon gradually carve tunnels at the base of the rock wall until this undercutting destabilizes the structure so much that blocks or columns of basalt fall off from above, creating the amphitheater below.

This explanation has not been corroborated by the Caltech team's observations, for two reasons. First, there is no evidence of undercutting, even though there are existing springs at the base of Woody's Cove and Stubby Canyon. Second, undercutting should leave large boulders in place at the foot of the canyon, at least until they are dissolved or carried away by groundwater. "These blocks are too big to move by spring flow, and there's not enough time for the groundwater to have dissolved them away," Lamb explains, "which means that large floods are needed to move them out. To make a canyon, you have to erode the canyon headwall, and you also have to evacuate the material that collapses in."

That leaves waterfall erosion during a large flood event as the only remaining candidate for the canyon formation that occurred in Malad Gorge, the Caltech team concludes.

No water flows over the top of Woody's Cove and Stubby Canyon today. But even a single incident of overland water flow occurring during an unusually large flood event could pluck away and topple boulders from the columnar basalt, taking advantage of the vertical fracturing already present in the volcanic rock. A flood of this magnitude could also carry boulders downstream, leaving behind the amphitheater canyons we see today without massive boulder piles at their bottoms and with no existing watercourses.

Additional evidence that water once flowed over the plateaus near Woody's Cove and Stubby Canyon is the presence of scour marks on surface rocks on the plateau above the canyons. These scour marks are evidence of the type of abrasion that occurs when a water discharge containing sediment moves overland.

Taken together, the evidence from Malad Gorge, Lamb says, suggests that "amphitheater shapes might be diagnostic of very large-scale floods, which would imply much larger water discharges and much shorter flow durations than predicted by the previous groundwater theory." Lamb points out that although groundwater sapping "is often assumed to explain the origin of amphitheater-headed canyons, there is no place on Earth where it has been demonstrated to work in columnar basalt."

Closing the case on the canyons at Malad Gorge required one further bit of information: the ages of the rock samples. This was accomplished at Caltech's Noble Gas Lab, run by Kenneth A. Farley, W. M. Keck Foundation Professor of Geochemistry and chair of the Division of Geological and Planetary Sciences.

The key to dating surface rocks on Earth is cosmic rays—very high-energy particles from space that regularly strike Earth. "Cosmic rays interact with the atmosphere and eventually with rocks at the surface, producing alternate versions of noble gas elements, or isotopes, called cosmogenic nuclides," Lamb explains. "If we know the cosmic-ray flux, and we measure the accumulation of nuclides in a certain mineral, then we can calculate the time that rock has been sitting at Earth's surface."
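In its simplest form—a stable nuclide, constant production, and no erosion or burial—the calculation Lamb describes is a single ratio: exposure time equals the measured nuclide concentration divided by its local production rate. A sketch with purely hypothetical numbers:

```python
def exposure_age_years(concentration: float, production_rate: float) -> float:
    """Surface-exposure age t = N / P for a stable cosmogenic nuclide,
    assuming constant production and no erosion or burial."""
    return concentration / production_rate

# Hypothetical values chosen only to illustrate the arithmetic:
# concentration in atoms per gram, production rate in atoms per gram per year.
print(exposure_age_years(4.6e6, 100.0))  # 46000.0
```

Real analyses must also correct for factors such as altitude, latitude, and shielding, which this simple ratio ignores; that is part of why the measurements are done in specialized facilities.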

At the Noble Gas Lab, Farley and Mackey determined that rock samples from the heads of Woody's Cove and Stubby Canyon had been exposed for the same length of time, approximately 46,000 years. If Lamb and his colleagues are correct, this is when the flood event occurred that plucked the boulders off the canyon walls, leaving the amphitheaters behind.

Further evidence supporting the team's theory can be found in Pointed Canyon. Rock samples collected along the walls of the first kilometer of the canyon show progressively more exposure in the downstream direction, suggesting that the canyon is still being carved by Wood River. Using the dates of exposure revealed in the rock samples, Lamb reconstructed the probable location of Pointed Canyon at the time of the formation of Woody's Cove and Stubby Canyon. At that location, where the rock has been exposed approximately 46,000 years, the surrounding canyon walls form the characteristic U-shape of an amphitheater-headed canyon and then abruptly narrow into the point that forms the remainder of Pointed Canyon. "The same megaflood event that created Woody's Cove and Stubby Canyon seems to have created Pointed Canyon," Lamb concludes. "The only difference is that the other canyons had no continuing river action, while Pointed Canyon was cut relatively slowly over the last 46,000 years by the Wood River, which is not powerful enough to topple and pluck basalt blocks from the surrounding plateau, resulting in a narrow channel rather than tall vertical headwalls."

Solving the puzzle of how amphitheater-headed canyons are created has implications reaching far beyond south-central Idaho because similar features—though some much larger—are also present on the surface of Mars. "A very popular interpretation for the amphitheater-headed canyons on Mars is that groundwater seeps out of cracks at the base of the canyon headwalls and that no water ever went over the top," Lamb says. Judging from the evidence in Idaho, however, it seems more likely that on Mars, as on Earth, amphitheater-headed canyons were created by enormous flood events, suggesting that Mars was once a very watery planet.

The paper presenting these results is entitled "Amphitheater-Headed Canyons Formed by Megaflooding at Malad Gorge, Idaho." The work was supported by grants from the National Science Foundation and NASA.

Writer: 
Cynthia Eller
News Type: 
Research News

First Rock Dating Experiment Performed on Mars

Although researchers have determined the ages of rocks from other planetary bodies, the actual experiments—like analyzing meteorites and moon rocks—have always been done on Earth. Now, for the first time, researchers have successfully determined the age of a Martian rock—with experiments performed on Mars. The work, led by geochemist Ken Farley of the California Institute of Technology (Caltech), could not only help in understanding the geologic history of Mars but also aid in the search for evidence of ancient life on the planet.

Many of the experiments carried out by the Mars Science Laboratory (MSL) mission's Curiosity rover were painstakingly planned by NASA scientists more than a decade ago. However, shortly before the rover left Earth in 2011, NASA's participating scientist program asked researchers from all over the world to submit new ideas for experiments that could be performed with the MSL's already-designed instruments. Farley, W. M. Keck Foundation Professor of Geochemistry and one of the 29 selected participating scientists, submitted a proposal that outlined a set of techniques similar to those already used for dating rocks on Earth, to determine the age of rocks on Mars. Findings from the first such experiment on the Red Planet—published by Farley and coworkers this week in a collection of Curiosity papers in the journal Science Express—provide the first age determinations performed on another planet.

The paper is one of six appearing in the journal that report results from the analysis of data and observations obtained during Curiosity's exploration at Yellowknife Bay—an expanse of bare bedrock in Gale Crater about 500 meters from the rover's landing site. The smooth floor of Yellowknife Bay is made up of a fine-grained sedimentary rock, or mudstone, that researchers think was deposited on the bed of an ancient Martian lake.

In March, Curiosity drilled holes into the mudstone and collected powdered rock samples from two locations about three meters apart. Once the rock samples were drilled, Curiosity's robotic arm delivered the rock powder to the Sample Analysis on Mars (SAM) instrument, where it was used for a variety of chemical analyses, including the geochronology—or rock dating—techniques.

One technique, potassium-argon dating, determines the age of a rock sample by measuring how much argon gas it contains. Over time, atoms of the radioactive form of potassium—an isotope called potassium-40—will decay within a rock to spontaneously form stable atoms of argon-40. This decay occurs at a known rate, so by determining the amount of argon-40 in a sample, researchers can calculate the sample's age.
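The arithmetic behind this can be written down directly. The decay constants below are the standard laboratory values for potassium-40; the argon/potassium ratio is a made-up number for illustration, not a value measured by Curiosity:

```python
import math

# Standard decay constants for potassium-40, per year.
LAMBDA_TOTAL = 5.543e-10   # total decay rate of 40K (to 40Ca and 40Ar)
LAMBDA_AR = 0.581e-10      # branch that produces 40Ar

def k_ar_age_years(ar40_over_k40: float) -> float:
    """K-Ar age from the measured radiogenic 40Ar / 40K ratio."""
    return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_AR) * ar40_over_k40) / LAMBDA_TOTAL

age = k_ar_age_years(0.9)  # illustrative ratio only
print(f"{age / 1e9:.2f} billion years")
```

The more argon-40 a rock has accumulated relative to its remaining potassium-40, the older it is; a rock with no radiogenic argon dates to zero.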

Although the potassium-argon method has been used to date rocks on Earth for many decades, these types of measurements require sophisticated lab equipment that could not easily be transported and used on another planet. Farley had the idea of performing the experiment on Mars using the SAM instrument. There, the sample was heated to temperatures high enough that the gases within the rock were released and could be analyzed by an onboard mass spectrometer.

Farley and his colleagues determined the mudstone to be between 3.86 and 4.56 billion years old. "In one sense, this is an utterly unsurprising result—it's the number that everybody expected," Farley says.

Indeed, prior to Curiosity's geochronology experiment, researchers using the "crater counting" method had estimated the age of Gale Crater and its surroundings to be between 3.6 and 4.1 billion years old. Crater counting relies on the simple fact that planetary surfaces are repeatedly bombarded with objects that scar their surface with impact craters; a surface with many impact craters is presumed to be older than one with fewer craters. Although this method is simple, it has large uncertainties.

"What was surprising was that our result—from a technique that was implemented on Mars with little planning on Earth—got a number that is exactly what crater counting predicted," Farley says. "MSL instruments weren't designed for this purpose, and we weren't sure if the experiment was going to work, but the fact that our number is consistent with previous estimates suggests that the technique works, and it works quite well."

The researchers do, however, acknowledge that there is some uncertainty in their measurement. One reason is that mudstone is a sedimentary rock—formed in layers over a span of millions of years from material that eroded off of the crater walls—and thus the age of the sample drilled by Curiosity really represents the combined age of those bits and pieces. So while the mudstone indicates the existence of an ancient lake—and a habitable environment some time in the planet's distant past—neither crater counting nor potassium-argon dating can directly determine exactly when this was.

To provide an answer for how the geology of Yellowknife Bay has changed over time, Farley and his colleagues also designed an experiment using a method called surface exposure dating. "The surface of Mars, the surface of Earth, and basically all surfaces in the solar system are being bombarded by cosmic rays," explains Farley, and when these rays—very high-energy protons—blast into an atom, the atom's nucleus shatters, creating isotopes of other elements. Cosmic rays can only penetrate about two to three meters below the surface, so the abundance of cosmic-ray-debris isotopes in rock indicates how long that rock has been on the surface.

Using the SAM mass spectrometer to measure the abundance of three isotopes that result from cosmic-ray bombardment—helium-3, neon-21, and argon-36—Farley and his colleagues calculated that the mudstone at Yellowknife Bay has been exposed at the surface for about 80 million years. "All three of the isotopes give exactly the same answer; they all have their independent sources of uncertainty and complications, but they all give exactly the same answer. That is probably the most remarkable thing I've ever seen as a scientist, given the difficulty of the analyses," Farley says.
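The cross-check Farley describes amounts to computing an independent exposure age from each isotope and confirming that the ages agree. The concentrations and production rates below are hypothetical placeholders (real values differ by isotope, mineral, and site), chosen so that each pair yields the same age:

```python
# Hypothetical (concentration in atoms/g, production rate in atoms/g/yr)
# pairs for the three cosmogenic isotopes; values are placeholders only.
measurements = {
    "helium-3": (8.0e9, 100.0),
    "neon-21": (1.6e9, 20.0),
    "argon-36": (4.0e8, 5.0),
}

ages = {iso: n / p for iso, (n, p) in measurements.items()}
mean_age = sum(ages.values()) / len(ages)

# Independent isotopes agreeing within a few percent is the consistency
# check; here each placeholder pair gives exactly 80 million years.
assert all(abs(a - mean_age) / mean_age < 0.05 for a in ages.values())
print({iso: f"{a / 1e6:.0f} Myr" for iso, a in ages.items()})
```

Each isotope has independent sources of error, so three matching answers argue strongly that the measurement is sound, which is why Farley found the agreement so remarkable.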

This also helps researchers looking for evidence of past life on Mars. Cosmic rays are known to degrade the organic molecules that may be telltale fossils of ancient life. However, because the rock at Yellowknife Bay has only been exposed to cosmic rays for 80 million years—a relatively small sliver of geologic time—"the potential for organic preservation at the site where we drilled is better than many people had guessed," Farley says.

Furthermore, the "young" surface exposure offers insight into the erosion history of the site. "When we first came up with this number, the geologists said, 'Yes, now we get it, now we understand why this rock surface is so clean and there is no sand or rubble,'" Farley says. 

The exposure of rock in Yellowknife Bay has been caused by wind erosion. Over time, as wind blows sand against the small cliffs, or scarps, that bound the Yellowknife outcrop, the scarps erode back, revealing new rock that previously was not exposed to cosmic rays.

"Imagine that you are in this site a hundred million years ago; the area that we drilled in was covered by at least a few meters of rock. At 80 million years ago, wind would have caused this scarp to migrate across the surface and the rock below the scarp would have gone from being buried—and safe from cosmic rays—to exposed," Farley explains. Geologists have developed a relatively well-understood model, called the scarp retreat model, to explain how this type of environment evolves. "That gives us some idea about why the environment looks like it does and it also gives us an idea of where to look for rocks that are even less exposed to cosmic rays," and thus are more likely to have preserved organic molecules, Farley says.

Curiosity is now long gone from Yellowknife Bay, off to new drilling sites on the route to Mount Sharp where more dating can be done. "Had we known about this before we left Yellowknife Bay, we might have done an experiment to test the prediction that cosmic-ray irradiation should be reduced as you go in the downwind direction, closer to the scarp, indicating a newer, more recently exposed rock, and increased irradiation when you go in the upwind direction, indicating a rock exposed to the surface longer ago," Farley says. "We'll likely drill in January, and the team is definitely focused on finding another scarp to test this on."

This information could also be important for Curiosity chief scientist John Grotzinger, Caltech's Fletcher Jones Professor of Geology. In another paper in the same issue of Science Express, Grotzinger—who studies the history of Mars as a habitable environment—and colleagues examined the physical characteristics of the rock layers in and near Yellowknife Bay. They concluded that the environment was habitable less than 4 billion years ago, which is a relatively late point in the planet's history.

"This habitable environment existed later than many people thought possible," Grotzinger says. His findings suggest that the surface water on Mars at that time would have been sufficient to form clays. Previously, such clays—evidence of a habitable environment—were thought to have washed in from older deposits. Knowing that the clays could be produced later in locations with surface water can help researchers pin down the best areas in which to look for once-habitable environments, he says.

Farley's work is published in a paper titled "In-situ radiometric and exposure age dating of the Martian surface." Other Caltech coauthors on the study include Grotzinger, graduate student Hayden B. Miller, and Edward Stolper.
