Slimy Fish and the Origins of Brain Development

Lamprey—slimy, eel-like parasitic fish with tooth-riddled, jawless sucking mouths—are rather disgusting to look at, but thanks to their key position on the vertebrate family tree, they can offer important insights into the evolutionary history of our own brain development, a recent study suggests.

The work appears in the September 14 advance online issue of the journal Nature.

"Lamprey are one of the most primitive vertebrates alive on Earth today, and by closely studying their genes and developmental characteristics, researchers can learn more about the evolutionary origins of modern vertebrates—like jawed fishes, frogs, and even humans," says paper coauthor Marianne Bronner, the Albert Billings Ruddock Professor of Biology and director of Caltech's unique Zebrafish/Xenopus/Lamprey facility, where the study was done.

The facility is one of the few places in the world where lampreys can be studied in captivity. Although the parasitic lamprey are plentiful enough to be an invasive pest in the Great Lakes, they are difficult to study under controlled conditions: their life cycle takes up to 10 years, and they spawn for only a few short weeks in the summer before they die.

Each summer, Bronner and her colleagues receive shipments of wild lamprey from Michigan just before the prime of breeding season. When the lamprey arrive, they are placed in tanks where the temperature of the water is adjusted to extend the breeding season from around three weeks to up to two months. In those extra weeks, the lamprey produce tens of thousands of additional eggs and sperm, which, via in vitro fertilization, generate tens of thousands of additional embryos for study. During this time, scientists from all over the world come to Caltech to perform experiments with the developing lamprey embryos.

In the current study, Bronner and her collaborators—who traveled to Caltech from the Stowers Institute for Medical Research in Kansas City, Missouri—studied the origins of the vertebrate hindbrain.

The hindbrain is a part of the central nervous system common to chordates—or organisms that have a nerve cord like our spinal cord. During the development of vertebrates—a subtype of chordates that have backbones—the hindbrain is compartmentalized into eight segments, each of which becomes uniquely patterned to establish networks of neuronal circuits. These segments eventually give rise to adult brain regions like the cerebellum, which is important for motor control, and the medulla oblongata, which is necessary for breathing and other involuntary functions.

However, this segmentation is not present in so-called "invertebrate chordates"—a grouping of chordates that lack a backbone, such as sea squirts and lancelets.

"The interesting thing about lampreys is that they occupy an intermediate evolutionary position between the invertebrate chordates and the jawed vertebrates," says Hugo Parker, a postdoc at Stower's Institute and first author on the study. "By investigating aspects of lamprey embryology, we can get a picture of how vertebrate traits might have evolved."

In vertebrates, segmental patterning genes called Hox genes help to determine the animal's head-to-tail body plan—and those same Hox genes also control the segmentation of the hindbrain. Although invertebrate chordates also have Hox genes, these animals do not have segmented hindbrains. Because lampreys sit between these two groups on the evolutionary tree, the researchers wanted to know whether Hox genes are involved in patterning the lamprey hindbrain.

To their surprise, the researchers discovered that the lamprey hindbrain was not only segmented during development but that the process also involved Hox genes—just as in the lamprey's jawed vertebrate cousins.

"When we started, we thought that the situation was different, and the Hox genes were not really integrated into the process of segmentation as they are in jawed vertebrates," Parker says. "But in actually doing this project, we discovered the way that lamprey Hox genes are expressed and regulated is very similar to what we see in jawed vertebrates." This means that hindbrain segmentation—and the role of Hox genes in this segmentation—happened earlier on in evolution than was once thought, he says.

Parker, who has been spending his summers at Caltech studying lampreys since 2008, next hopes to pinpoint other aspects of the lamprey hindbrain that may be conserved in modern vertebrates—information that will contribute to a fundamental understanding of vertebrate development. And although those investigations will probably mean following the lamprey for a few more summers at Caltech, Parker says his time in the lamprey facility continually offers a one-of-a-kind experience.

"The lamprey system here is unique in the world—and it's not just the water tanks and how we've learned to maintain the animals. It's the small nucleus of people who have particular skills, people who come in from all over the world to work together, share protocols, and develop the field together," he says. "That's one of the things I've liked ever since I first came here. I really felt like I was a part of something very special.

These results were published in a paper titled "A Hox regulatory network of hindbrain segmentation is conserved to the base of vertebrates." Robb Krumlauf, scientific director of the Stowers Institute and a professor at the University of Kansas Medical Center, was also a coauthor on the study. The Zebrafish/Xenopus/Lamprey facility at Caltech is a Beckman Institute facility.


Ceramics Don't Have To Be Brittle

Caltech Materials Scientists Are Creating Materials By Design

Imagine a balloon that could float without using any lighter-than-air gas. Instead, it could simply have all of its air sucked out while maintaining its filled shape. Such a vacuum balloon, which could help ease the world's current shortage of helium, could be made only if a new material existed that was strong enough to sustain the pressure generated by forcing out all that air while still being lightweight and flexible.

Caltech materials scientist Julia Greer and her colleagues are on the path to developing such a material and many others that possess unheard-of combinations of properties. For example, they might create a material that is thermally insulating but also extremely lightweight, or one that is simultaneously strong, lightweight, and nonbreakable—properties that are generally thought to be mutually exclusive.

Greer's team has developed a method for constructing new structural materials by taking advantage of the unusual properties that solids can have at the nanometer scale, where features are measured in billionths of meters. In a paper published in the September 12 issue of the journal Science, the Caltech researchers explain how they used the method to produce a ceramic (the class of materials that includes chalk and brick) that contains about 99.9 percent air yet is incredibly strong, and that can recover its original shape after being compressed by more than 50 percent.
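For a rough sense of what "about 99.9 percent air" implies for weight, here is a minimal back-of-the-envelope sketch; the bulk density of alumina is a standard handbook value, and the solid fraction is read off the figure quoted above, so the result is illustrative rather than a number reported in the Science paper.

```python
# Illustrative estimate only: effective density of a lattice that is
# ~99.9 percent air, using the bulk density of alumina (aluminum oxide)
# and neglecting the mass of the air itself.
BULK_DENSITY_ALUMINA_G_PER_CM3 = 3.95  # standard handbook value for solid alumina
SOLID_FRACTION = 0.001                 # "about 99.9 percent air" -> ~0.1 percent solid

effective_density = BULK_DENSITY_ALUMINA_G_PER_CM3 * SOLID_FRACTION
print(f"Effective density: {effective_density * 1000:.1f} mg/cm^3")
# -> roughly 4 mg/cm^3, a few hundred times less dense than water,
#    which is why such nanolattices are described as ultralight.
```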

"Ceramics have always been thought to be heavy and brittle," says Greer, a professor of materials science and mechanics in the Division of Engineering and Applied Science at Caltech. "We're showing that in fact, they don't have to be either. This very clearly demonstrates that if you use the concept of the nanoscale to create structures and then use those nanostructures like LEGO to construct larger materials, you can obtain nearly any set of properties you want. You can create materials by design."

The researchers use a direct laser writing method called two-photon lithography to "write" a three-dimensional pattern in a polymer by allowing a laser beam to crosslink and harden the polymer wherever it is focused. The parts of the polymer that were exposed to the laser remain intact while the rest is dissolved away, revealing a three-dimensional scaffold. That structure can then be coated with a thin layer of just about any kind of material—a metal, an alloy, a glass, a semiconductor, etc. Then the researchers use another method to etch out the polymer from within the structure, leaving a hollow architecture.

The applications of this technique are practically limitless, Greer says. Since pretty much any material can be deposited on the scaffolds, the method could be particularly useful for applications in optics, energy efficiency, and biomedicine. For example, it could be used to reproduce complex structures such as bone, producing a scaffold out of biocompatible materials on which cells could proliferate.

In the latest work, Greer and her students used the technique to produce what they call three-dimensional nanolattices that are formed by a repeating nanoscale pattern. After the patterning step, they coated the polymer scaffold with a ceramic called alumina (i.e., aluminum oxide), producing hollow-tube alumina structures with walls ranging in thickness from 5 to 60 nanometers and tubes from 450 to 1,380 nanometers in diameter.

Greer's team next wanted to test the mechanical properties of the various nanolattices they created. Using two different devices for poking and prodding materials on the nanoscale, they squished, stretched, and otherwise tried to deform the samples to see how they held up.

They found that the alumina structures with a wall thickness of 50 nanometers and a tube diameter of about 1 micron shattered when compressed. That was not surprising given that ceramics, especially those that are porous, are brittle. However, compressing lattices with a lower ratio of wall thickness to tube diameter—where the wall thickness was only 10 nanometers—produced a very different result.

"You deform it, and all of a sudden, it springs back," Greer says. "In some cases, we were able to deform these samples by as much as 85 percent, and they could still recover."

To understand why, consider that most brittle materials such as ceramics, silicon, and glass shatter because they are filled with flaws—imperfections such as small voids and inclusions. The more perfect the material, the less likely you are to find a weak spot where it will fail. Therefore, the researchers hypothesize, when you reduce these structures down to the point where individual walls are only 10 nanometers thick, both the number of flaws and the size of any flaws are kept to a minimum, making the whole structure much less likely to fail.

"One of the benefits of using nanolattices is that you significantly improve the quality of the material because you're using such small dimensions," Greer says. "It's basically as close to an ideal material as you can get, and you get the added benefit of needing only a very small amount of material in making them."

The Greer lab is now aggressively pursuing various ways of scaling up the production of these so-called metamaterials.

The lead author on the paper, "Strong, Lightweight and Recoverable Three-Dimensional Ceramic Nanolattices," is Lucas R. Meza, a graduate student in Greer's lab. Satyajit Das, who was a visiting student researcher at Caltech, is also a coauthor. The work was supported by funding from the Defense Advanced Research Projects Agency and the Institute for Collaborative Biotechnologies. Greer is also on the board of directors of the Kavli Nanoscience Institute at Caltech.

Kimm Fesenmaier

Tipping the Balance of Behavior

Humans with autism often show a reduced frequency of social interactions and an increased tendency to engage in repetitive solitary behaviors. Autism has also been linked to dysfunction of the amygdala, a brain structure involved in processing emotions. Now Caltech researchers have discovered antagonistic neuron populations in the mouse amygdala that control whether the animal engages in social behaviors or asocial repetitive self-grooming. This discovery may have implications for understanding neural circuit dysfunctions that underlie autism in humans.

The discovery of this "seesaw" circuit was led by postdoctoral scholar Weizhe Hong in the laboratory of David J. Anderson, the Seymour Benzer Professor of Biology at Caltech and an investigator with the Howard Hughes Medical Institute. The work was published online on September 11 in the journal Cell.

"We know that there is some hierarchy of behaviors, and they interact with each other because the animal can't exhibit both social and asocial behaviors at the same time. In this study, we wanted to figure out how the brain does that," Anderson says.

Anderson and his colleagues discovered two intermingled but distinct populations of neurons in the amygdala, a part of the brain that is involved in innate social behaviors. One population promotes social behaviors, such as mating, fighting, or social grooming, while the other population controls repetitive self-grooming—an asocial behavior.

Interestingly, these two populations are distinguished according to the most fundamental subdivision of neuron subtypes in the brain: the "social neurons" are inhibitory neurons (which release the neurotransmitter GABA, or gamma-aminobutyric acid), while the "self-grooming neurons" are excitatory neurons (which release the neurotransmitter glutamate, an amino acid).

To study the relationship between these two cell types and their associated behaviors, the researchers used a technique called optogenetics. In optogenetics, neurons are genetically altered so that they express light-sensitive proteins from microbial organisms. Then, by shining a light on these modified neurons via a tiny fiber optic cable inserted into the brain, researchers can control the activity of the cells as well as their associated behaviors.

Using this optogenetic approach, Anderson's team was able to selectively switch on the neurons associated with social behaviors and those linked with asocial behaviors.

With the social neurons, the behavior that was elicited depended upon the intensity of the light signal. That is, when high-intensity light was used, the mice became aggressive in the presence of an intruder mouse. When lower-intensity light was used, the mice no longer attacked, although they were still socially engaged with the intruder—either initiating mating behavior or attempting to engage in social grooming.

When the neurons associated with asocial behavior were turned on, the mouse began self-grooming behaviors such as paw licking and face grooming while completely ignoring all intruders. The self-grooming behavior was repetitive and lasted for minutes even after the light was turned off.

The researchers could also use the light-activated neurons to stop the mice from engaging in particular behaviors. For example, if a lone mouse began spontaneously self-grooming, the researchers could halt this behavior through the optogenetic activation of the social neurons. Once the light was turned off and the activation stopped, the mouse would return to its self-grooming behavior.

Surprisingly, these two groups of neurons appear to interfere with each other's function: the activation of social neurons inhibits self-grooming behavior, while the activation of self-grooming neurons inhibits social behavior. Thus these two groups of neurons seem to function like a seesaw, one that controls whether mice interact with others or instead focus on themselves. It was completely unexpected that the two groups of neurons could be distinguished by whether they were excitatory or inhibitory. "If there was ever an experiment that 'carves nature at its joints,'" says Anderson, "this is it."

This seesaw circuit, Anderson and his colleagues say, may have some relevance to human behavioral disorders such as autism.

"In autism," Anderson says, "there is a decrease in social interactions, and there is often an increase in repetitive, sometimes asocial or self-oriented, behaviors"—a phenomenon known as perseveration. "Here, by stimulating a particular set of neurons, we are both inhibiting social interactions and promoting these perseverative, persistent behaviors."

Studies from other laboratories have shown that mice carrying disruptions in genes implicated in autism show a similar decrease in social interaction and increase in repetitive self-grooming behavior, Anderson says. However, the current study helps to provide a needed link between gene activity, brain activity, and social behaviors, "and if you don't understand the circuitry, you are never going to understand how the gene mutation affects the behavior." Going forward, he says, such a complete understanding will be necessary for the development of future therapies.

But could this concept ever actually be used to modify a human behavior?

"All of this is very far away, but if you found the right population of neurons, it might be possible to override the genetic component of a behavioral disorder like autism, by just changing the activity of the circuits—tipping the balance of the see-saw in the other direction," he says.

The work was funded by the Simons Foundation, the National Institutes of Health, and the Howard Hughes Medical Institute. Caltech coauthors on the paper include Hong, who was the lead author, and graduate student Dong-Wook Kim.


Textbook Theory Behind Volcanoes May Be Wrong

In the typical textbook picture, volcanoes, such as those that are forming the Hawaiian islands, erupt when magma gushes out as narrow jets from deep inside Earth. But that picture is wrong, according to a new study from researchers at Caltech and the University of Miami in Florida.

New seismology data are now confirming that such narrow jets don't actually exist, says Don Anderson, the Eleanor and John R. McMillian Professor of Geophysics, Emeritus, at Caltech. In fact, he adds, basic physics doesn't support the presence of these jets, called mantle plumes, and the new results corroborate those fundamental ideas.

"Mantle plumes have never had a sound physical or logical basis," Anderson says. "They are akin to Rudyard Kipling's 'Just So Stories' about how giraffes got their long necks."

Anderson and James Natland, a professor emeritus of marine geology and geophysics at the University of Miami, describe their analysis online in the September 8 issue of the Proceedings of the National Academy of Sciences.

According to current mantle-plume theory, Anderson explains, heat from Earth's core somehow generates narrow jets of hot magma that gush through the mantle and to the surface. The jets act as pipes that transfer heat from the core, and how exactly they're created isn't clear, he says. But they have been assumed to exist, originating near where the Earth's core meets the mantle, almost 3,000 kilometers underground—nearly halfway to the planet's center. The jets are theorized to be no more than about 300 kilometers wide, and when they reach the surface, they produce hot spots.  

While the top of the mantle is a sort of fluid sludge, the planet's uppermost layer is rigid rock, broken up into plates that float on the magma-bearing layers beneath. Magma from the mantle beneath the plates bursts through a plate to create volcanoes. As the plates drift across the hot spots, chains of volcanoes form—such as the island chains of Hawaii and Samoa.

"Much of solid-Earth science for the past 20 years—and large amounts of money—have been spent looking for elusive narrow mantle plumes that wind their way upward through the mantle," Anderson says.

To look for the hypothetical plumes, researchers analyze global seismic activity. Everything from big quakes to tiny tremors sends seismic waves echoing through Earth's interior. The type of material that the waves pass through influences the properties of those waves, such as their speeds. By measuring those waves using hundreds of seismic stations installed on the surface, near places such as Hawaii, Iceland, and Yellowstone National Park, researchers can deduce whether there are narrow mantle plumes or whether volcanoes are simply created from magma that's absorbed in the sponge-like shallower mantle.

No one has been able to detect the predicted narrow plumes, although the evidence has not been conclusive; the jets could simply have been too thin to be seen, Anderson says. Very broad features beneath the surface have been interpreted as plumes or superplumes, but they are far too wide to be considered narrow jets.

But now, thanks in part to more seismic stations spaced closer together and improved theory, analysis of the planet's seismology is good enough to confirm that there are no narrow mantle plumes, Anderson and Natland say. Instead, data reveal that there are large, slow, upward-moving chunks of mantle a thousand kilometers wide.

In the mantle-plume theory, Anderson explains, the heat that is transferred upward via jets is balanced by the slower downward motion of cooled, broad, uniform chunks of mantle. The behavior is similar to that of a lava lamp, in which blobs of wax are heated from below and then rise before cooling and falling. But a fundamental problem with this picture is that lava lamps require electricity, he says, and that is an outside energy source that an isolated planet like Earth does not have.  

The new measurements suggest that what is really happening is just the opposite: Instead of narrow jets, there are broad upwellings, which are balanced by narrow channels of sinking material called slabs. What is driving this motion is not heat from the core, but cooling at Earth's surface. In fact, Anderson says, the behavior is the regular mantle convection first proposed more than a century ago by Lord Kelvin. When material in the planet's crust cools, it sinks, displacing material deeper in the mantle and forcing it upward.

"What's new is incredibly simple: upwellings in the mantle are thousands of kilometers across," Anderson says. The formation of volcanoes then follows from plate tectonics—the theory of how Earth's plates move and behave. Magma, which is less dense than the surrounding mantle, rises until it reaches the bottom of the plates or fissures that run through them. Stresses in the plates, cracks, and other tectonic forces can squeeze the magma out, like how water is squeezed out of a sponge. That magma then erupts out of the surface as volcanoes. The magma comes from within the upper 200 kilometers of the mantle and not thousands of kilometers deep, as the mantle-plume theory suggests.

"This is a simple demonstration that volcanoes are the result of normal broad-scale convection and plate tectonics," Anderson says. He calls this theory "top-down tectonics," based on Kelvin's initial principles of mantle convection. In this picture, the engine behind Earth's interior processes is not heat from the core but cooling at the planet's surface. This cooling and plate tectonics drives mantle convection, the cooling of the core, and Earth's magnetic field. Volcanoes and cracks in the plate are simply side effects.

The results also have an important consequence for rock compositions—notably the ratios of certain isotopes, Natland says. According to the mantle-plume idea, the measured compositions derive from the mixing of material from reservoirs separated by thousands of kilometers in the upper and lower mantle. But if there are no mantle plumes, then all of that mixing must have happened within the upwellings and nearby mantle in Earth's top 1,000 kilometers.

The paper is titled "Mantle updrafts and mechanisms of oceanic volcanism."


Seeing Protein Synthesis in the Field

Caltech researchers have developed a novel way to visualize proteins generated by microorganisms in their natural environment—including the murky waters of Caltech's lily pond, as in this image created by Professor of Geobiology Victoria Orphan and her colleagues. The method could give scientists insights into how uncultured microbes (organisms that cannot easily be grown in the lab) react and adapt to environmental stimuli over space and time.

The visualization technique, dubbed BONCAT (for "bioorthogonal non-canonical amino-acid tagging"), was developed by David Tirrell, Caltech's Ross McCollum–William H. Corcoran Professor and professor of chemistry and chemical engineering. BONCAT uses "non-canonical" amino acids—synthetic molecules that do not normally occur in proteins found in nature and that carry particular chemical tags that can attach (or "click") onto a fluorescent dye. When these artificial amino acids are incubated with environmental samples, like lily-pond water, they are taken up by microorganisms and incorporated into newly formed proteins. Adding the fluorescent dye to the mix allows these proteins to be visualized within the cell.

For example, in the image, the entire microbial community in the pond water is stained blue with a DNA dye; freshwater gammaproteobacteria are labeled with a fluorescently tagged short-chain ribosomal RNA probe, in red; and newly created proteins are dyed green by BONCAT. The cells colored green and orange in the composite image, then, show those bacteria—gammaproteobacteria and other rod-shaped cells—that are actively making proteins.

"You could apply BONCAT to almost any type of sample," Orphan says. "When you have an environmental sample, you don't know which microorganisms are active. So, assume you're interested in looking at organisms that respond to methane. You could take a sample, provide methane, add the synthetic amino acid, and ask which cells over time showed activity—made new proteins—in the presence of methane relative to samples without methane. Then you can start to sort those organisms out, and possibly use this to determine protein turnover times. These questions are not typically tractable with uncultured organisms in the environment." Orphan's lab is also now using BONCAT on samples of deep-sea sediment in which mixed groups of bacteria and archaea catalyze the anaerobic oxidation of methane.

Why sample the Caltech lily pond? Roland Hatzenpichler, a postdoctoral scholar in Orphan's lab, explains: "When I started applying BONCAT on environmental samples, I wanted to try this new approach on samples that are both interesting from a microbiological standpoint, as well as easily accessible. Samples from the lily pond fit those criteria." Hatzenpichler is lead author of a study describing BONCAT that appeared as the cover story of the August issue of the journal Environmental Microbiology.

The work is supported by the Gordon and Betty Moore Foundation Marine Microbiology Initiative.


Programmed to Fold: RNA Origami

Researchers from Aarhus University in Denmark and Caltech have developed a new method for organizing molecules on the nanoscale. Inspired by techniques used for folding DNA origami—invented by Paul Rothemund, a senior research associate in computation and neural systems in the Division of Engineering and Applied Science at Caltech—the team, which includes Rothemund, has fabricated complicated shapes from DNA's close chemical cousin, RNA.

Unlike DNA origami, whose components are chemically synthesized and then folded in an artificial heating and cooling process, RNA origami are synthesized enzymatically and fold up as they are being synthesized, which takes place under more natural conditions compatible with living cells. These features of RNA origami may allow designer RNA structures to be grown within living cells, where they might be used to organize cellular enzymes into biochemical factories.

"The parts for a DNA origami cannot easily be written into the genome of an organism. An RNA origami, on the other hand, can be represented as a DNA gene, which in cells is transcribed into RNA by a protein machine called RNA polymerase," explains Rothemund.

So far, the researchers have demonstrated their method by designing RNA molecules that fold into rectangles and then further assemble themselves into larger honeycomb patterns. This approach was taken to make the shapes recognizable using an atomic force microscope, but many other shapes should be realizable.

A paper describing the research appears in the August 15 issue of the journal Science.

"What is unique about the method is that the folding recipe is encoded into the molecule itself, through its sequence." explains first author Cody Geary, a postdoctoral scholar at Aarhus University.

In other words, the sequence of the RNAs defines both the final shape and the order in which different parts of the shape fold. The particular RNA sequences that were folded in the experiment were designed using software called NUPACK, created in the laboratory of Caltech professor Niles Pierce. Both the Rothemund and Pierce labs are funded by a National Science Foundation Molecular Programming Project (MPP) Expeditions in Computing grant.

"Our latest research is an excellent example of how tools developed by one part of the MPP are being used by another," says Rothemund.

"RNA has a richer structural and functional repertoire than DNA, and so I am especially interested in how complex biological motifs with special 3-D geometries or protein-binding regions can be added to the basic architecture of RNA origami," says Geary, who completed his BS in chemistry at Caltech in 2003.

The project began with an extended visit by Geary and corresponding author Ebbe Andersen, also from Aarhus University, to Rothemund's Caltech lab.

"RNA origami is still in its infancy," says Rothemund. "Nevertheless, I believe that RNA origami, because of their potential to be manufactured by cells, and because of the extra functionality possible with RNA, will have at least as big an impact as DNA origami."

Rothemund (BS '94) reported the original method for DNA origami in 2006 in the journal Nature. Since then, the work has been cited over 2,000 times and DNA origami have been made in over 50 labs worldwide for potential applications such as drug delivery vehicles and molecular computing.

"The payoff is that unlike DNA origami, which are expensive and have to be made outside of cells, RNA origami should be able to be grown cheaply in large quantities, simply by growing bacteria with genes for them," he adds. "Genes and bacteria cost essentially nothing to share, and so RNA origami will be easily exchanged between scientists."


Katie Neith

Study of Aerosols Stands to Improve Climate Models

Aerosols, tiny particles in the atmosphere, play a significant role in Earth's climate, scattering and absorbing incoming sunlight and affecting the formation and properties of clouds. Currently, the effect that these aerosols have on clouds represents the largest uncertainty among all influences on climate change.

But now researchers from Caltech and the Jet Propulsion Laboratory have carried out a global observational study of the effect that changes in aerosol levels have on low-level marine clouds—the clouds that have the largest impact on the amount of incoming sunlight that Earth reflects back into space. The findings appear in the advance online version of the journal Nature Geoscience.

Changes in aerosol levels have two main effects: they alter the amount of clouds in the atmosphere, and they change the internal properties of those clouds. Using measurements taken by several of NASA's Earth-monitoring satellites from August 2006 through April 2011, the researchers quantified these two effects for the first time, drawing on 7.3 million individual data points.

"If you combine these two effects, you get an aerosol influence almost twice that estimated in the latest report from the Intergovernmental Panel on Climate Change," says John Seinfeld, the Louis E. Nohl Professor and professor of chemical engineering at Caltech. "These results offer unique guidance on how warm cloud processes should be incorporated in climate models with changing aerosol levels."

The lead author of the paper, "Satellite-based estimate of global aerosol-cloud radiative forcing by marine warm clouds," is Yi-Chun Chen (Ph.D. '13), a NASA postdoctoral fellow at JPL. Additional coauthors are Matthew W. Christensen of JPL and Colorado State University and Graeme L. Stephens, director of the Center for Climate Sciences at JPL. The work was supported by funding from NASA and the Office of Naval Research.

Kimm Fesenmaier

Looking Forward to 2020 . . . on Mars

A Q&A With Project Scientist Ken Farley

While the Curiosity rover continues to interrogate Gale Crater on Mars, planning is well under way for its successor—another rover that is currently referred to as Mars 2020. The new robotic explorer, scheduled to launch in 2020, will use much of the same technology (even some of the spare parts Curiosity left behind on Earth) to get to the Red Planet. Once there, it will pursue a new set of scientific objectives including the careful collection and storage (referred to as "caching") of compelling samples that might one day be returned to Earth by a future mission. Today, NASA announced the selection of seven scientific instruments that Mars 2020 will carry with it to Mars.

Ken Farley, Caltech's W.M. Keck Foundation Professor of Geochemistry and chair of the Division of Geological and Planetary Sciences, is serving as project scientist for Mars 2020. We recently sat down with him to talk about the mission and his new role.


Congratulations on being selected project scientist for this exciting mission. For those of us who do not know exactly what a project scientist does, can you give us a little overview of the job?

Sure. Conveniently, NASA has a definition, which says that the project scientist is responsible for the overall scientific success of the mission. That's a pretty concise explanation, but it encompasses a lot. My main duty thus far has been helping to define the science needs for equipment that we are going to send to Mars. So while we haven't actually done any science yet, we have had to make a lot of design decisions that are related to the science.

The easiest place to illustrate this is in the discussion of what is necessary, from the science point of view, in terms of the samples that we will cache. We have to consider things like how much mass we need to bring back, what kind of magnetic fields and temperatures the samples are going to be exposed to, and how much contamination of different chemical constituents we can allow. Every one of those questions drives a design decision in how you build the drilling system and the caching system. And if you get those wrong, there's nothing you can do. So there's a lot of thought that has to be put into that, and I convey a lot of that information to the engineers.

Now that we have a science team, I will be helping to facilitate all of its investigations and helping the members to work as a team. MSL [the Mars Science Laboratory, Curiosity's mission] is demonstrating how you have to operate when you have a complex tool (a rover) and a bunch of sensors, and every day you have to figure out what you're going to do to further science. The team has to pull together, pool all of its information, and come up with a plan, so an important part of my job will be figuring out how to manage the team dynamics to keep everybody moving forward and not fragmenting.


What aspects of the job were particularly appealing to you?

One of the parts of being a division chair that I have really enjoyed is being engaged with something that's bigger than my own research. And there's definitely a lot of that on 2020. It's a huge undertaking. There are not many science projects of this scale to be associated closely with, so this just seemed like a really good opportunity.

The kinds of questions that 2020 is going after—they're really big questions. You could never answer them on your own. The key objective is about life—is there or was there ever life on Mars, and, more broadly, what does its presence or absence mean about the frequency and evolution of life within the universe? There's no way you could answer these questions on Earth. The simple reason for that is that Mars is covered by rocks that are of the era in which, at least on our planet, we believe life was evolving. There are almost no rocks left of that age on Earth, and the ones that are left have been really badly beaten up. So Mars is a place where you really stand a chance of answering these questions in a way that you probably can't anywhere else.

It's not the kind of science I'm usually associated with, but the mission is trying to address truly profound scientific questions.


As you said, space has not been the focus of your research for most of your career. Can you talk a bit about how a terrestrial geochemist like yourself wound up in this role on a Mars mission?

Several years ago, I participated in a workshop about quantifying martian stratigraphy, which was hosted by the Keck Institute for Space Studies [KISS]. One of the topics that was discussed was geochronology—the dating of rocks and other materials—on other planetary bodies, like Mars. This is important for establishing the history of a planet and is particularly challenging because it requires such exacting measurements. After interacting with some people who are now my JPL collaborators at the workshop, it seemed like we might be able to do something special that would help solve this problem. And we got support from KISS to do a follow-on study.

As I was getting deeper and deeper into thinking about how we could do this on Mars, John Grotzinger (the Fletcher Jones Professor of Geology at Caltech and project scientist for MSL) was conducting the landing-site workshops for MSL. He would say things like, "Oh, it would be really great if we could date this." And we'd agree. Then there was a call for participating scientists on MSL. I had no background whatsoever in this, but I knew there was a mass spectrometer on Curiosity. That's one of the analytical instruments we need to make these dating measurements because it allows us to determine the relative abundances of various isotopes in a sample. Since those isotopes are produced at known rates, their abundances tell us something about the age of the sample. So I wrote a proposal basically saying let's see if we can make Curiosity's mass spectrometer work for this purpose. And it did.
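The reasoning Farley sketches here is the standard radiometric decay relation. As a generic illustration (the article does not say which isotope system Curiosity's measurement used), if a parent isotope $P$ decays into a daughter isotope $D$ with a known decay constant $\lambda$, then

$$D = P\left(e^{\lambda t} - 1\right) \quad\Longrightarrow\quad t = \frac{1}{\lambda}\ln\!\left(1 + \frac{D}{P}\right),$$

so measuring the relative abundances of $D$ and $P$ with a mass spectrometer fixes the age $t$ of the sample.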


What do you think led to your selection as project scientist?

Although I don't have a long track record in studying Mars, this mission is possibly the first step in bringing samples back to Earth. In order to do that, you have to answer a lot of questions related to geochemistry, which is my specialty. The geochemistry community is not ordinarily thinking about rocks coming back from Mars. I happen to have enough crossover between what I know about Mars from the work I just described and my background from working in geochemistry labs, especially those working with the type of very small samples we might get back from Mars, to be a good fit.


Given Curiosity's success on Mars, why is it important and exciting for us to be sending another rover to the Red Planet?

One thing to realize is that the surface of Mars is more or less equivalent in size to the entire continental surface area of Earth, and we've been to just a few points. It's naturally tempting to look at the few places we have been on Mars and draw grand conclusions from them, but you could imagine that if you landed in the middle of the Sahara Desert and studied Earth, you would come up with different answers than if you landed in the Amazon, for example. So that's part of it.

But the big thing that distinguishes Mars 2020 is the fact that we are preparing this cache, which is the first step in a process that will hopefully bring samples back to Earth some day. It's very clear that from the science community's point of view, this is a critical motivation for this mission.


How has the experience been working on the mission thus far?

I enjoy it very much. It's extremely different to go from a lab group of two or three people to a project that, at the end of the day, is going to have spent $1.5 billion over the next seven or eight years. It's a completely different scale of operation.

I find it really fascinating to see how everything works. I've spent my entire career among scientists. Suddenly transitioning and working with engineers is interesting because their approach and style is completely different. But they're all extremely good at what they do.

It's a lot of fun to work with these people and to face completely new and unexpected challenges. You never know what new thing is going to pop up.

Kimm Fesenmaier

Biology Made Simpler With "Clear" Tissues

In general, our knowledge of biology—and much of science in general—is limited by our ability to actually see things. Researchers who study developmental problems and disease, in particular, are often limited by their inability to look inside an organism to figure out exactly what went wrong and when.

Now, thanks to techniques developed at Caltech, scientists can see through tissues, organs, and even an entire body. The techniques offer new insight into the cell-by-cell makeup of organisms—and the promise of novel diagnostic medical applications.

"Large volumes of tissue are not optically transparent—you can't see through them," says Viviana Gradinaru (BS '05), an assistant professor of biology at Caltech and the principal investigator whose team has developed the new techniques, which are explained in a paper appearing in the journal Cell. Lipids throughout cells provide structural support, but they also prevent light from passing through the cells. "So, if we need to see individual cells within a large volume of tissue"—within a mouse kidney, for example, or a human tumor biopsy—"we have to slice the tissue very thin, separately image each slice with a microscope, and put all of the images back together with a computer. It's a very time-consuming process and it is error prone, especially if you look to map long axons or sparse cell populations such as stem cells or tumor cells," she says.

The researchers came up with a way to circumvent this long process by making an organism's entire body clear, so that it can be peered through—in 3-D—using standard optical methods such as confocal microscopy.

The new approach builds off a technique known as CLARITY that was previously developed by Gradinaru and her collaborators to create a transparent whole-brain specimen. With the CLARITY method, a rodent brain is infused with a solution of lipid-dissolving detergents and hydrogel—a water-based polymer gel that provides structural support—thus "clearing" the tissue but leaving its three-dimensional architecture intact for study.

The refined technique optimizes the CLARITY concept so that it can be used to clear other organs besides the brain, and even whole organisms. By making clever use of an organism's own network of blood vessels, Gradinaru and her colleagues—including scientific researcher Bin Yang and postdoctoral scholar Jennifer Treweek, coauthors on the paper—can quickly deliver the hydrogel and lipid-dissolving chemical solution throughout the body.

Gradinaru and her colleagues have dubbed this new technique PARS, or perfusion-assisted agent release in situ.

Once an organ or whole body has been made transparent, standard microscopy techniques can be used to look easily through a thick mass of tissue and view single cells that are genetically marked with fluorescent proteins. Even without such genetically introduced fluorescent proteins, however, the PARS technique can be used to deliver stains and dyes to individual cell types of interest. When whole-body clearing is not necessary, the method works just as well on individual organs, using a technique called PACT, short for passive CLARITY technique.

To find out if stripping the lipids from cells also removes other potential molecules of interest—such as proteins, DNA, and RNA—Gradinaru and her team collaborated with Long Cai, an assistant professor of chemistry at Caltech, and his lab. The two groups found that strands of RNA are indeed still present and can be detected with single-molecule resolution in the cells of the transparent organisms.

The Cell paper focuses on the use of PACT and PARS as research tools for studying disease and development in research organisms. However, Gradinaru and her UCLA collaborator, Rajan Kulkarni, have already found a diagnostic medical application for the methods. Using the techniques on a biopsy from a human skin tumor, the researchers were able to view the distribution of individual tumor cells within a tissue mass. In the future, Gradinaru says, the methods could be used in the clinic for the rapid detection of cancer cells in biopsy samples.

The ability to make an entire organism transparent while retaining its structural and genetic integrity has broad-ranging applications, Gradinaru says. For example, the neurons of the peripheral nervous system could be mapped throughout a whole body, as could the distribution of viruses, such as HIV, in an animal model.

Gradinaru also leads Caltech's Beckman Institute BIONIC center for optogenetics and tissue clearing and plans to offer training sessions to researchers interested in learning how to use PACT and PARS in their own labs.

"I think these new techniques are very practical for many fields in biology," she says. "When you can just look through an organism for the exact cells or fine axons you want to see—without slicing and realigning individual sections—it frees up the time of the researcher. That means there is more time to the answer big questions, rather than spending time on menial jobs."


Future Electronics May Depend on Lasers, Not Quartz

Nearly all electronics require devices called oscillators that create precise frequencies—frequencies used to keep time in wristwatches or to transmit reliable signals to radios. For nearly 100 years, these oscillators have relied upon quartz crystals to provide a frequency reference, much like a tuning fork is used as a reference to tune a piano. However, future high-end navigation systems, radar systems, and even possibly tomorrow's consumer electronics will require references beyond the performance of quartz.

Now, researchers in the laboratory of Kerry Vahala, the Ted and Ginger Jenkins Professor of Information Science and Technology and Applied Physics at Caltech, have developed a method to stabilize microwave signals in the gigahertz range (billions of cycles per second) using a pair of laser beams as the reference, in lieu of a crystal.

Quartz crystals "tune" oscillators by vibrating at relatively low frequencies—those that fall at or below the range of megahertz, or millions of cycles per second, like radio waves. However, quartz crystals are so good at tuning these low frequencies that years ago, researchers were able to apply a technique called electrical frequency division that could convert higher-frequency microwave signals into lower-frequency signals, and then stabilize these with quartz. 

The new technique, which Vahala and his colleagues have dubbed electro-optical frequency division, builds off of the method of optical frequency division, developed at the National Institute of Standards and Technology more than a decade ago. "Our new method reverses the architecture used in standard crystal-stabilized microwave oscillators—the 'quartz' reference is replaced by optical signals much higher in frequency than the microwave signal to be stabilized," Vahala says.

Jiang Li—a Kavli Nanoscience Institute postdoctoral scholar at Caltech and one of two lead authors on the paper, along with graduate student Xu Yi—likens the method to a gear chain on a bicycle that translates pedaling motion from a small, fast-moving gear into the motion of a much larger wheel. "Electrical frequency dividers used widely in electronics can work at frequencies no higher than 50 to 100 GHz. Our new architecture is a hybrid electro-optical 'gear chain' that stabilizes a common microwave electrical oscillator with optical references at much higher frequencies in the range of terahertz or trillions of cycles per second," Li says.  
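To put rough numbers on the gear-chain picture, here is a minimal sketch with purely illustrative values; the article gives only the terahertz and gigahertz ranges, so the specific frequencies below are assumptions, not figures from the Science paper.

```python
# Illustrative "gear ratio" for electro-optical frequency division.
# Two optical reference tones separated by a terahertz-scale difference
# frequency are divided down to a gigahertz-scale microwave signal.
optical_difference_hz = 1.0e12  # assumed ~1 THz separation between the two laser lines
microwave_hz = 10.0e9           # assumed ~10 GHz microwave oscillator to be stabilized

division_factor = optical_difference_hz / microwave_hz
print(f"Effective division factor: {division_factor:.0f}")  # -> 100

# A general property of frequency division: phase fluctuations of the
# reference are reduced in power by the square of the division factor
# when transferred to the divided-down signal.
print(f"Phase-noise power reduction: {division_factor**2:.0f}x")  # -> 10000x
```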

The optical reference used by the researchers is a laser that, to the naked eye, looks like a tiny disk. At only 6 mm in diameter, the device is very small, making it particularly useful in compact photonics devices—electronic-like devices powered by photons instead of electrons, says Scott Diddams, physicist and project leader at the National Institute of Standards and Technology and a coauthor on the study.

"There are always tradeoffs between the highest performance, the smallest size, and the best ease of integration. But even in this first demonstration, these optical oscillators have many advantages; they are on par with, and in some cases even better than, what is available with widespread electronic technology," Vahala says.

The new technique is described in a paper that will be published in the journal Science on July 18. Other authors on the paper include Hansuek Lee, a visiting associate at Caltech. The work was sponsored by DARPA's ORCHID and PULSE programs; the Caltech Institute for Quantum Information and Matter (IQIM), an NSF Physics Frontiers Center with support from the Gordon and Betty Moore Foundation; and the Caltech Kavli Nanoscience Institute.
