Biology Made Simpler With "Clear" Tissues

Our knowledge of biology—and of much of science—is limited by our ability to actually see things. Researchers who study developmental problems and disease, in particular, are often limited by their inability to look inside an organism to figure out exactly what went wrong and when.

Now, thanks to techniques developed at Caltech, scientists can see through tissues, organs, and even an entire body. The techniques offer new insight into the cell-by-cell makeup of organisms—and the promise of novel diagnostic medical applications.

"Large volumes of tissue are not optically transparent—you can't see through them," says Viviana Gradinaru (BS '05), an assistant professor of biology at Caltech and the principal investigator whose team has developed the new techniques, which are explained in a paper appearing in the journal Cell. Lipids throughout cells provide structural support, but they also prevent light from passing through the cells. "So, if we need to see individual cells within a large volume of tissue"—within a mouse kidney, for example, or a human tumor biopsy—"we have to slice the tissue very thin, separately image each slice with a microscope, and put all of the images back together with a computer. It's a very time-consuming process and it is error prone, especially if you look to map long axons or sparse cell populations such as stem cells or tumor cells," she says.

The researchers came up with a way to circumvent this long process by making an organism's entire body clear, so that it can be peered through—in 3-D—using standard optical methods such as confocal microscopy.

The new approach builds off a technique known as CLARITY that was previously developed by Gradinaru and her collaborators to create a transparent whole-brain specimen. With the CLARITY method, a rodent brain is infused with a solution of lipid-dissolving detergents and hydrogel—a water-based polymer gel that provides structural support—thus "clearing" the tissue but leaving its three-dimensional architecture intact for study.

The refined technique optimizes the CLARITY concept so that it can be used to clear other organs besides the brain, and even whole organisms. By making clever use of an organism's own network of blood vessels, Gradinaru and her colleagues—including scientific researcher Bin Yang and postdoctoral scholar Jennifer Treweek, coauthors on the paper—can quickly deliver the hydrogel and the lipid-dissolving chemical solution throughout the body.

Gradinaru and her colleagues have dubbed this new technique PARS, or perfusion-assisted agent release in situ.

Once an organ or whole body has been made transparent, standard microscopy techniques can be used to easily look through a thick mass of tissue to view single cells that are genetically marked with fluorescent proteins. Even without such genetically introduced fluorescent proteins, however, the PARS technique can be used to deliver stains and dyes to individual cell types of interest. And when whole-body clearing is not necessary, individual organs can be cleared just as well with a related technique called PACT, short for passive CLARITY technique.

To find out if stripping the lipids from cells also removes other potential molecules of interest—such as proteins, DNA, and RNA—Gradinaru and her team collaborated with Long Cai, an assistant professor of chemistry at Caltech, and his lab. The two groups found that strands of RNA are indeed still present and can be detected with single-molecule resolution in the cells of the transparent organisms.

The Cell paper focuses on the use of PACT and PARS as research tools for studying disease and development in research organisms. However, Gradinaru and her UCLA collaborator Rajan Kulkarni have already found a diagnostic medical application for the methods. Using the techniques on a biopsy from a human skin tumor, the researchers were able to view the distribution of individual tumor cells within a tissue mass. In the future, Gradinaru says, the methods could be used in the clinic for the rapid detection of cancer cells in biopsy samples.

The ability to make an entire organism transparent while retaining its structural and genetic integrity has broad-ranging applications, Gradinaru says. For example, the neurons of the peripheral nervous system could be mapped throughout a whole body, as could the distribution of viruses, such as HIV, in an animal model.

Gradinaru also leads Caltech's Beckman Institute BIONIC center for optogenetics and tissue clearing and plans to offer training sessions to researchers interested in learning how to use PACT and PARS in their own labs.

"I think these new techniques are very practical for many fields in biology," she says. "When you can just look through an organism for the exact cells or fine axons you want to see—without slicing and realigning individual sections—it frees up the time of the researcher. That means there is more time to the answer big questions, rather than spending time on menial jobs."


Future Electronics May Depend on Lasers, Not Quartz

Nearly all electronics require devices called oscillators that create precise frequencies—frequencies used to keep time in wristwatches or to transmit reliable signals to radios. For nearly 100 years, these oscillators have relied upon quartz crystals to provide a frequency reference, much like a tuning fork is used as a reference to tune a piano. However, future high-end navigation systems, radar systems, and possibly even tomorrow's consumer electronics will require frequency references that outperform quartz.

Now, researchers in the laboratory of Kerry Vahala, the Ted and Ginger Jenkins Professor of Information Science and Technology and Applied Physics at Caltech, have developed a method to stabilize microwave signals in the range of gigahertz, or billions of cycles per second—using a pair of laser beams as the reference, in lieu of a crystal.

Quartz crystals "tune" oscillators by vibrating at relatively low frequencies—those that fall at or below the range of megahertz, or millions of cycles per second, like radio waves. However, quartz crystals are so good at tuning these low frequencies that years ago, researchers were able to apply a technique called electrical frequency division that could convert higher-frequency microwave signals into lower-frequency signals, and then stabilize these with quartz. 

The new technique, which Vahala and his colleagues have dubbed electro-optical frequency division, builds off of the method of optical frequency division, developed at the National Institute of Standards and Technology more than a decade ago. "Our new method reverses the architecture used in standard crystal-stabilized microwave oscillators—the 'quartz' reference is replaced by optical signals much higher in frequency than the microwave signal to be stabilized," Vahala says.

Jiang Li—a Kavli Nanoscience Institute postdoctoral scholar at Caltech and one of two lead authors on the paper, along with graduate student Xu Yi—likens the method to a gear chain on a bicycle that translates pedaling motion from a small, fast-moving gear into the motion of a much larger wheel. "Electrical frequency dividers used widely in electronics can work at frequencies no higher than 50 to 100 GHz. Our new architecture is a hybrid electro-optical 'gear chain' that stabilizes a common microwave electrical oscillator with optical references at much higher frequencies in the range of terahertz or trillions of cycles per second," Li says.  
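To get a feel for the numbers behind the gear-chain analogy, the short Python sketch below uses assumed, representative values (a roughly 1 THz optical reference spacing divided down to a 10 GHz microwave output, neither figure taken from the paper) to compute the effective division ratio and the generic phase-noise benefit of frequency division by a factor N, about 20·log10(N) decibels. It illustrates the general principle of frequency division, not the specific electro-optical architecture reported by the group.

```python
import math

# Illustrative (assumed) values, not parameters from the paper.
f_optical_reference_hz = 1.0e12   # ~1 THz spacing between the two optical reference lines
f_microwave_hz = 10.0e9           # target microwave oscillator frequency, ~10 GHz

# Effective division ratio of the "gear chain"
n_division = f_optical_reference_hz / f_microwave_hz

# Dividing a frequency by N reduces its phase-noise power by N^2,
# i.e. by 20*log10(N) decibels -- why a very-high-frequency reference
# can yield an extremely quiet microwave signal.
phase_noise_improvement_db = 20 * math.log10(n_division)

print(f"Division ratio N = {n_division:.0f}")
print(f"Phase-noise reduction from division: {phase_noise_improvement_db:.0f} dB")
```

With these assumed numbers the division ratio is 100, corresponding to a 40-decibel reduction in phase noise relative to the optical reference.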

The optical reference used by the researchers is a laser that, to the naked eye, looks like a tiny disk. At only 6 mm in diameter, the device is very small, making it particularly useful in compact photonics devices—electronic-like devices powered by photons instead of electrons, says Scott Diddams, physicist and project leader at the National Institute of Standards and Technology and a coauthor on the study.

"There are always tradeoffs between the highest performance, the smallest size, and the best ease of integration. But even in this first demonstration, these optical oscillators have many advantages; they are on par with, and in some cases even better than, what is available with widespread electronic technology," Vahala says.

The new technique is described in a paper that will be published in the journal Science on July 18. Other authors on this paper include Hansuek Lee, who is a visiting associate at Caltech. The work was sponsored by DARPA's ORCHID and PULSE programs; the Caltech Institute for Quantum Information and Matter (IQIM), an NSF Physics Frontiers Center with support of the Gordon and Betty Moore Foundation; and the Caltech Kavli NanoScience Institute.


Corals Provide Clues for Climate Change Research

Just as growth rings can offer insight into climate changes occurring during the lifespan of a tree, corals have much to tell about changes in the ocean. At Caltech, climate scientists Jess F. Adkins and Nivedita Thiagarajan use manned submersibles, such as Alvin, operated by the Woods Hole Oceanographic Institution, to dive thousands of meters below the surface to collect these specimens—and to shed new light on the connection between variance in carbon dioxide (CO2) levels in the deep ocean and historical glacial cycles.

A paper describing the research appears in the July 3 issue of Nature.

It has long been known that ice sheets wax and wane as the concentration of CO2 decreases and increases in the atmosphere. Adkins and his team believe that the deep ocean—which stores 60 times more inorganic carbon than is found in the atmosphere—must play a vital role in this variance.

To investigate this, the researchers analyzed the calcium carbonate skeletons of corals collected from deep in the North Atlantic Ocean. The corals grew between 11,000 and 18,000 years ago, building their skeletons from CO2 dissolved in the ocean.

"We used a new technique that has been developed at Caltech, called clumped isotope thermometry, to determine what the temperature of the ocean was in the location where the coral grew," says Thiagarajan, the Dreyfus Postdoctoral Scholar in Geochemistry at Caltech and lead author of the paper. "We also used radiocarbon dating and uranium-series dating to estimate the deep-ocean ventilation rate during this time period." 

The researchers found that the deep ocean started warming before the start of a rapid climate change event about 14,600 years ago in which the last glacial period—or most recent time period when ice sheets covered a large portion of Earth—was in the final stages of transitioning to the current interglacial period.

"We found that a warm-water-under-cold-water scenario developed around 800 years before the largest signal of warming in the Greenland ice cores, called the 'Bølling–Allerød,'" explains Adkins. "CO2 had already been rising in the atmosphere by this time, but we see the deep-ocean reorganization brought on by the potential energy release to be the pivot point for the system to switch from a glacial state, where the deep ocean can hold onto CO2, and an interglacial state, where it lets out CO2."  

"Studying Earth's climate in the past helps us understand how different parts of the climate system interact with each other," says Thiagarajan. "Figuring out these underlying mechanisms will help us predict how climate will change in the future." 

Additional authors on the Nature paper, "Abrupt pre-Bølling–Allerød warming and circulation changes in the deep ocean," are geochemist John M. Eiler and graduate student Adam V. Subhas from Caltech, and John R. Southon from UC Irvine. 

Writer: 
Katie Neith

Neuroeconomists Confirm Warren Buffett's Wisdom

Brain Research Suggests an Early Warning Signal Tips Off Smart Traders

Investment magnate Warren Buffett has famously suggested that investors should try to "be fearful when others are greedy and be greedy only when others are fearful."

That turns out to be excellent advice, according to the results of a new study by researchers at Caltech and Virginia Tech that looked at the brain activity and behavior of people trading in experimental markets where price bubbles formed. In such markets, where price far outpaces actual value, it appears that wise traders receive an early warning signal from their brains—a warning that makes them feel uncomfortable and urges them to sell, sell, sell.

"Seeing what's going on in people's brains when they are trading suggests that Buffett was right on target," says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at Caltech.  

That is because in their experimental markets, Camerer and his colleagues found two distinct types of activity in the brains of participants—one that made a small fraction of participants nervous and prompted them to sell their experimental shares even as prices were on the rise, and another that was much more common and made traders behave in a greedy way, buying aggressively during the bubble and even after the peak. The lucky few who received the early warning signal got out of the market early, ultimately causing the bubble to burst, and earned the most money. The others displayed what former Federal Reserve chairman Alan Greenspan called "irrational exuberance" and lost their proverbial shirts.

A paper about the experiment and the team's findings appears this week in the journal Proceedings of the National Academy of Sciences. Alec Smith, the lead author on the paper, is a visiting associate at Caltech. Additional coauthors are from the Virginia Tech Carilion Research Institute.

The researchers set up a simple experimental market in which they were able to control the fundamental, or actual, value of a traded risky asset. In each of 16 sessions, about 20 participants were told how an on-screen trading market worked and were given 100 units of experimental currency and six shares of the risky asset. Then, over the course of 50 trading periods, the traders indicated by pressing keyboard buttons whether they wanted to buy, sell, or hold shares at various prices.  

Given the way the experiment was set up, the fundamental price of the risky asset was 14 currency units. Yet in many sessions, the traded price rose well above that—sometimes three to five times as high—creating bubble markets that eventually crashed.
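As a toy illustration of what "fundamental value" means in such a market (the experiment's actual dividend rules are not spelled out here, so the payoff scheme below is purely hypothetical and chosen only to match the stated 14-unit fundamental), the fundamental price is simply the expected value of the asset's payoffs, and a bubble is a traded price well above that expectation.

```python
# Hypothetical payoff structure, chosen only so the expected value equals the
# 14-currency-unit fundamental mentioned above; the real experiment's rules may differ.
payoffs = [0, 28]            # possible terminal payoffs per share
probabilities = [0.5, 0.5]   # assumed equal likelihood

fundamental_value = sum(p * v for p, v in zip(probabilities, payoffs))
traded_price = 56            # e.g., a bubble price several times the fundamental

print(f"Fundamental value: {fundamental_value} units")                      # 14.0
print(f"Traded price / fundamental: {traded_price / fundamental_value:.1f}x")  # 4.0x
```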

During the experiment, two or three additional subjects per session also participated in the market while having their brains scanned by a functional magnetic resonance imaging (fMRI) machine. In fMRI, blood flow is monitored and used as a proxy for brain activation. If a brain region shows a relatively high level of blood oxygenation during a task, that region is thought to be particularly active.

At the end of the experiment, the researchers first sought to understand the behavioral data—the choices the participants made and the resulting market activity—before analyzing the fMRI scans.

"The first thing we saw was that even in an environment where you don't have squawking heads and all kinds of other information being fed to people, you can get bubbles just through pricing dynamics that occur naturally," says Camerer. This finding is at odds with what some economists have held—that bubbles are rare or are caused by misinformation or hype.

Next, the researchers divided the participants into three categories based on their earnings during their 50 trading periods—low, medium, and high earners. They found that the low earners tended to be momentum buyers who started buying as prices went up and then kept buying even as prices tanked. The middle-of-the-road folks didn't take many risks at all and, as a result, neither made nor lost the most money. And the traders who earned the most bought early and sold when prices were on the rise.

"The high-earning traders are the most interesting people to us," Camerer says. "Emotionally, they have to do something really hard: sell into a rising market. We thought that something must be going on in their brains that gives them an early warning signal."

To reveal what was actually occurring in the brains of the subjects—and the nature of that warning signal—Camerer and his colleagues analyzed the fMRI scans. Using this data, the researchers first looked for an area of the brain that was unusually active when the results screen came up that told participants their outcome for the last trading period. It turned out that a region called the nucleus accumbens (NAcc) lit up at that time in all participants, showing more activity when shares were bought or sold. The NAcc is associated with reward processing—it lights up when people are given expected rewards such as money or juice or a smile, for example. So it was not particularly surprising to see that the NAcc was activated when traders found out how their gambles paid off.

What was surprising, though, was that low earners were very sensitive to activity in the NAcc: when they experienced the most activity in the NAcc, they bought a lot of the risky asset. "That is a correlation we can call irrational exuberance," Camerer says. "Exuberance is the brain signal, and the irrational part is buying so many shares. The people who make the most money have low sensitivity to the same brain signal. Even though they're having the same mental reaction, they're not translating it into buying as aggressively."
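One simple way to make this notion of "sensitivity" concrete (an illustrative sketch, not necessarily the authors' exact analysis) is to regress each trader's buying on their trial-by-trial NAcc signal and compare the fitted slopes; the numbers below are invented solely for illustration.

```python
import numpy as np

# Invented illustrative numbers: per-trial NAcc activation (arbitrary units)
# and shares bought on that trial, for two hypothetical traders.
nacc_low_earner  = np.array([0.1, 0.4, 0.7, 0.9, 1.2])
buys_low_earner  = np.array([1,   2,   4,   5,   7])

nacc_high_earner = np.array([0.1, 0.4, 0.7, 0.9, 1.2])
buys_high_earner = np.array([1,   1,   2,   1,   2])

def sensitivity(signal, buys):
    """Slope of a least-squares fit of buying against the brain signal."""
    slope, _intercept = np.polyfit(signal, buys, 1)
    return slope

print(f"Low earner sensitivity:  {sensitivity(nacc_low_earner, buys_low_earner):.2f}")
print(f"High earner sensitivity: {sensitivity(nacc_high_earner, buys_high_earner):.2f}")
```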

Returning to the question of the high earners and their early warning signal, the researchers hypothesized that a part of the brain called the insular cortex, or insula, might be serving as that bellwether. The insula was a good candidate because previous studies had linked it to financial uncertainty and risk aversion. It is also known to reflect negative emotions associated with bodily sensations such as being shocked or smelling something disgusting, or even with feelings of social discomfort like those that come with being treated unfairly or being excluded.

Looking at the brain data of the high earners, the researchers found that insula activity did indeed increase shortly before the traders switched from buying to selling. And again, Camerer notes, "The prices were still going up at that time, so they couldn't be making pessimistic predictions just based on the recent price trend. We think this is a real warning signal."

Meanwhile, in the low earners, insula activity actually decreased, perhaps allowing their irrational exuberance to continue unchecked.  

Read Montague, director of the Human Neuroimaging Laboratory at the Virginia Tech Carilion Research Institute and one of the paper's senior authors, emphasizes the importance of group dynamics, or group thinking, in the study. "Individual human brains are indeed powerful alone, but in groups we know they can build bridges, spacecraft, microscopes, and even economic systems," he says. "This is one of the next frontiers in neuroscience—understanding the social mind."

Additional coauthors on the paper, "Irrational exuberance and neural warning signals during endogenous experimental market bubbles," include Terry Lohrenz and Justin King of Virginia Tech Carilion Research Institute in Roanoke, Virginia. Montague is also a professor at the Wellcome Trust Centre for Neuroimaging at University College London. The work was supported by the National Science Foundation, the Gordon and Betty Moore Foundation, and the Lipper Family Foundation.

Writer: 
Kimm Fesenmaier

Sorting Out Emotions

Evaluating another person's emotions based on facial expressions can sometimes be a complex task. As it turns out, this process isn't so easy for the brain to sort out either. Building on previous studies targeting the amygdala, a region in the brain known to be important for the processing of emotional reactions, a team of researchers from Caltech, Cedars-Sinai Medical Center, and Huntington Memorial Hospital in Pasadena has found that some brain cells recognize emotions based on the viewer's preconceptions rather than the true emotion being expressed. In other words, it's possible for the brain to be biased. The team recorded these responses from single neurons using electrodes already placed in the brains of patients who were being treated for epilepsy. Participants were shown images of partially obscured faces showing either happiness or fear and were asked to guess the emotion being shown. According to the researchers, the brain responded similarly whether or not the patient guessed the correct emotion.

"These are very exciting findings suggesting that the amygdala doesn't just respond to what we see out there in the world, but rather to what we imagine or believe about the world," says Ralph Adolphs, the Bren Professor of Psychology and Neuroscience at Caltech and coauthor of a paper that discusses the team's study.  "It's particularly interesting because the amygdala has been linked to so many psychiatric diseases, ranging from anxiety to depression to autism.  All of those diseases are about experiences happening in the minds of the patients, rather than objective facts about the world that everyone shares."

What's next? Says Shuo Wang, a postdoctoral fellow at Caltech and first author of the paper, "Of course, the amygdala doesn't accomplish anything by itself. What we need to know next is what happens elsewhere in the brain, so we need to record not only from the amygdala, but also from other brain regions with which the amygdala is connected."

The paper, which also included Caltech postdoctoral scholar Oana Tudusciuc, was published on June 30 in the Early Edition of the Proceedings of the National Academy of Sciences.

Writer: 
Katie Neith

Kip Thorne Discusses First Discovery of Thorne-Żytkow Object

In 1975, Kip Thorne (BS '62), now the Richard P. Feynman Professor of Theoretical Physics, Emeritus, and then-Caltech postdoctoral fellow Anna Żytkow sought the answer to an intriguing question: Would it be possible to have a star that had a neutron star as its core—that is, a hot, dense star composed entirely of neutrons within another more traditional star? Thorne and Żytkow predicted that if a neutron star were at the core of another star, the host star would be a red supergiant—an extremely large, luminous star—and that such red supergiants would have peculiar abundances of elements. Researchers who followed this line of inquiry referred to this hypothetical type of star as a Thorne-Żytkow object (TŻO).

Nearly 40 years later, astronomers believe they may have found such an object: a star labeled HV 2112 and located in the Small Magellanic Cloud, a dwarf galaxy that is a near neighbor of the Milky Way and visible to the naked eye. HV 2112 was identified as a TŻO candidate with the 6.5-meter Magellan Clay telescope on Las Campanas in Chile by Emily Levesque (University of Colorado), Philip Massey (Lowell Observatory; BS '75, MS '75, Caltech), Żytkow (now at the University of Cambridge), and Nidia Morrell (Las Campanas Observatory).

We recently sat down with Thorne to ask how it feels to have astronomers discover something whose existence he postulated decades before.

When you came up with the idea of TŻOs, were you trying to explain anything that had been observed, or was it a simple "what if?" speculation?

It was totally theoretical. We weren't the first people to ask the question either. In the mid-1930s, theoretical physicist George Gamow speculated about these kinds of objects and wondered if even our sun might have a neutron star in its core. That was soon after Caltech's Fritz Zwicky conceived the idea of a neutron star. But Gamow never did anything quantitative with his speculations.

The idea of seriously pursuing what these things might look like was due to Bohdan Paczynski, a superb astrophysicist on the faculty of the University of Warsaw. In the early 1970s, he would shuttle back and forth between Caltech, where he would spend about three months a year, and Warsaw, where he stayed for nine months. He had a real advantage over everybody else during this era when people were trying to understand stellar structure and stellar evolution in depth. Nine months of the year he didn't have a computer available, so he had to think. Then during the three months he was at Caltech, he could compute.

Paczynski was the leading person in the world in understanding the late stages of the evolution of stars. He suggested to his postdoctoral student Anna Żytkow that she look into this idea of stars with neutron cores; Anna then came to Caltech for a second postdoc and easily talked me into joining her on the project. I had the expertise in relativity, and she had a lot better understanding of the astrophysics of stars than I did. So it became a very enjoyable collaboration. For me it was a learning process. As one often does as a professor, I learned from working with a superb postdoc who had key knowledge and experience that I did not have.

What were the properties of TŻOs as you and Żytkow theorized them?

We didn't know in advance what they would look like, though we thought—correctly it turns out—that they would be red supergiants. Our calculations showed that if the star was heavier than about 11 suns, it would have a shell of burning material around the neutron core, a shell that would generate new elements as it burned. Convection, the circulation of hot gas inside the star, would reach right into the burning shell and carry the products of burning all the way to the surface of the star long before the burning was complete. This convection, reaching into a burning shell, was unlike anything seen in any other kind of star.

Is this how you get different elements in TŻOs than those ordinarily seen on the surface of a star?

That's right. We could see that the elements produced would be peculiar, but our calculations were not good enough to make this quantitative. In the 1990s, a graduate student of mine named Garrett Biehle (PhD '93) worked out, with considerable reliability, what the products of nuclear burning would be. He predicted unusually large amounts of rubidium and molybdenum; and a bit later Philipp Podsiadlowski, Robert Cannon, and Martin Rees at the University of Cambridge showed there would also be a lot of lithium.

It is excess rubidium, molybdenum, and lithium that Żytkow and her colleagues have found in HV 2112.

Does that mean TŻOs are fairly easy to recognize with a spectrographic analysis, which can determine the elements of a star?

No, it's not easy! TŻOs should have a unique signature, but these objects would be pretty rare.

What are the circumstances in which a TŻO would develop?

As far as we understand it, the most likely way these things form is that a neutron star cannibalizes the core of a companion star. You have a neutron star orbiting around a companion star, and they spiral together, and the neutron star takes up residence in the core of the companion. Bohdan Paczynski and Jerry Ostriker, an astrophysicist at Princeton University, speculated this would happen way back in 1975 while I was doing my original work with Żytkow, and subsequent analyses have confirmed it.

The other way a TŻO might develop is from the supernova explosion that makes the neutron star. In a supernova that creates a neutron star, matter is ejected in an asymmetric way. Occasionally these kicks resulting from the ejection of matter will drive the neutron star into the interior of the companion star, according to analyses by Peter Leonard and Jack Hills at Los Alamos, and Rachel Dewey at JPL.

Is there anything other than peculiar element abundances that would indicate a TŻO? Does it look different from other red supergiant stars?

TŻOs are the most highly luminous of red supergiant stars but not so much so that you could pick them out from the crowd: all red supergiants are very bright. I think the only way to identify them is through these element abundances.

Are you convinced that this star discovered by Żytkow and her colleagues is a TŻO?

The evidence that HV 2112 is a TŻO is strong but not ironclad. Certainly it's by far the best candidate for a TŻO that anyone has seen, but additional confirmation is needed.

How does it feel to hear that something you imagined on paper so long ago has been seen out in the universe?

It's certainly satisfying. It's an area of astrophysics that I dipped into briefly and then left. That's one of the lovely things about being a theorist: you can dip into a huge number of different areas. One of the things I've most enjoyed about my career is moving from one area to another and learning new astrophysics. Anna Żytkow deserves the lion's share of the credit for this finding. She pushed very hard on observers to get some good telescope time. It's her tenacity more than anything else that made this happen.

What are you working on now that you are retired?

I'm an executive producer of the film Interstellar, directed by Christopher Nolan and based in part on the science I've done during my Caltech career. Greater secrecy surrounds Interstellar than most any movie that's been done in Hollywood. I'm not allowed to talk about it, but let's just say that I've been spending a lot of my time on it in the last year. And I've recently finished writing a book about the science in Interstellar.

The other major project I'm wrapping up is a textbook that I've written with Roger Blandford [formerly a professor at Caltech; now on the faculty at Stanford]: Modern Classical Physics. It's based on a course that Roger or I taught every other year at Caltech from 1980 until my retirement in 2009. It covers fluid mechanics, elasticity, optics, statistical physics, plasma physics, and curved space-time—that is, everything in classical physics that any PhD physicist should be familiar with, but usually isn't. This week we delivered the manuscript to the copy editor. After 34 years of developing this monumental treatise/textbook, it's quite a relief.

I'm also working with some of my former students and postdocs on trying to understand the nonlinear dynamics of curved space-time. For this we gain insights from numerical relativity: simulations of the collisions of spinning black holes. But I've had to shelve this work for the past half year due to the pressures of the movie and books. I hope to return to it soon.

Writer: 
Cynthia Eller

Watching Nanoscale Fluids Flow

At the nanoscale, where objects are measured in billionths of meters and events transpire in trillionths of seconds, things do not always behave as our experiences with the macro-world might lead us to expect. Water, for example, seems to flow much faster within carbon nanotubes than classical physics says should be possible. Now imagine trying to capture movies of these almost imperceptibly small nanoscale movements.

Researchers at Caltech now have done just that by applying a new imaging technique called four-dimensional (4D) electron microscopy to the nanofluid dynamics problem. In a paper appearing in the June 27 issue of Science, Ahmed Zewail, the Linus Pauling Professor of Chemistry and professor of physics, and Ulrich Lorenz, a postdoctoral scholar in chemistry, describe how they visualized and monitored the flow of molten lead within a single zinc oxide nanotube in real time and space.

The 4D microscopy technique was developed in the Physical Biology Center for Ultrafast Science and Technology at Caltech, created and directed by Zewail to advance understanding of the fundamental physics of chemical and biological behavior. 

In 4D microscopy, a stream of ultra-fast-moving electrons bombards a sample in a carefully timed manner. Each electron scatters off the sample, producing a still image that represents a single moment, just a femtosecond—or a millionth of a billionth of a second—in duration. Millions of the still images can then be stitched together to produce a digital movie of nanoscale motion.
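A minimal sketch of the bookkeeping behind such a movie, using assumed illustrative numbers rather than the experiment's actual parameters: each timed exposure yields a still at a known delay, and ordering the stills by delay produces the movie.

```python
# Illustrative only; the step size and frame count are assumptions,
# not parameters from the experiment.
frame_interval_fs = 10.0   # assumed delay step between timed exposures, in femtoseconds
num_frames = 2_000_000     # "millions of still images"

# Each still is tagged with the delay at which it was captured; sorting the
# stills by delay turns them into a movie of the nanoscale motion.
first_delays_fs = [i * frame_interval_fs for i in range(5)]
total_span_ns = (num_frames - 1) * frame_interval_fs * 1e-6   # 1 fs = 1e-6 ns

print("first few frame delays (fs):", first_delays_fs)
print(f"{num_frames:,} frames at {frame_interval_fs} fs steps span ~{total_span_ns:.0f} ns of motion")
```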

In the new work, Lorenz and Zewail used single laser pulses to melt the lead cores of individual zinc oxide nanotubes and then, using 4D microscopy, captured how the hot pressurized liquid moved within the tubes—sometimes splitting into multiple segments, producing tiny droplets on the outside of the tube, or causing the tubes to break. Lorenz and Zewail also measured the friction experienced by the liquid in the nanotube.

"These observations are particularly significant because visualizing the behavior of fluids at the nanoscale is essential to our understanding of how materials and biological channels effectively transport liquids," says Zewail. In 1999, Zewail won the Nobel Prize for his development of femtosecond chemistry.

The paper is titled "Observing liquid flow in nanotubes by 4D electron microscopy." The work was supported by the National Science Foundation, the Air Force Office of Scientific Research, and the Gordon and Betty Moore Foundation. Lorenz was partially supported by a fellowship from the Swiss National Science Foundation.

Writer: 
Kimm Fesenmaier

Caltech-Led Team Develops a Geothermometer for Methane Formation

Methane is a simple molecule consisting of just one carbon atom bound to four hydrogen atoms. But that simplicity belies the complex role the molecule plays on Earth—it is an important greenhouse gas, is chemically active in the atmosphere, is used in many ecosystems as a kind of metabolic currency, and is the main component of natural gas, which is an energy source.

Methane also poses a complex scientific challenge: it forms through a number of different biological and nonbiological processes under a wide range of conditions. For example, microbes that live in cows' stomachs make it; it forms by thermal breakdown of buried organic matter; and it is released by hot hydrothermal vents on the sea floor. And, unlike many other, more structurally complex molecules, simply knowing its chemical formula does not necessarily reveal how it formed. Therefore, it can be difficult to know where a sample of methane actually came from.

But now a team of scientists led by Caltech geochemist John M. Eiler has developed a new technique that can, for the first time, determine the temperature at which a natural methane sample formed. Since methane produced biologically in nature forms below about 80°C, and methane created through the thermal breakdown of more complex organic matter forms at higher temperatures (reaching 160°C–200°C, depending on the depth of formation), this determination can aid in figuring out how and where the gas formed.

A paper describing the new technique and its first applications as a geothermometer appears in a special section about natural gas in the current issue of the journal Science. Former Caltech graduate student Daniel A. Stolper (PhD '14) is the lead author on the paper.

"Everyone who looks at methane sees problems, sees questions, and all of these will be answered through basic understanding of its formation, its storage, its chemical pathways," says Eiler, the Robert P. Sharp Professor of Geology and professor of geochemistry at Caltech.

"The issue with many natural gas deposits is that where you find them—where you go into the ground and drill for the methane—is not where the gas was created. Many of the gases we're dealing with have moved," says Stolper. "In making these measurements of temperature, we are able to really, for the first time, say in an independent way, 'We know the temperature, and thus the environment where this methane was formed.'"

Eiler's group determines the sources and formation conditions of materials by looking at the distribution of heavy isotopes—species of atoms that have extra neutrons in their nuclei and therefore have different chemistry. For example, the most abundant form of carbon is carbon-12, which has six protons and six neutrons in its nucleus. However, about 1 percent of all carbon possesses an extra neutron, which makes carbon-13. Chemicals compete for these heavy isotopes because they slow molecular motions, making molecules more stable. But these isotopes are also very rare, so there is a chemical tug-of-war between molecules, which ends up concentrating the isotopes in the molecules that benefit most from their stabilizing effects. Similarly, the heavy isotopes like to bind, or "clump," with each other, meaning that there will be an excess of molecules containing two or more of the isotopes compared to molecules containing just one. This clumping effect is strong at low temperatures and diminishes at higher temperatures. Therefore, determining how many of the molecules in a sample contain heavy isotopes clumped together can tell you something about the temperature at which the sample formed.
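In the standard clumped-isotope bookkeeping (sketched below with made-up numbers; the actual temperature calibration comes from the laboratory experiments described later in the article, not from this toy calculation), the "clumping" is reported as the deviation, in per mil, of a measured doubly substituted isotopologue's abundance from what a purely random distribution of the heavy isotopes would predict; that deviation shrinks as formation temperature rises.

```python
# Sketch of clumped-isotope bookkeeping with made-up example numbers.
# Bulk isotope ratios of a hypothetical methane sample:
r13 = 0.0112      # 13C/12C ratio
rD  = 0.00015     # D/H ratio

# If heavy isotopes were distributed at random ("stochastically"), the expected
# ratio of 13CH3D to 12CH4 is roughly the product of the bulk ratios times the
# 4 possible positions the single D atom can occupy.
r_13ch3d_stochastic = r13 * (4 * rD)

# Measured 13CH3D/12CH4 ratio (invented value, slightly enriched because
# clumping of heavy isotopes is favored at low temperature).
r_13ch3d_measured = r_13ch3d_stochastic * 1.005

# Clumping anomaly in per-mil notation; a larger Delta implies colder formation.
delta_13ch3d = (r_13ch3d_measured / r_13ch3d_stochastic - 1) * 1000
print(f"Delta(13CH3D) = {delta_13ch3d:.1f} per mil")
```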

Eiler's group has previously used such a "clumped isotope" technique to determine the body temperatures of dinosaurs, ground temperatures in ancient East Africa, and surface temperatures of early Mars. Those analyses looked at the clumping of carbon-13 and oxygen-18 in various minerals. In the new work, Eiler and his colleagues were able to examine the clumping of carbon-13 and deuterium (hydrogen-2).

The key enabling technology was a new mass spectrometer that the team designed in collaboration with Thermo Fisher, mixing and matching existing technologies to piece together a new platform. The prototype spectrometer, the Thermo IRMS 253 Ultra, is equipped to analyze samples in a way that measures the abundances of several rare versions, or isotopologues, of the methane molecule, including two "clumped isotope" species: 13CH3D, which has both a carbon-13 atom and a deuterium atom, and 12CH2D2, which includes two deuterium atoms.

Using the new spectrometer, the researchers first tested gases they made in the laboratory to make sure the method returned the correct formation temperatures.

They then moved on to analyze samples taken from environments where much is known about the conditions under which methane likely formed. For example, sometimes when methane forms in shale, an impermeable rock, it is trapped and stored, so that it cannot migrate from its point of origin. In such cases, detailed knowledge of the temperature history of the rock constrains the possible formation temperature of methane in that rock. Eiler and Stolper analyzed samples of methane from the Haynesville Shale, located in parts of Arkansas, Texas, and Louisiana, where the shale is not thought to have moved much after methane generation. And indeed, the clumped isotope technique returned a range of temperatures (169°C–207°C) that correspond well with current reservoir temperatures (163°C–190°C). The method was also spot-on for methane collected from gas that formed as a product of oil-eating bugs living on top of oil reserves in the Gulf of Mexico. It returned temperatures of 34°C and 48°C plus or minus 8°C for those samples, and the known temperatures of the sampling locations were 42°C and 48°C, respectively.

To validate further the new technique, the researchers next looked at methane from the Marcellus Shale, a formation beneath much of the Appalachian basin, where the gas-trapping rock is known to have formed at high temperature before being uplifted into a cooler environment. The scientists wanted to be sure that the methane did not reset to the colder temperature after formation. Using their clumped isotope technique, the researchers verified this, returning a high formation temperature.

"It must be that once the methane exists and is stable, it's a fossil remnant of what its formation environment was like," Eiler says. "It only remembers where it formed."

An important application of the technique is suggested by the group's measurements of methane from the Antrim Shale in Michigan, where groundwater contains both biologically and thermally produced methane. Clumped isotope temperatures returned for samples from the area clearly revealed the different origins of the gases, hitting about 40°C for a biologically produced sample and about 115°C for a sample involving a mix of biologically and thermally produced methane.

"There are many cases where it is unclear whether methane in a sample of groundwater is the product of subsurface biological communities or has leaked from petroleum-forming systems," says Eiler. "Our results from the Antrim Shale indicate that this clumped isotope technique will be useful for distinguishing between these possible sources."

One final example, from the Potiguar Basin in Brazil, demonstrates another way the new method will serve geologists. In this case the methane was dissolved in oil and had been free to migrate from its original location. The researchers initially thought there was a problem with their analysis because the temperature they returned was much higher than the known temperature of the oil. However, recent evidence from drill core rocks from the region shows that the deepest parts of the system actually got very hot millions of years ago. This has led to a new interpretation suggesting that the methane gas originated deep in the system at high temperatures and then percolated up and mixed into the oil.

"This shows that our new technique is not just a geothermometer for methane formation," says Stolper. "It's also something you can use to think about the geology of the system."

The paper is titled "Formation temperatures of thermogenic and biogenic methane." Along with Eiler and Stolper, additional coauthors are Alex L. Sessions, professor of geobiology at Caltech; Michael Lawson and Cara L. Davis of ExxonMobil Upstream Research Company; Alexandre A. Ferreira and Eugenio V. Santos Neto of Petrobas Research and Development Center; Geoffrey S. Ellis and Michael D. Lewan of the U.S. Geological Survey in Denver; Anna M. Martini of Amherst College; Yongchun Tang of the Power, Environmental, and Energy Research Institute in Covina, California; and Martin Schoell of GasConsult International Inc. in Berkeley, California. The work was supported by the National Science Foundation, Petrobras, and ExxonMobil.

Writer: 
Kimm Fesenmaier

Growing Unknown Microbes One by One

A new technique developed at Caltech helps grow individual species of the unknown microbes that live in the human body.

Trillions of bacteria live in and on the human body; a few species can make us sick, but many others keep us healthy by boosting digestion and preventing inflammation. Although there's plenty of evidence that these microbes play a collective role in human health, we still know very little about most of the individual bacterial species that make up these communities. Using a specially designed glass chip with tiny compartments, Caltech researchers now provide a way to target and grow specific microbes from the human gut—a key step in understanding which bacteria are helpful to human health and which are harmful.

The work was published the week of June 23 in the Proceedings of the National Academy of Sciences.

Although a few bacterial species are easy to grow in the laboratory, needing only a warm environment and plenty of food to multiply, most species that grow in and on the human body have never been successfully grown in lab conditions. It's difficult to recreate the complexity of the microbiome—the entire human microbial community—in one small plate (a lidded dish with nutrients used to grow microbes), says Rustem Ismagilov, Ethel Wilson Bowles and Robert Bowles Professor of Chemistry and Chemical Engineering at Caltech.

There are thousands of species of microbes in one sample from the human gut, Ismagilov says, "but when you grow them all together in the lab, the faster-growing bacteria will take over the plate and the slow-growing ones don't have a chance—leading to very little diversity in the grown sample." Finding slow-growing microbes of interest is like finding a needle in a haystack, he says, but his group wanted to work out a way to "just grow the needle without growing the hay."

To do this, Liang Ma, a postdoctoral scholar in Ismagilov's lab, developed a way to isolate and cultivate individual bacterial species of interest. He and his colleagues began by looking for bacterial species that contained a set of specific genetic sequences. The targeted gene sequences belong to organisms on the list of "Most Wanted" microbes—a list developed by the National Institutes of Health (NIH) Human Microbiome Project. The microbes carrying these genetic sequences are found abundantly in and on the human body, but have been difficult to grow in the lab.

To grow these elusive microbes, the Caltech researchers turned to SlipChip, a microfluidic device previously developed in Ismagilov's lab. SlipChip is made up of two glass slides, each the size of a credit card, that have tiny etched grooves which become channels when the grooved surfaces are stacked atop one another. When a sample—say, a jumbled-up assortment of bacteria species collected from a colonoscopy biopsy—is added to the interconnected channels of the SlipChip, a single "slip" of the top chip will turn the channels into individual wells, with each well ideally holding a single microbe. Once sequestered in an isolated well, each individual bacterium can divide and grow without having to compete for resources with other types of faster-growing microbes.
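Whether a given well ends up holding exactly one microbe is a question of dilution statistics. A standard way to reason about stochastic compartmentalization of this sort (a general model, not necessarily the loading calculation used for SlipChip) is the Poisson distribution, sketched below with assumed loading densities.

```python
import math

def poisson_pmf(k, lam):
    """Probability of k cells landing in a well when the average occupancy is lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Assumed loading: the sample is diluted so each well receives, on average,
# lam cells. These numbers are illustrative, not from the paper.
for lam in (0.1, 0.5, 1.0):
    p_empty  = poisson_pmf(0, lam)
    p_single = poisson_pmf(1, lam)
    print(f"mean {lam} cells/well: {p_single:.1%} of wells hold exactly one cell, "
          f"{p_empty:.1%} are empty")
```

The trade-off the model makes visible: dilute loading gives mostly empty wells but ensures that occupied wells usually hold a single cell, which is what allows a slow-growing species to multiply without competition.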

The researchers then needed to determine which compartment of the SlipChip contained a colony of the target bacterium—which is not a simple task, says Ismagilov. "It's a Catch-22—you have to kill the organism in order to find its DNA sequence and figure out what it is, but you want a live organism at the end of the day, so that you can grow and study this new microbe," he says. "Liang solves this in a really clever way; he grows a compartment full of his target microbe in the SlipChip, then he splits the compartment in half. One half contains the live organism and the other half is sacrificed for its DNA to confirm that the sequence is that of the target microbe."

The method of creating two halves in each well in the SlipChip will be published separately in an upcoming issue of the journal Integrative Biology.

To validate the new methodology, the researchers isolated one specific bacterium from the Human Microbiome Project's "Most Wanted" list. The investigators used the SlipChip to grow this bacterium in a tiny volume of the washing fluid that was used to collect the gut bacteria sample from a volunteer. Since bacteria often depend on nutrients and signals from the extracellular environment to support growth, the substances from this fluid were used to recreate this environment within the tiny SlipChip compartment—a key to successfully growing the difficult organism in the lab.

After growing a pure culture of the previously unidentified bacterium, Ismagilov and his colleagues obtained enough genetic material to sequence a high-quality draft genome of the organism. Although a genomic sequence of the new organism is a useful tool, further studies are needed to learn how this species of microbe is involved in human health, Ismagilov says.

In the future, the new SlipChip technique may be used to isolate additional previously uncultured microbes, allowing researchers to focus their efforts on important targets, such as those that may be relevant to energy applications and the production of probiotics. The technique, says Ismagilov, allows researchers to target specific microbes in a way that was not previously possible.

The paper is titled "Gene-targeted microfluidic cultivation validated by isolation of a gut bacterium listed in Human Microbiome Project's Most Wanted taxa." In addition to Liang and Ismagilov, other coauthors include, from Caltech, associate scientist Mikhail A. Karymov, graduate student Jungwoo Kim, and postdoctoral scholar Roland Hatzenpichler, and, from the University of Chicago department of medicine, Nathanial Hubert, Ira M. Hanan, and Eugene B. Chang. The work was funded by NIH's National Human Genome Research Institute. Microfluidic technologies developed by Ismagilov's group have been licensed to Emerald BioStructures, RanDance Technologies, and SlipChip Corporation, of which Ismagilov is a cofounder.


Earth-Building Bridgmanite

Our planet's most abundant mineral now has a name

Deep below the earth's surface lies a thick, rocky layer called the mantle, which makes up the majority of our planet's volume. For decades, scientists have known that most of the lower mantle is a silicate mineral with a perovskite structure that is stable under the high-pressure and high-temperature conditions found in this region. Although synthetic examples of this composition have been well studied, no naturally occurring samples had ever been found in a rock on the earth's surface. Thanks to the work of two scientists, naturally occurring silicate perovskite has been found in a meteorite, making it eligible for a formal mineral name.

The mineral, dubbed bridgmanite, is named in honor of Percy Bridgman, a physicist who won the 1946 Nobel Prize in Physics for his fundamental contributions to high-pressure physics.

"The most abundant mineral of the earth now has an official name," says Chi Ma, a mineralogist and director of the Geological and Planetary Sciences division's Analytical Facility at Caltech.

"This finding fills a vexing gap in the taxonomy of minerals," adds Oliver Tschauner, an associate research professor at the University of Nevada-Las Vegas who identified the mineral together with Ma.

High-pressure and temperature experiments, as well as seismic data, strongly suggest that (Mg,Fe)SiO3-perovskite—now simply called bridgmanite—is the dominant material in the lower mantle. But since it is impossible to get to the earth's lower mantle, located some 400 miles deep within the planet, and rocks brought to the earth's surface from the lower mantle are exceedingly rare, naturally occurring examples of this material had never been fully described.

That is until Ma and Tschauner began poking around a sample from the Tenham meteorite, a space rock that fell in Australia in 1879.

Because the 4.5 billion-year-old meteorite had survived high-energy collisions with asteroids in space, parts of it were believed to have experienced the high-pressure conditions we see in the earth's mantle. That, scientists thought, made it a good candidate for containing bridgmanite.

Tschauner used synchrotron X-ray diffraction mapping to find indications of the mineral in the meteorite. Ma then examined the mineral and its surroundings with a high-resolution scanning electron microscope and determined the composition of the tiny bridgmanite crystals using an electron microprobe. Next, Tschauner analyzed the crystal structure by synchrotron diffraction. After five years and multiple experiments, the two were finally able to gather enough data to reveal bridgmanite's chemical composition and crystal structure.

"It is a really cool discovery," says Ma. "Our finding of natural bridgmanite not only provides new information on shock conditions and impact processes on small bodies in the solar system, but the tiny bridgmanite found in a meteorite could also help investigations of phase transformation mechanisms in the deep Earth. "

The mineral and the mineral name were approved on June 2 by the International Mineralogical Association's Commission on New Minerals, Nomenclature and Classification. 

The researchers' findings are published in the November 28 issue of Science, in an article titled "Discovery of Bridgmanite, the Most Abundant Mineral in Earth, In a Shocked Meteorite."

Writer: 
Katie Neith
