Programmed to Fold: RNA Origami

Researchers from Aarhus University in Denmark and Caltech have developed a new method for organizing molecules on the nanoscale. Inspired by techniques used for folding DNA origami—first invented by Paul Rothemund, a senior research associate in computation and neural systems in the Division of Engineering and Applied Science at Caltech—the team, which includes Rothemund, has fabricated complicated shapes from DNA's close chemical cousin, RNA.

Unlike DNA origami, whose components are chemically synthesized and then folded in an artificial heating and cooling process, RNA origami are synthesized enzymatically and fold up as they are being synthesized, which takes place under more natural conditions compatible with living cells. These features of RNA origami may allow designer RNA structures to be grown within living cells, where they might be used to organize cellular enzymes into biochemical factories.

"The parts for a DNA origami cannot easily be written into the genome of an organism. An RNA origami, on the other hand, can be represented as a DNA gene, which in cells is transcribed into RNA by a protein machine called RNA polymerase," explains Rothemund.

So far, the researchers have demonstrated their method by designing RNA molecules that fold into rectangles and then further assemble themselves into larger honeycomb patterns. These shapes were chosen because they are easy to recognize under an atomic force microscope, but many other shapes should be realizable.

A paper describing the research appears in the August 15 issue of the journal Science.

"What is unique about the method is that the folding recipe is encoded into the molecule itself, through its sequence," explains first author Cody Geary, a postdoctoral scholar at Aarhus University.

In other words, the sequence of the RNAs defines both the final shape and the order in which different parts of the shape fold. The particular RNA sequences that were folded in the experiment were designed using software called NUPACK, created in the laboratory of Caltech professor Niles Pierce. Both the Rothemund and Pierce labs are funded by a National Science Foundation Molecular Programming Project (MPP) Expeditions in Computing grant.
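The rule that sequence-design software builds on is Watson-Crick base pairing: A pairs with U and G pairs with C, on antiparallel strands. The following is a toy illustration only; it is not NUPACK's API, and it ignores wobble (G-U) pairs, folding kinetics, and everything else real design software must handle:

```python
# Toy sketch of Watson-Crick base pairing in RNA. Illustrative only;
# real sequence-design tools such as NUPACK are far more sophisticated.
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the sequence that pairs with `seq` in antiparallel orientation."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(seq))

def can_pair(strand_a: str, strand_b: str) -> bool:
    """True if the two strands are perfect Watson-Crick partners."""
    return strand_b == reverse_complement(strand_a)

# A short segment and its designed partner:
print(reverse_complement("GGAUC"))  # GAUCC
```

Designing an origami then amounts to choosing sequences so that the intended segments satisfy this pairing relation, in the intended order, while unintended segments do not.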

"Our latest research is an excellent example of how tools developed by one part of the MPP are being used by another," says Rothemund.

"RNA has a richer structural and functional repertoire than DNA, and so I am especially interested in how complex biological motifs with special 3-D geometries or protein-binding regions can be added to the basic architecture of RNA origami," says Geary, who completed his BS in chemistry at Caltech in 2003.

The project began with an extended visit by Geary and corresponding author Ebbe Andersen, also from Aarhus University, to Rothemund's Caltech lab.

"RNA origami is still in its infancy," says Rothemund. "Nevertheless, I believe that RNA origami, because of their potential to be manufactured by cells, and because of the extra functionality possible with RNA, will have at least as big an impact as DNA origami."

Rothemund (BS '94) reported the original method for DNA origami in 2006 in the journal Nature. Since then, the work has been cited over 2,000 times and DNA origami have been made in over 50 labs worldwide for potential applications such as drug delivery vehicles and molecular computing.

"The payoff is that unlike DNA origami, which are expensive and have to be made outside of cells, RNA origami should be able to be grown cheaply in large quantities, simply by growing bacteria with genes for them," he adds. "Genes and bacteria cost essentially nothing to share, and so RNA origami will be easily exchanged between scientists."

 

Writer: Katie Neith
News Type: Research News

Study of Aerosols Stands to Improve Climate Models

Aerosols, tiny particles in the atmosphere, play a significant role in Earth's climate, scattering and absorbing incoming sunlight and affecting the formation and properties of clouds. Currently, the effect that these aerosols have on clouds represents the largest uncertainty among all influences on climate change.

But now researchers from Caltech and the Jet Propulsion Laboratory have provided a global observational study of the effect that changes in aerosol levels have on low-level marine clouds—the clouds that have the largest impact on the amount of incoming sunlight that Earth reflects back into space. The findings appear in the advance online version of the journal Nature Geoscience.

Changes in aerosol levels have two main effects—they alter the amount of clouds in the atmosphere and they change the internal properties of those clouds. Using measurements from several of NASA's Earth-monitoring satellites from August 2006 through April 2011, the researchers quantified these two effects for the first time, drawing on 7.3 million individual data points.

"If you combine these two effects, you get an aerosol influence almost twice that estimated in the latest report from the Intergovernmental Panel on Climate Change," says John Seinfeld, the Louis E. Nohl Professor and professor of chemical engineering at Caltech. "These results offer unique guidance on how warm cloud processes should be incorporated in climate models with changing aerosol levels."

The lead author of the paper, "Satellite-based estimate of global aerosol-cloud radiative forcing by marine warm clouds," is Yi-Chun Chen (Ph.D. '13), a NASA postdoctoral fellow at JPL. Additional coauthors are Matthew W. Christensen of JPL and Colorado State University and Graeme L. Stephens, director of the Center for Climate Sciences at JPL. The work was supported by funding from NASA and the Office of Naval Research.

Writer: Kimm Fesenmaier

Looking Forward to 2020 . . . on Mars

A Q&A With Project Scientist Ken Farley

While the Curiosity rover continues to interrogate Gale Crater on Mars, planning is well under way for its successor—another rover that is currently referred to as Mars 2020. The new robotic explorer, scheduled to launch in 2020, will use much of the same technology (even some of the spare parts Curiosity left behind on Earth) to get to the Red Planet. Once there, it will pursue a new set of scientific objectives including the careful collection and storage (referred to as "caching") of compelling samples that might one day be returned to Earth by a future mission. Today, NASA announced the selection of seven scientific instruments that Mars 2020 will carry with it to Mars.

Ken Farley, Caltech's W.M. Keck Foundation Professor of Geochemistry and chair of the Division of Geological and Planetary Sciences, is serving as project scientist for Mars 2020. We recently sat down with him to talk about the mission and his new role.

 

Congratulations on being selected project scientist for this exciting mission. For those of us who do not know exactly what a project scientist does, can you give us a little overview of the job?

Sure. Conveniently, NASA has a definition, which says that the project scientist is responsible for the overall scientific success of the mission. That's a pretty concise explanation, but it encompasses a lot. My main duty thus far has been helping to define the science needs for equipment that we are going to send to Mars. So while we haven't actually done any science yet, we have had to make a lot of design decisions that are related to the science.

The easiest place to illustrate this is in the discussion of what is necessary, from the science point of view, in terms of the samples that we will cache. We have to consider things like how much mass we need to bring back, what kind of magnetic fields and temperatures the samples are going to be exposed to, and how much contamination of different chemical constituents we can allow. Every one of those questions drives a design decision in how you build the drilling system and the caching system. And if you get those wrong, there's nothing you can do. So there's a lot of thought that has to be put into that, and I convey a lot of that information to the engineers.

Now that we have a science team, I will be helping to facilitate all of its investigations and helping the members to work as a team. MSL [the Mars Science Laboratory, Curiosity's mission] is demonstrating how you have to operate when you have a complex tool (a rover) and a bunch of sensors, and every day you have to figure out what you're going to do to further science. The team has to pull together, pool all of its information, and come up with a plan, so an important part of my job will be figuring out how to manage the team dynamics to keep everybody moving forward and not fragmenting.

 

What aspects of the job were particularly appealing to you?

One of the parts of being a division chair that I have really enjoyed is being engaged with something that's bigger than my own research. And there's definitely a lot of that on 2020. It's a huge undertaking. There are not many science projects of this scale to be associated closely with, so this just seemed like a really good opportunity.

The kinds of questions that 2020 is going after—they're really big questions. You could never answer them on your own. The key objective is about life—is there or was there ever life on Mars, and more broadly what does its presence or absence mean about the frequency and evolution of life within the universe? There's no way you could answer these questions on Earth. The simple reason for that is that Mars is covered by rocks that are of the era in which, at least on our planet, we believe life was evolving. There are almost no rocks left of that age on Earth, and the ones that are left have been really badly beaten up. So Mars is a place where you really stand a chance of answering these questions in a way that you probably can't anywhere else.

It's not the kind of science I'm usually associated with, but the mission is trying to address truly profound scientific questions.

 

As you said, space has not been the focus of your research for most of your career. Can you talk a bit about how a terrestrial geochemist like yourself wound up in this role on a Mars mission?

Several years ago, I participated in a workshop about quantifying martian stratigraphy, which was hosted by the Keck Institute for Space Studies [KISS]. One of the topics that was discussed was geochronology—the dating of rocks and other materials—on other planetary bodies, like Mars. This is important for establishing the history of a planet and is particularly challenging because it requires such exacting measurements. After interacting with some people who are now my JPL collaborators at the workshop, it seemed like we might be able to do something special that would help solve this problem. And we got support from KISS to do a follow-on study.

As I was getting deeper and deeper into thinking about how we could do this on Mars, John Grotzinger (the Fletcher Jones Professor of Geology at Caltech and project scientist for MSL) was conducting the landing-site workshops for MSL. He would say things like, "Oh, it would be really great if we could date this." And we'd agree. Then there was a call for participating scientists on MSL. I had no background whatsoever in this, but I knew there was a mass spectrometer on Curiosity. That's one of the analytical instruments we need to make these dating measurements because it allows us to determine the relative abundances of various isotopes in a sample. Since those isotopes are produced at known rates, their abundances tell us something about the age of the sample. So I wrote a proposal basically saying let's see if we can make Curiosity's mass spectrometer work for this purpose. And it did.
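The dating principle Farley describes reduces to a textbook formula: a radioactive parent isotope decays into a daughter at a known rate, so the daughter-to-parent ratio fixes the elapsed time. A minimal sketch with illustrative numbers only; it assumes a closed system with no initial daughter, and ignores complications (such as the branching decay of potassium-40) that a real K-Ar measurement must handle:

```python
import math

def radiometric_age(parent: float, daughter: float, half_life: float) -> float:
    """Age implied by parent/daughter isotope abundances, assuming no
    initial daughter and a closed system (a textbook simplification)."""
    lam = math.log(2) / half_life              # decay constant
    return math.log(1 + daughter / parent) / lam

# Illustrative numbers only (not from the mission): K-40 decays with a
# half-life of about 1.25 billion years.
age = radiometric_age(parent=1.0, daughter=0.1, half_life=1.25e9)
```

When the daughter abundance equals the parent abundance, the formula returns exactly one half-life, which is a quick sanity check on the arithmetic.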

 

What do you think led to your selection as project scientist?

Although I don't have a long track record in studying Mars, this mission is possibly the first step in bringing samples back to Earth. In order to do that, you have to answer a lot of questions related to geochemistry, which is my specialty. The geochemistry community is not ordinarily thinking about rocks coming back from Mars. I happen to have enough crossover between what I know about Mars from the work I just described and my background from working in geochemistry labs, especially those working with the type of very small samples we might get back from Mars, to be a good fit.

 

Given Curiosity's success on Mars, why is it important and exciting for us to be sending another rover to the Red Planet?

One thing to realize is that the surface of Mars is more or less equivalent in size to the entire continental surface area of Earth, and we've been to just a few points. It's naturally tempting to look at the few places we have been on Mars and draw grand conclusions from them, but you could imagine if you landed in the middle of the Sahara Desert and studied Earth, you would come up with different answers than if you landed in the Amazon, for example. So that's part of it.

But the big thing that distinguishes Mars 2020 is the fact that we are preparing this cache, which is the first step in a process that will hopefully bring samples back to Earth some day. It's very clear that from the science community's point of view, this is a critical motivation for this mission.

 

How has the experience been working on the mission thus far?

I enjoy it very much. It's extremely different to go from a lab group of two or three people to a project that, at the end of the day, is going to have spent $1.5 billion over the next seven or eight years. It's a completely different scale of operation.

I find it really fascinating to see how everything works. I've spent my entire career among scientists. Suddenly transitioning and working with engineers is interesting because their approach and style is completely different. But they're all extremely good at what they do.

It's a lot of fun to work with these people and to face completely new and unexpected challenges. You never know what new thing is going to pop up.

Writer: Kimm Fesenmaier

Biology Made Simpler With "Clear" Tissues

In general, our knowledge of biology—and much of science in general—is limited by our ability to actually see things. Researchers who study developmental problems and disease, in particular, are often limited by their inability to look inside an organism to figure out exactly what went wrong and when.

Now, thanks to techniques developed at Caltech, scientists can see through tissues, organs, and even an entire body. The techniques offer new insight into the cell-by-cell makeup of organisms—and the promise of novel diagnostic medical applications.

"Large volumes of tissue are not optically transparent—you can't see through them," says Viviana Gradinaru (BS '05), an assistant professor of biology at Caltech and the principal investigator whose team has developed the new techniques, which are explained in a paper appearing in the journal Cell. Lipids throughout cells provide structural support, but they also prevent light from passing through the cells. "So, if we need to see individual cells within a large volume of tissue"—within a mouse kidney, for example, or a human tumor biopsy—"we have to slice the tissue very thin, separately image each slice with a microscope, and put all of the images back together with a computer. It's a very time-consuming and error-prone process, especially if you are trying to map long axons or sparse cell populations such as stem cells or tumor cells," she says.

The researchers came up with a way to circumvent this long process by making an organism's entire body clear, so that it can be peered through—in 3-D—using standard optical methods such as confocal microscopy.

The new approach builds off a technique known as CLARITY that was previously developed by Gradinaru and her collaborators to create a transparent whole-brain specimen. With the CLARITY method, a rodent brain is infused with a solution of lipid-dissolving detergents and hydrogel—a water-based polymer gel that provides structural support—thus "clearing" the tissue but leaving its three-dimensional architecture intact for study.

The refined technique optimizes the CLARITY concept so that it can be used to clear other organs besides the brain, and even whole organisms. By making clever use of an organism's own network of blood vessels, Gradinaru and her colleagues—including scientific researcher Bin Yang and postdoctoral scholar Jennifer Treweek, coauthors on the paper—can quickly deliver the lipid-dissolving hydrogel and chemical solution throughout the body.

Gradinaru and her colleagues have dubbed this new technique PARS, or perfusion-assisted agent release in situ.

Once an organ or whole body has been made transparent, standard microscopy techniques can be used to easily look through a thick mass of tissue to view single cells that are genetically marked with fluorescent proteins. Even without such genetically introduced fluorescent proteins, however, the PARS technique can be used to deliver stains and dyes to individual cell types of interest. When whole-body clearing is not necessary, the method works just as well on individual organs by using a technique called PACT, short for passive clarity technique.

To find out if stripping the lipids from cells also removes other potential molecules of interest—such as proteins, DNA, and RNA—Gradinaru and her team collaborated with Long Cai, an assistant professor of chemistry at Caltech, and his lab. The two groups found that strands of RNA are indeed still present and can be detected with single-molecule resolution in the cells of the transparent organisms.

The Cell paper focuses on the use of PACT and PARS as research tools for studying disease and development in research organisms. However, Gradinaru and her UCLA collaborator Rajan Kulkarni have already found a diagnostic medical application for the methods. Using the techniques on a biopsy from a human skin tumor, the researchers were able to view the distribution of individual tumor cells within a tissue mass. In the future, Gradinaru says, the methods could be used in the clinic for the rapid detection of cancer cells in biopsy samples.

The ability to make an entire organism transparent while retaining its structural and genetic integrity has broad-ranging applications, Gradinaru says. For example, the neurons of the peripheral nervous system could be mapped throughout a whole body, as could the distribution of viruses, such as HIV, in an animal model.

Gradinaru also leads Caltech's Beckman Institute BIONIC center for optogenetics and tissue clearing and plans to offer training sessions to researchers interested in learning how to use PACT and PARS in their own labs.

"I think these new techniques are very practical for many fields in biology," she says. "When you can just look through an organism for the exact cells or fine axons you want to see—without slicing and realigning individual sections—it frees up the time of the researcher. That means there is more time to answer the big questions, rather than spending time on menial jobs."


Future Electronics May Depend on Lasers, Not Quartz

Nearly all electronics require devices called oscillators that create precise frequencies—frequencies used to keep time in wristwatches or to transmit reliable signals to radios. For nearly 100 years, these oscillators have relied upon quartz crystals to provide a frequency reference, much like a tuning fork is used as a reference to tune a piano. However, future high-end navigation systems, radar systems, and even possibly tomorrow's consumer electronics will require references beyond the performance of quartz.

Now, researchers in the laboratory of Kerry Vahala, the Ted and Ginger Jenkins Professor of Information Science and Technology and Applied Physics at Caltech, have developed a method to stabilize microwave signals in the range of gigahertz, or billions of cycles per second—using a pair of laser beams as the reference, in lieu of a crystal.

Quartz crystals "tune" oscillators by vibrating at relatively low frequencies—those that fall at or below the range of megahertz, or millions of cycles per second, like radio waves. However, quartz crystals are so good at tuning these low frequencies that years ago, researchers were able to apply a technique called electrical frequency division that could convert higher-frequency microwave signals into lower-frequency signals, and then stabilize these with quartz. 

The new technique, which Vahala and his colleagues have dubbed electro-optical frequency division, builds off of the method of optical frequency division, developed at the National Institute of Standards and Technology more than a decade ago. "Our new method reverses the architecture used in standard crystal-stabilized microwave oscillators—the 'quartz' reference is replaced by optical signals much higher in frequency than the microwave signal to be stabilized," Vahala says.

Jiang Li—a Kavli Nanoscience Institute postdoctoral scholar at Caltech and one of two lead authors on the paper, along with graduate student Xu Yi—likens the method to a gear chain on a bicycle that translates pedaling motion from a small, fast-moving gear into the motion of a much larger wheel. "Electrical frequency dividers used widely in electronics can work at frequencies no higher than 50 to 100 GHz. Our new architecture is a hybrid electro-optical 'gear chain' that stabilizes a common microwave electrical oscillator with optical references at much higher frequencies in the range of terahertz or trillions of cycles per second," Li says.  
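In the ideal case, the gear-chain arithmetic is simple: dividing a reference by a factor n scales the frequency down by n and the phase-noise power down by n squared, i.e. by 20·log10(n) decibels. A rough sketch with hypothetical numbers (the experiment's actual division ratios and reference frequencies are not reproduced here):

```python
import math

def divided_frequency(f_reference_hz: float, n: int) -> float:
    """Output frequency of an ideal divide-by-n frequency divider."""
    return f_reference_hz / n

def phase_noise_reduction_db(n: int) -> float:
    """An ideal divide-by-n reduces phase-noise power by n^2,
    i.e. by 20*log10(n) decibels."""
    return 20 * math.log10(n)

# Hypothetical example: dividing a 2 THz optical difference frequency
# down to a 10 GHz microwave signal.
n = 200
f_microwave = divided_frequency(2.0e12, n)  # 1.0e10 Hz, i.e. 10 GHz
```

This is why starting from a reference far above the output frequency pays off: the larger the division ratio, the more the reference's noise is suppressed in the divided output.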

The optical reference used by the researchers is a laser that, to the naked eye, looks like a tiny disk. At only 6 mm in diameter, the device is very small, making it particularly useful in compact photonics devices—electronic-like devices powered by photons instead of electrons, says Scott Diddams, physicist and project leader at the National Institute of Standards and Technology and a coauthor on the study.

"There are always tradeoffs between the highest performance, the smallest size, and the best ease of integration. But even in this first demonstration, these optical oscillators have many advantages; they are on par with, and in some cases even better than, what is available with widespread electronic technology," Vahala says.

The new technique is described in a paper that will be published in the journal Science on July 18. Other authors on this paper include Hansuek Lee, who is a visiting associate at Caltech. The work was sponsored by DARPA's ORCHID and PULSE programs; the Caltech Institute for Quantum Information and Matter (IQIM), an NSF Physics Frontiers Center with support of the Gordon and Betty Moore Foundation; and the Caltech Kavli NanoScience Institute.


Corals Provide Clues for Climate Change Research

Just as growth rings can offer insight into climate changes occurring during the lifespan of a tree, corals have much to tell about changes in the ocean. At Caltech, climate scientists Jess F. Adkins and Nivedita Thiagarajan use manned submersibles, such as Alvin, operated by the Woods Hole Oceanographic Institution, to dive thousands of meters below the surface to collect these specimens—and to shed new light on the connection between variance in carbon dioxide (CO2) levels in the deep ocean and historical glacial cycles.

A paper describing the research appears in the July 3 issue of Nature.

It has long been known that ice sheets wax and wane as the concentration of CO2 in the atmosphere decreases and increases. Adkins and his team believe that the deep ocean—which stores 60 times more inorganic carbon than the atmosphere does—must play a vital role in this variance.

To investigate this, the researchers analyzed the calcium carbonate skeletons of corals collected from deep in the North Atlantic Ocean. The corals grew 11,000 to 18,000 years ago, building their skeletons from CO2 dissolved in the ocean.

"We used a new technique that has been developed at Caltech, called clumped isotope thermometry, to determine what the temperature of the ocean was in the location where the coral grew," says Thiagarajan, the Dreyfus Postdoctoral Scholar in Geochemistry at Caltech and lead author of the paper. "We also used radiocarbon dating and uranium-series dating to estimate the deep-ocean ventilation rate during this time period." 
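By convention, a radiocarbon age is computed from a sample's measured 14C content relative to the modern standard using the Libby mean life of 8,033 years; comparing a deep-water sample's radiocarbon age with that of the contemporaneous atmosphere then gives an apparent ventilation age. A minimal sketch of that arithmetic (the paper's actual calibration and uranium-series corrections are considerably more involved):

```python
import math

# Conventional value: Libby half-life of 5,568 years divided by ln 2.
LIBBY_MEAN_LIFE = 8033.0  # years

def radiocarbon_age(fraction_modern: float) -> float:
    """Conventional radiocarbon age from the measured 14C content
    relative to the modern standard (fraction_modern in (0, 1])."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

def ventilation_age(deep_water_age: float, atmosphere_age: float) -> float:
    """Apparent ventilation age: how much 'older' a water mass's carbon
    is than the contemporaneous atmosphere's."""
    return deep_water_age - atmosphere_age
```

A sample retaining half its original 14C returns an age of one Libby half-life, about 5,568 years, which is the standard consistency check on the formula.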

The researchers found that the deep ocean started warming before the start of a rapid climate change event about 14,600 years ago in which the last glacial period—or most recent time period when ice sheets covered a large portion of Earth—was in the final stages of transitioning to the current interglacial period.

"We found that a warm-water-under-cold-water scenario developed around 800 years before the largest signal of warming in the Greenland ice cores, called the 'Bølling–Allerød,'" explains Adkins. "CO2 had already been rising in the atmosphere by this time, but we see the deep-ocean reorganization brought on by the potential energy release to be the pivot point for the system to switch from a glacial state, where the deep ocean can hold onto CO2, to an interglacial state, where it lets out CO2."

"Studying Earth's climate in the past helps us understand how different parts of the climate system interact with each other," says Thiagarajan. "Figuring out these underlying mechanisms will help us predict how climate will change in the future." 

Additional authors on the Nature paper, "Abrupt pre-Bølling–Allerød warming and circulation changes in the deep ocean," are geochemist John M. Eiler and graduate student Adam V. Subhas from Caltech, and John R. Southon from UC Irvine. 

Writer: Katie Neith

Neuroeconomists Confirm Warren Buffett's Wisdom

Brain Research Suggests an Early Warning Signal Tips Off Smart Traders

Investment magnate Warren Buffett has famously suggested that investors should try to "be fearful when others are greedy and be greedy only when others are fearful."

That turns out to be excellent advice, according to the results of a new study by researchers at Caltech and Virginia Tech that looked at the brain activity and behavior of people trading in experimental markets where price bubbles formed. In such markets, where price far outpaces actual value, it appears that wise traders receive an early warning signal from their brains—a warning that makes them feel uncomfortable and urges them to sell, sell, sell.

"Seeing what's going on in people's brains when they are trading suggests that Buffett was right on target," says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at Caltech.  

That is because in their experimental markets, Camerer and his colleagues found two distinct types of activity in the brains of participants—one that made a small fraction of participants nervous and prompted them to sell their experimental shares even as prices were on the rise, and another that was much more common and made traders behave in a greedy way, buying aggressively during the bubble and even after the peak. The lucky few who received the early warning signal got out of the market early, ultimately causing the bubble to burst, and earned the most money. The others displayed what former Federal Reserve chairman Alan Greenspan called "irrational exuberance" and lost their proverbial shirts.

A paper about the experiment and the team's findings appears this week in the journal Proceedings of the National Academy of Sciences. Alec Smith, the lead author on the paper, is a visiting associate at Caltech. Additional coauthors are from the Virginia Tech Carilion Research Institute.

The researchers set up a simple experimental market in which they were able to control the fundamental, or actual, value of a traded risky asset. In each of 16 sessions, about 20 participants were told how an on-screen trading market worked and were given 100 units of experimental currency and six shares of the risky asset. Then, over the course of 50 trading periods, the traders indicated by pressing keyboard buttons whether they wanted to buy, sell, or hold shares at various prices.  

Given the way the experiment was set up, the fundamental price of the risky asset was 14 currency units. Yet in many sessions, the traded price rose well above that—sometimes three to five times as high—creating bubble markets that eventually crashed.

During the experiment, two or three additional subjects per session also participated in the market while having their brains scanned by a functional magnetic resonance imaging (fMRI) machine. In fMRI, blood flow is monitored and used as a proxy for brain activation. If a brain region shows a relatively high level of blood oxygenation during a task, that region is thought to be particularly active.

At the end of the experiment, the researchers first sought to understand the behavioral data—the choices the participants made and the resulting market activity—before analyzing the fMRI scans.

"The first thing we saw was that even in an environment where you don't have squawking heads and all kinds of other information being fed to people, you can get bubbles just through pricing dynamics that occur naturally," says Camerer. This finding is at odds with what some economists have held—that bubbles are rare or are caused by misinformation or hype.

Next, the researchers divided the participants into three categories based on their earnings during their 50 trading periods—low, medium, and high earners. They found that the low earners tended to be momentum buyers who started buying as prices went up and then kept buying even as prices tanked. The middle-of-the-road folks didn't take many risks at all and, as a result, neither made nor lost the most money. And the traders who earned the most bought early and sold when prices were on the rise.

"The high-earning traders are the most interesting people to us," Camerer says. "Emotionally, they have to do something really hard: sell into a rising market. We thought that something must be going on in their brains that gives them an early warning signal."

To reveal what was actually occurring in the brains of the subjects—and the nature of that warning signal—Camerer and his colleagues analyzed the fMRI scans. Using this data, the researchers first looked for an area of the brain that was unusually active when the results screen came up that told participants their outcome for the last trading period. It turned out that a region called the nucleus accumbens (NAcc) lit up at that time in all participants, showing more activity when shares were bought or sold. The NAcc is associated with reward processing—it lights up when people are given expected rewards such as money or juice or a smile, for example. So it was not particularly surprising to see that the NAcc was activated when traders found out how their gambles paid off.

What was surprising, though, was that low earners were very sensitive to activity in the NAcc: when they experienced the most activity in the NAcc, they bought a lot of the risky asset. "That is a correlation we can call irrational exuberance," Camerer says. "Exuberance is the brain signal, and the irrational part is buying so many shares. The people who make the most money have low sensitivity to the same brain signal. Even though they're having the same mental reaction, they're not translating it into buying as aggressively."
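The "sensitivity" Camerer describes is, at bottom, a relationship between a trader's reward-region activity and how much that trader subsequently buys. As a toy sketch of that kind of analysis (invented data; the paper's actual statistical models are more elaborate), a plain Pearson correlation:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented data: per-period reward-region signal vs. shares bought
# by one hypothetical trader.
nacc_signal = [0.1, 0.4, 0.2, 0.8, 0.6]
shares_bought = [0, 3, 1, 6, 4]
sensitivity = pearson_r(nacc_signal, shares_bought)  # close to 1 here
```

In these terms, a low earner is a trader whose buying tracks the brain signal tightly (a correlation near 1), while the high earners show a much weaker coupling between signal and purchases.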

Returning to the question of the high earners and their early warning signal, the researchers hypothesized that a part of the brain called the insular cortex, or insula, might be serving as that bellwether. The insula was a good candidate because previous studies had linked it to financial uncertainty and risk aversion. It is also known to reflect negative emotions associated with bodily sensations such as being shocked or smelling something disgusting, or even with feelings of social discomfort like those that come with being treated unfairly or being excluded.

Looking at the brain data of the high earners, the researchers found that insula activity did indeed increase shortly before the traders switched from buying to selling. And again, Camerer notes, "The prices were still going up at that time, so they couldn't be making pessimistic predictions just based on the recent price trend. We think this is a real warning signal."

Meanwhile, in the low earners, insula activity actually decreased, perhaps allowing their irrational exuberance to continue unchecked.  

Read Montague, director of the Human Neuroimaging Laboratory at the Virginia Tech Carilion Research Institute and one of the paper's senior authors, emphasizes the importance of group dynamics, or group thinking, in the study. "Individual human brains are indeed powerful alone, but in groups we know they can build bridges, spacecraft, microscopes, and even economic systems," he says. "This is one of the next frontiers in neuroscience—understanding the social mind."

Additional coauthors on the paper, "Irrational exuberance and neural warning signals during endogenous experimental market bubbles," include Terry Lohrenz and Justin King of Virginia Tech Carilion Research Institute in Roanoke, Virginia. Montague is also a professor at the Wellcome Trust Centre for Neuroimaging at University College London. The work was supported by the National Science Foundation, the Betty and Gordon Moore Foundation, and the Lipper Family Foundation.

Writer: 
Kimm Fesenmaier
Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

Sorting Out Emotions

Evaluating another person's emotions based on facial expressions can sometimes be a complex task. As it turns out, this process isn't so easy for the brain to sort out either. Building on previous studies targeting the amygdala, a brain region known to be important for processing emotional reactions, a team of researchers from Caltech, Cedars-Sinai Medical Center, and Huntington Memorial Hospital in Pasadena has found that some brain cells recognize emotions based on the viewer's preconceptions rather than on the true emotion being expressed. In other words, it's possible for the brain to be biased. The team recorded these responses from single neurons using existing electrodes—indicated by the arrows in the MRI image at right—placed in the brains of patients who were being treated for epilepsy. Participants were shown images of partially obscured faces showing either happiness or fear (see secondary image) and were asked to guess the emotion being shown. According to the researchers, the brain responded similarly whether or not the patient guessed the correct emotion.

"These are very exciting findings suggesting that the amygdala doesn't just respond to what we see out there in the world, but rather to what we imagine or believe about the world," says Ralph Adolphs, the Bren Professor of Psychology and Neuroscience at Caltech and coauthor of a paper that discusses the team's study.  "It's particularly interesting because the amygdala has been linked to so many psychiatric diseases, ranging from anxiety to depression to autism.  All of those diseases are about experiences happening in the minds of the patients, rather than objective facts about the world that everyone shares."

What's next?  Says Shuo Wang, a postdoctoral fellow at Caltech and first author of the paper,  "Of course, the amygdala doesn't accomplish anything by itself.  What we need to know next is what happens elsewhere in the brain,  so we need to record not only from the amygdala, but also from other brain regions with which the amygdala is connected."

The paper, whose authors also include Caltech postdoctoral scholar Oana Tudusciuc, was published on June 30 in the Early Edition of the Proceedings of the National Academy of Sciences.

Writer: 
Katie Neith

Kip Thorne Discusses First Discovery of Thorne-Żytkow Object

In 1975, Kip Thorne (BS '62), now the Richard P. Feynman Professor of Theoretical Physics, Emeritus, and then-Caltech postdoctoral fellow Anna Żytkow sought the answer to an intriguing question: Would it be possible to have a star with a neutron star as its core—that is, a hot, dense star composed entirely of neutrons sitting inside another, more traditional star? Thorne and Żytkow predicted that if a neutron star were at the core of another star, the host star would be a red supergiant—an extremely large, luminous star—and that such red supergiants would have peculiar abundances of elements. Researchers who followed this line of inquiry referred to this hypothetical type of star as a Thorne-Żytkow object (TŻO).

Nearly 40 years later, astronomers believe they may have found such an object: a star labeled HV 2112 and located in the Small Magellanic Cloud, a dwarf galaxy that is a near neighbor of the Milky Way and visible to the naked eye. HV 2112 was identified as a TŻO candidate with the 6.5-meter Magellan Clay telescope on Las Campanas in Chile by Emily Levesque (University of Colorado), Philip Massey (Lowell Observatory; BS '75, MS '75, Caltech), Żytkow (now at the University of Cambridge), and Nidia Morrell (also at the University of Cambridge).

We recently sat down with Thorne to ask how it feels to have astronomers discover something whose existence he postulated decades before.

When you came up with the idea of TŻOs, were you trying to explain anything that had been observed, or was it a simple "what if?" speculation?

It was totally theoretical. We weren't the first people to ask the question either. In the mid-1930s, theoretical physicist George Gamow speculated about these kinds of objects and wondered if even our sun might have a neutron star in its core. That was soon after Caltech's Fritz Zwicky conceived the idea of a neutron star. But Gamow never did anything quantitative with his speculations.

The idea of seriously pursuing what these things might look like was due to Bohdan Paczynski, a superb astrophysicist on the faculty of the University of Warsaw. In the early 1970s, he would shuttle back and forth between Caltech, where he would spend about three months a year, and Warsaw, where he stayed for nine months. He had a real advantage over everybody else during this era when people were trying to understand stellar structure and stellar evolution in depth. Nine months of the year he didn't have a computer available, so he had to think. Then during the three months he was at Caltech, he could compute.

Paczynski was the leading person in the world in understanding the late stages of the evolution of stars. He suggested to his postdoctoral student Anna Żytkow that she look into this idea of stars with neutron cores, and Anna, who came to Caltech for a second postdoc, easily talked me into joining her on the project. I had the expertise in relativity, and she had a lot better understanding of the astrophysics of stars than I did. So it became a very enjoyable collaboration. For me it was a learning process. As one often does as a professor, I learned from working with a superb postdoc who had key knowledge and experience that I did not have.

What were the properties of TŻOs as you and Żytkow theorized them?

We didn't know in advance what they would look like, though we thought—correctly it turns out—that they would be red supergiants. Our calculations showed that if the star was heavier than about 11 suns, it would have a shell of burning material around the neutron core, a shell that would generate new elements as it burned. Convection, the circulation of hot gas inside the star, would reach right into the burning shell and carry the products of burning all the way to the surface of the star long before the burning was complete. This convection, reaching into a burning shell, was unlike anything seen in any other kind of star.

Is this how you get different elements in TŻOs than those ordinarily seen on the surface of a star?

That's right. We could see that the elements produced would be peculiar, but our calculations were not good enough to make this quantitative. In the 1990s, a graduate student of mine named Garrett Biehle (PhD '93) worked out, with considerable reliability, what the products of nuclear burning would be. He predicted unusually large amounts of rubidium and molybdenum; and a bit later Philipp Podsiadlowski, Robert Cannon, and Martin Rees at the University of Cambridge showed there would also be a lot of lithium.

It is excess rubidium, molybdenum, and lithium that Żytkow and her colleagues have found in HV 2112.

Does that mean TŻOs are fairly easy to recognize with a spectrographic analysis, which can determine the elements of a star?

No, it's not easy! TŻOs should have a unique signature, but these objects would be pretty rare.

What are the circumstances in which a TŻO would develop?

As far as we understand it, the most likely way these things form is that a neutron star cannibalizes the core of a companion star. You have a neutron star orbiting around a companion star, and they spiral together, and the neutron star takes up residence in the core of the companion. Bohdan Paczynski and Jerry Ostriker, an astrophysicist at Princeton University, speculated this would happen way back in 1975 while I was doing my original work with Żytkow, and subsequent analyses have confirmed it.

The other way a TŻO might develop is from the supernova explosion that makes the neutron star. In a supernova that creates a neutron star, matter is ejected in an asymmetric way. Occasionally these kicks resulting from the ejection of matter will drive the neutron star into the interior of the companion star, according to analyses by Peter Leonard and Jack Hills at Los Alamos, and Rachel Dewey at JPL.

Is there anything other than peculiar element abundances that would indicate a TŻO? Does it look different from other red supergiant stars?

TŻOs are the most highly luminous of red supergiant stars but not so much so that you could pick them out from the crowd: all red supergiants are very bright. I think the only way to identify them is through these element abundances.

Are you convinced that this star discovered by Żytkow and her colleagues is a TŻO?

The evidence that HV 2112 is a TŻO is strong but not ironclad. Certainly it's by far the best candidate for a TŻO that anyone has seen, but additional confirmation is needed.

How does it feel to hear that something you imagined on paper so long ago has been seen out in the universe?

It's certainly satisfying. It's an area of astrophysics that I dipped into briefly and then left. That's one of the lovely things about being a theorist: you can dip into a huge number of different areas. One of the things I've most enjoyed about my career is moving from one area to another and learning new astrophysics. Anna Żytkow deserves the lion's share of the credit for this finding. She pushed very hard on observers to get some good telescope time. It's her tenacity more than anything else that made this happen.

What are you working on now that you are retired?

I'm an executive producer of the film Interstellar, directed by Christopher Nolan and based in part on the science I've done during my Caltech career. Greater secrecy surrounds Interstellar than most any movie that's been done in Hollywood. I'm not allowed to talk about it, but let's just say that I've been spending a lot of my time on it in the last year. And I've recently finished writing a book about the science in Interstellar.

The other major project I'm wrapping up is a textbook that I've written with Roger Blandford [formerly a professor at Caltech; now on the faculty at Stanford]: Modern Classical Physics. It's based on a course that Roger or I taught every other year at Caltech from 1980 until my retirement in 2009. It covers fluid mechanics, elasticity, optics, statistical physics, plasma physics, and curved space-time—that is, everything in classical physics that any PhD physicist should be familiar with, but usually isn't. This week we delivered the manuscript to the copy editor. After 34 years of developing this monumental treatise/textbook, it's quite a relief.

I'm also working with some of my former students and postdocs on trying to understand the nonlinear dynamics of curved space-time. For this we gain insights from numerical relativity: simulations of the collisions of spinning black holes. But I've had to shelve this work for the past half year due to the pressures of the movie and books. I hope to return to it soon.

Writer: 
Cynthia Eller

Watching Nanoscale Fluids Flow

At the nanoscale, where objects are measured in billionths of meters and events transpire in trillionths of seconds, things do not always behave as our experiences with the macro-world might lead us to expect. Water, for example, seems to flow much faster within carbon nanotubes than classical physics says should be possible. Now imagine trying to capture movies of these almost imperceptibly small nanoscale movements.

Researchers at Caltech have now done just that by applying a new imaging technique called four-dimensional (4D) electron microscopy to the nanofluid dynamics problem. In a paper appearing in the June 27 issue of Science, Ahmed Zewail, the Linus Pauling Professor of Chemistry and professor of physics, and Ulrich Lorenz, a postdoctoral scholar in chemistry, describe how they visualized and monitored the flow of molten lead within a single zinc oxide nanotube in real time and space.

The 4D microscopy technique was developed in the Physical Biology Center for Ultrafast Science and Technology at Caltech, created and directed by Zewail to advance understanding of the fundamental physics of chemical and biological behavior. 

In 4D microscopy, a stream of ultra-fast-moving electrons bombards a sample in a carefully timed manner. Each electron scatters off the sample, producing a still image that represents a single moment, just a femtosecond—or a millionth of a billionth of a second—in duration. Millions of the still images can then be stitched together to produce a digital movie of nanoscale motion.
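To get a feel for the scales involved, here is a back-of-the-envelope sketch. The frame spacing, frame count, and playback rate are assumed for illustration and are not taken from the paper:

```python
# Rough arithmetic for the 4D-microscopy timing described above.
# Assumed, illustrative numbers: 1 fs between frames, one million frames,
# 30 fps playback. None of these figures come from the Science paper.
FEMTOSECOND = 1e-15  # seconds: a millionth of a billionth of a second

n_frames = 1_000_000              # "millions of still images"
span = n_frames * FEMTOSECOND     # real time covered at 1 fs per frame

playback_fps = 30
movie_seconds = n_frames / playback_fps  # length of the stitched movie

print(f"{span:.0e} s of real dynamics")               # about a nanosecond
print(f"{movie_seconds / 3600:.1f} h of movie at {playback_fps} fps")
```

Under these assumptions, a nanosecond of nanoscale motion stretches into roughly nine hours of video—a slowdown of about thirteen orders of magnitude, which is what makes the femtosecond stills viewable as a movie at all.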

In the new work, Lorenz and Zewail used single laser pulses to melt the lead cores of individual zinc oxide nanotubes and then, using 4D microscopy, captured how the hot pressurized liquid moved within the tubes—sometimes splitting into multiple segments, producing tiny droplets on the outside of the tube, or causing the tubes to break. Lorenz and Zewail also measured the friction experienced by the liquid in the nanotube.

"These observations are particularly significant because visualizing the behavior of fluids at the nanoscale is essential to our understanding of how materials and biological channels effectively transport liquids," says Zewail. In 1999, Zewail won the Nobel Prize for his development of femtosecond chemistry.

The paper is titled "Observing liquid flow in nanotubes by 4D electron microscopy." The work was supported by the National Science Foundation, the Air Force Office of Scientific Research, and the Gordon and Betty Moore Foundation. Lorenz was partially supported by a fellowship from the Swiss National Science Foundation.

Writer: 
Kimm Fesenmaier
