Probing the Transforming World of Neutrinos

Every second, trillions of neutrinos travel through your body unnoticed. Neutrinos are among the most abundant particles in the universe, but they are difficult to study because they very rarely interact with matter. To find traces of these elusive particles, researchers from Caltech have collaborated with 39 other institutions on an experiment called NuMI Off-Axis Electron Neutrino Appearance, or NOvA, building a 14,000-ton detector the size of two basketball courts in northern Minnesota. The experiment began full operation in November 2014, and the collaboration published its first results in Physical Review Letters this month.

The experiment aims to observe neutrino oscillations—or the conversion of one type of neutrino into another—to learn about the subatomic composition of the universe. There are three different types, or "flavors," of neutrinos—muon-, tau-, and electron-type. The NOvA experiment has made successful detections of the transformation of muon-type neutrinos into electron-type neutrinos. Discovering more about the frequency and nature of neutrino oscillations is an important step to determining the masses of different types of neutrinos, a crucial unknown component in every cosmological model of the universe.

Though neutrinos rarely interact with matter, roughly one in every 10 billion neutrinos that passes through the detector will collide with an atom inside it. To observe these collisions, a beam of neutrinos is fired at the detector every 1.3 seconds, in 10-microsecond bursts, from Fermilab, 500 miles away near Chicago. The detector is made up of 344,000 cells, each acting like a pixel in a camera and each filled with a liquid scintillator, a chemical that emits light when electrically charged particles pass through it. When a neutrino smashes into an atom of this liquid, it produces a distinctive spray of particles, such as electrons, muons, or protons. As these particles pass through the cells, the scintillator lights up along their paths, allowing scientists to track the particles emerging from the collision.


A muon-type neutrino interaction in the NOvA detector, as viewed by the vertically oriented cells (top panel) and horizontally oriented cells (bottom panel). By using cells oriented both ways, researchers can build a three-dimensional version of the event. The neutrino entered from the left in this image, from the direction of Fermilab. Each colored pixel represents an individual detector cell, with warmer colors corresponding to more observed light and thus more energy deposited by traversing particles. The muon produced in this collision left the long, tell-tale line of active cells along its path. Other particles emanating from the interaction point are also visible. Credit: NOvA Collaboration

"Each type of neutrino leaves a particular signature when it interacts in the detector," says Ryan Patterson (BS '00), an assistant professor of physics and the leader of NOvA's data-analysis team. "Fermilab makes a stream of almost exclusively muon-type neutrinos. If one of these hits something in our detector, we will see the signatures of a particle called a muon. However, if an electron-type neutrino interacts in our detector, we see the signatures of an electron."

Because the beam of neutrinos coming from Fermilab is designed to produce almost entirely muon-type neutrinos, there is a high probability that any signatures of electron-type neutrinos come from a muon-type neutrino that has undergone a transforming oscillation.

Researchers estimated that if oscillations were not occurring, 201 muon-type neutrinos would have been measured over the initial data-taking period, which ended in May 2015. But during this first data-collection run, NOvA saw the signatures of only 33 muon-type neutrinos—suggesting that muon-type neutrinos were disappearing because some had changed type. The detector also measured six electron-type neutrinos, when only one of this type would be expected if oscillations were not occurring.
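
Put another way, the headline numbers amount to simple ratios. Here is a minimal back-of-envelope sketch in Python using only the counts quoted above; it is illustrative arithmetic, not the collaboration's statistical analysis.

```python
# Back-of-envelope comparison of observed counts vs. no-oscillation
# expectations, using only the numbers quoted in the article
# (not NOvA's full statistical analysis).
expected_mu_no_osc = 201   # muon-type events expected without oscillations
observed_mu = 33           # muon-type events actually seen
expected_e_no_osc = 1      # electron-type events expected without oscillations
observed_e = 6             # electron-type events actually seen

mu_survival = observed_mu / expected_mu_no_osc
print(f"Muon-neutrino events seen: {mu_survival:.0%} of the no-oscillation "
      f"expectation ({observed_mu} vs. {expected_mu_no_osc})")
print(f"Electron-neutrino events seen: {observed_e}x the no-oscillation "
      f"expectation of {expected_e_no_osc}")
```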

"We see a large rate for this transition, much higher than it needed to be, given our current knowledge," Patterson says. "These initial data are giving us exciting clues already about the spectrum of neutrino masses."

The Caltech NOvA team led the research and development of the detector elements, with the goal of making each detector cell sensitive enough to pick out faint particle signals over background noise. The team designed the individual detector elements to operate at -15 degrees Celsius to keep noise—aberrant vibrations and other spurious signals in the data—to a minimum, and also built structures to remove the condensation that can occur at such low temperatures. By the end of construction in 2014, all 12,000 detector arrays, each serving 32 cells, had been built at Caltech.

"The spatial resolution on a detector of this size is unprecedented," Patterson says. "The whole detector is highly 'active'—which means that most of it is actually capable of detecting particles. We have tried to minimize the amount of 'dead' material, like support structures. Additionally, although the different types of neutrinos leave different signatures, these signatures can look similar—so we need as much discrimination power as we can get."

Discovering more about the nature of neutrino oscillations gives important insights into the subatomic world and the evolution of the universe.

"We know that two of the neutrinos are similar in mass, and that a third has a rather different mass from the other two. But we still do not know whether this separated mass is larger or smaller than the other two," Patterson says. Through precise study of neutrino oscillations with NOvA, researchers hope to solve this mass-ordering mystery. "The neutrino mass ordering has connections throughout physics, from the growth of structure in the universe to the behavior of particles at inaccessibly high energies," he says, with NOvA unique among operating experiments because of its sensitivity to this mass ordering.

In the future, researchers at NOvA plan to determine if antineutrinos oscillate at the same rate as neutrinos—that is, to see if neutrinos and antineutrinos behave symmetrically. If NOvA finds that they do not, this discovery could, in turn, help reveal why today the amount of matter in the universe is so much greater than the amount of antimatter, whereas in the early universe, the proportions of the two were balanced.

"These first results demonstrate that NOvA is operating beautifully and that we have a rich physics program ahead of us," Patterson says.

Is Risk-Taking Behavior Contagious?

Why do we sometimes decide to take risks and other times choose to play it safe? In a new study, Caltech researchers explored the neural mechanisms of one possible explanation: a contagion effect.

The work is described in the March 21 online early edition of the Proceedings of the National Academy of Sciences.

In the study, led by John O'Doherty, professor of psychology and director of the Caltech Brain Imaging Center, 24 volunteers repeatedly participated in three types of trials. In a "Self" trial, participants chose between taking a guaranteed $10 and making a risky gamble with a potentially higher payoff. In an "Observe" trial, participants watched the risk-taking behavior of a peer (in practice, a computer algorithm trained to behave like a peer), allowing them to learn how often the peer took a risk. And in a "Predict" trial, participants were asked to predict the risk-taking tendencies of an observed peer, earning a cash prize for a correct prediction. Notably, participants never observed the outcomes of the gambles in these trials, which prevented them from learning anything further about the gambles themselves.

O'Doherty and his colleagues found that the participants were much more likely to choose the risky gamble in the "Self" trial when they had previously observed a risk-taking peer in the "Observe" trial. The researchers noticed that after the subjects observed the actions of a peer, their preferences for risk-taking or risk-averse behaviors began to reflect those of the observed peer—a so-called contagion effect. "By observing others behaving in a risk-seeking or risk-averse fashion, we become in turn more or less prone to risky behavior," says Shinsuke Suzuki, a postdoctoral scholar in neuroscience and first author of the study.
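
To make the idea of "contagion" concrete, here is a deliberately simplified toy update rule (a hypothetical illustration only, not the computational model used in the study): a participant's own probability of gambling drifts partway toward the gamble rate observed in the peer.

```python
# Toy illustration of a behavioral contagion effect (hypothetical update rule,
# not the model fit in the paper): after watching a peer, a participant's own
# probability of choosing the risky gamble drifts toward the peer's rate.
def updated_gamble_probability(own_prob, peer_gamble_rate, contagion_weight=0.3):
    """contagion_weight is a made-up parameter in [0, 1]: 0 = no contagion,
    1 = fully adopting the peer's observed behavior."""
    return own_prob + contagion_weight * (peer_gamble_rate - own_prob)

# A participant who gambles 20% of the time becomes more risk-prone after
# observing a risk-seeking peer, and slightly more cautious after a
# risk-averse one.
print(round(updated_gamble_probability(0.2, 0.8), 2))   # 0.38
print(round(updated_gamble_probability(0.2, 0.1), 2))   # 0.17
```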

To look for indications of risk-taking behavior in specific brain regions of subjects participating in the trials, the Caltech team used functional magnetic resonance imaging (fMRI), which detects brain activity.

By combining computational modeling of the data from the "Self" behavioral trials with the fMRI data, the researchers determined that a region of the brain called the caudate nucleus responds to the degree of risk in the gamble; for example, a riskier gamble resulted in a higher level of observed activity in the caudate nucleus, while a less risky gamble resulted in a lower level of activity. Additionally, the more likely the participants were to make a gamble, the more sensitively activity in the caudate nucleus responded to risk. "This showed that, in addition to the behavioral shift, the neural processing of risk in the caudate is also altered. Also, both the behavioral and neural responses to taking risks can be changed through passively observing the behavior of others," Suzuki says.

The "Predict" behavioral trials were designed to test whether a participant could also learn and predict the risk-taking preferences of an observed peer. Indeed, the researchers found that the participants could successfully predict these preferences—with the learning process occurring even faster if the participant's risk-taking preferences mirrored those of the peer. Furthermore, the fMRI data collected during the "Observe" trial showed that a part of the brain called the dorsolateral prefrontal cortex (dlPFC) was active when participants were learning about others' attitudes toward risk.

The researchers also found differences among participants in functional connectivity between the caudate nucleus and the dlPFC that were related to the strength of the contagion effect—meaning that these two brain regions somehow work together to make a person more or less susceptible to the contagiousness of risk-taking behavior. The work provides an explanation of how our own risk-taking behaviors can be influenced simply by observing the behaviors of others. This study, Suzuki says, is the first to demonstrate that a neural response to risk is altered in response to changes in risk-taking behavior.

"Our findings provide insight into how observation of others' risky behavior affects our own attitude toward risk," Suzuki says—which might help explain the susceptibility of people to risky behavior when observing others behaving in a risky manner, such as in adolescent peer groups. In addition, the findings might offer insight into the formation and collapse of financial bubbles. "The tendency of financial markets to collectively veer from bull markets to bear markets and back again could arise, in part, due to the contagion of observing the risk-seeking or risk-averse investment behaviors of other market participants," he says.

"The findings reported in this paper are part of a broader research goal at Caltech, in which we are trying to understand how the brain can learn from other people and make decisions in a social context," O'Doherty says. "Ultimately, if we can understand how our brains function in social situations, this should also enable us to better understand how brain circuits can go awry, shedding light on social anxiety, autism, and other social disorders."

The paper is titled, "Behavioral contagion during learning about another agent's risk-preferences acts on the neural representation of decision-risk." In addition to Suzuki and O'Doherty, other Caltech coauthors include instructional assistant Emily Jensen and visiting associate in finance Peter Bossaerts. The work was funded by a Japan Society for the Promotion of Science Postdoctoral Fellowship for Research Abroad and the Caltech Conte Center for the Neurobiology of Social Decision Making, which is supported by the National Institute of Mental Health.

Nanoparticle-Based Cancer Therapies Shown to Work in Humans

A team of researchers led by Caltech scientists has shown that nanoparticles can target tumors while avoiding adjacent healthy tissue in human cancer patients.

"Our work shows that this specificity, as previously demonstrated in preclinical animal studies, can in fact occur in humans," says study leader Mark E. Davis, the Warren and Katharine Schlinger Professor of Chemical Engineering at Caltech. "The ability to target tumors is one of the primary reasons for using nanoparticles as therapeutics to treat solid tumors."

The findings, published online the week of March 21 in the journal Proceedings of the National Academy of Sciences, demonstrate that nanoparticle-based therapies can act as a "precision medicine" for targeting tumors while leaving healthy tissue intact.

In the study, Davis and his colleagues examined gastric tumors from nine human patients both before and after infusion with a drug—camptothecin—that was chemically bound to nanoparticles about 30 nanometers in size.

"Our nanoparticles are so small that if one were to increase the size to that of a soccer ball, the increase in size would be on the same order as going from a soccer ball to the planet Earth," says Davis, who is also a member of the City of Hope Comprehensive Cancer Center in Duarte, California, where the clinical trial was conducted.

The team found that 24 to 48 hours after the nanoparticles were administered, they had localized in the tumor tissues and released their drug cargo, and the drug had had the intended biological effects of inhibiting two proteins that are involved in the progression of the cancer. Equally important, both the nanoparticles and the drug were absent from healthy tissue adjacent to the tumors.

The nanoparticles are designed to be flexible delivery vehicles. "We can attach different drugs to the nanoparticles, and by changing the chemistry of the bond linking the drug to the nanoparticle, we can alter the release rate of the drug to be faster or slower," says Andrew Clark, a graduate student in Davis's lab and the study's first author.

Davis says his team's findings suggest that a phenomenon known as the enhanced permeability and retention (EPR) effect is at work in humans. In the EPR effect, abnormal blood vessels that are "leakier" than normal blood vessels in healthy tissue allow nanoparticles to preferentially concentrate in tumors. Until now, the existence of the EPR effect has been conclusively proven only in animal models of human cancers.

"Our results don't prove the EPR effect in humans, but they are completely consistent with it," Davis says.

The findings could also help pave the way toward more effective cancer drug cocktails that can be tailored to fight specific cancers and that leave patients with fewer side effects.

"Right now, if a doctor wants to use multiple drugs to treat a cancer, they often can't do it because the cumulative toxic effects of the drugs would not be tolerated by the patient," Davis says. "With targeted nanoparticles, you have far fewer side effects, so it is anticipated that a drug combination can be selected based on the biology and medicine rather than the limitations of the drugs."

These nanoparticles are currently being tested in a number of phase-II clinical trials. (Information about trials of the nanoparticles, denoted CRLX101, is available at www.clinicaltrials.gov).

In addition to Davis and Clark, other coauthors on the study, entitled "CRLX101 nanoparticles localize in human tumors and not in adjacent, nonneoplastic tissue after intravenous dosing," include Devin Wiley (MS '11, PhD '13) and Jonathan Zuckerman (PhD '12); Paul Webster of the Oak Crest Institute of Science; Joseph Chao and James Lin at City of Hope; and Yun Yen of Taipei Medical University, who was at City of Hope and a visitor in the Davis lab at the initiation of the clinical trial. The research was supported by grants from the National Cancer Institute and the National Institutes of Health and by Cerulean Pharma Inc. Davis is a consultant to and holds stock in Cerulean Pharma Inc. 

An Up-Close View of Bacterial "Motors"

Bacteria are the most abundant form of life on Earth, and they are capable of living in diverse habitats ranging from the surface of rocks to the insides of our intestines. Over millennia, these adaptable little organisms have evolved a variety of specialized mechanisms to move themselves through their particular environments. In two recent Caltech studies, researchers used a state-of-the-art imaging technique to capture, for the first time, three-dimensional views of this tiny complicated machinery in bacteria.

"Bacteria are widely considered to be 'simple' cells; however, this assumption is a reflection of our limitations, not theirs," says Grant Jensen, a professor of biophysics and biology at Caltech and an investigator with the Howard Hughes Medical Institute (HHMI). "In the past, we simply didn't have technology that could reveal the full glory of the nanomachines—huge complexes comprising many copies of a dozen or more unique proteins—that carry out sophisticated functions."

Jensen and his colleagues used a technique called electron cryotomography to study the complexity of these cell motility nanomachines. The technique allows them to capture 3-D images of intact cells at macromolecular resolution—specifically, with a resolution that ranges from 2 to 5 nanometers (for comparison, a whole cell can be several thousand nanometers in diameter). First, the cells are instantaneously frozen so that water molecules do not have time to rearrange to form ice crystals; this locks the cells in place without damaging their structure. Then, using a transmission electron microscope, the researchers image the cells from different angles, producing a series of 2-D images that—like a computed tomography, or CT, scan—can be digitally reconstructed into a 3-D picture of the cell's structures. Jensen's laboratory is one of only a few in the entire world that can do this type of imaging.
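
For readers who want a feel for how a series of 2-D views can be combined into one volume, the sketch below runs a bare-bones, unfiltered back-projection on a synthetic 2-D image over a limited range of tilt angles. It is a conceptual analogy to the CT-style reconstruction described above, not the actual software or workflow used for electron cryotomography.

```python
# Conceptual sketch of tomographic reconstruction by unfiltered back-projection,
# analogous in spirit to combining a tilt series of 2-D views into a 3-D volume.
# This is a toy 2-D example, NOT the cryo-ET reconstruction pipeline itself.
import numpy as np
from scipy.ndimage import rotate

def project(image, angles_deg):
    """Simulate 1-D projections of a 2-D image at the given tilt angles."""
    return [rotate(image, a, reshape=False, order=1).sum(axis=0)
            for a in angles_deg]

def back_project(projections, angles_deg, size):
    """Smear each 1-D projection back across the plane and average."""
    recon = np.zeros((size, size))
    for proj, a in zip(projections, angles_deg):
        smear = np.tile(proj, (size, 1))                 # spread profile over rows
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(angles_deg)

# Synthetic "cell" containing two dense blobs, imaged over a limited tilt
# range (real tilt series are similarly limited, e.g. roughly +/- 60 degrees).
size = 64
y, x = np.mgrid[:size, :size]
phantom = (((x - 20) ** 2 + (y - 32) ** 2) < 36).astype(float) \
        + (((x - 44) ** 2 + (y - 28) ** 2) < 64).astype(float)

angles = np.linspace(-60, 60, 41)
reconstruction = back_project(project(phantom, angles), angles, size)
print("Reconstruction peak (row, col):",
      np.unravel_index(reconstruction.argmax(), reconstruction.shape))
```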

In a paper published in the March 11 issue of the journal Science, the Caltech team used this technique to analyze the cell motility machinery that involves a structure called the type IVa pilus machine (T4PM). This mechanism allows a bacterium to move through its environment in much the same way that Spider-Man travels between skyscrapers; the T4PM assembles a long fiber (the pilus) that attaches to a surface like a grappling hook and subsequently retracts, thus pulling the cell forward.

Although this method of movement is used by many types of bacteria, including several human pathogens, Jensen and his team used electron cryotomography to visualize this cell motility mechanism in intact Myxococcus xanthus—a type of soil bacterium. The researchers found that the structure is made up of several parts, including a pore on the outer membrane of the cell, four interconnected ring structures, and a stemlike structure. By systematically imaging mutants, each of which lacked one of the 10 T4PM core components, and comparing these mutants with normal M. xanthus cells, they mapped the locations of all 10 T4PM core components, providing insights into pilus assembly, structure, and function.

"In this study, we revealed the beautiful complexity of this machine that may be the strongest motor known in nature. The machine lets M. xanthus, a predatory bacterium, move across a field to form a 'wolf pack' with other M. xanthus cells, and hunt together for other bacteria on which to prey," Jensen says.

Another way that bacteria move about their environment is by employing a flagellum—a long whiplike structure that extends outward from the cell. The flagellum is spun by cellular machinery, creating a sort of propeller that motors the bacterium through a substrate. However, cells that must push through the thick mucus of the intestine, for example, need more powerful versions of these motors, compared to cells that only need enough propeller power to travel through a pool of water.

In a second paper, published in the online early edition of the Proceedings of the National Academy of Sciences (PNAS) on March 14, Jensen and his colleagues again used electron cryotomography to study the differences between these heavy-duty and light-duty versions of the bacterial propeller. The 3-D images they captured showed that the varying levels of propeller power among several different species of bacteria can be explained by structural differences in these tiny motors.

In order for the flagellum to act as a propeller, structures in the cell's motor must apply torque—the force needed to cause an object to rotate—to the flagellum. The researchers found that the high-power motors have additional torque-generating protein complexes that are found at a relatively wide radius from the flagellum. This extra distance provides greater leverage to rotate the flagellum, thus generating greater torque. The strength of the cell's motor was directly correlated with the number of these torque-generating complexes in the cell.
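
The leverage argument reduces to torque being force times lever arm, summed over the stator complexes. A tiny numerical illustration follows; the numbers are hypothetical, not measurements from the paper.

```python
# Illustrative torque comparison with hypothetical numbers (not measured values):
# total torque ~ (number of torque-generating complexes)
#              x (force per complex) x (radius at which the complexes act).
def motor_torque(n_complexes, force_per_complex, radius):
    return n_complexes * force_per_complex * radius

light_duty = motor_torque(n_complexes=10, force_per_complex=1.0, radius=1.0)
heavy_duty = motor_torque(n_complexes=16, force_per_complex=1.0, radius=1.5)

print(f"Heavy-duty motor: ~{heavy_duty / light_duty:.1f}x the torque "
      f"of the light-duty motor")
```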

"These two studies establish a technique for solving the complete structures of large macromolecular complexes in situ, or inside intact cells," Jensen says. "Other structure determination methods, such as X-ray crystallography, require complexes to be purified out of cells, resulting in loss of components and possible contamination. On the other hand, traditional 2-D imaging alone doesn't let you see where individual protein pieces fit in the complete structure. Our electron cryotomography technique is a good solution because it can be used to look at the whole cell, providing a complete picture of the architecture and location of these structures."

The work involving the type IVa pilus machinery was published in a Science paper titled "Architecture of the type IVa pilus machine." First author Yi-Wei Chang is a research scientist at Caltech; additional coauthors include collaborators from the Max Planck Institute for Terrestrial Microbiology, in Marburg, Germany, and from the University of Utah. The study was funded by the National Institutes of Health (NIH), HHMI, the Max Planck Society, and the Deutsche Forschungsgemeinschaft.

Work involving the flagellum machinery was published in a PNAS paper titled "Diverse high-torque bacterial flagellar motors assemble wider stator rings using a conserved protein scaffold." Additional coauthors include collaborators from Imperial College London; the University of Texas Southwestern Medical Center; and the University of Wisconsin–Madison. The study was supported by funding from the UK's Biotechnology and Biological Sciences Research Council and from HHMI and NIH.

Learning to Program Cellular Memory

What if we could program living cells to do what we would like them to do in the body? Having such control—a major goal of synthetic biology—could allow for the development of cell-based therapies that might one day replace traditional drugs for diseases such as cancer. In order to reach this long-term goal, however, scientists must first learn to program many of the key things that cells do, such as communicate with one another, change their fate to become a particular cell type, and remember the chemical signals they have encountered.

Now a team of researchers led by Caltech biologists Michael Elowitz, Lacramioara Bintu, and John Yong (PhD '15) has taken an important step toward being able to program that kind of cellular memory using tools that cells have evolved naturally. By combining synthetic biology approaches with time-lapse movies that track the behaviors of individual cells, they determined how four members of a class of proteins known as chromatin regulators establish and control a cell's ability to maintain a particular state of gene expression—to remember it—even once the signal that established that state is gone.

The researchers reported their findings in the February 12 issue of the journal Science.

"We took some of the most important chromatin regulators for a test-drive to understand not just how they are used naturally, but also what special capabilities each one provides," says Elowitz, a professor of biology and bioengineering at Caltech and an investigator with the Howard Hughes Medical Institute (HHMI). "We're playing with them to find out what we can get them to do for us."

Rather than relying on a single protein to program all "memories" of gene expression, animal cells use hundreds of different chromatin regulators. These proteins each do basically the same thing—they modify a region of DNA to alter gene expression. That raises the question, why does the cell need all of these different chromatin regulators? Either there is a lot of redundancy built into the system or each regulator actually does something unique. And if the latter is the case, synthetic biologists would like to know how best to use these regulators as tools—how to select the ideal protein to achieve a certain effect or a specific type of cellular memory.

Looking for answers, the researchers turned to an approach that Elowitz calls "build to understand." Rather than starting with a complex process and trying to pick apart its component pieces, the researchers build the targeted biological system in cells from the bottom up, giving themselves a chance to actually watch what happens with each change they introduce.

In this case, that meant sticking different chromatin regulators—four gene-silencing proteins—onto a specific section of DNA and seeing how each behaved. To do that, the researchers engineered cells so that adding a small molecule would cause one of the gene-silencing regulators to bind to DNA near a particular gene that codes for a fluorescent protein. By tracking fluorescence in individual cells, the researchers could readily determine whether the regulator had turned off the gene. They could also release the regulator from the DNA and see how long the gene remembered its effect.

Although there are hundreds of chromatin regulators, they can be categorized into about a dozen broader classes. For this study, the researchers tested regulators from four biochemically diverse classes.

"We tried a variety to see if different ones give you different types of behavior," explains Bintu. "It turns out they do."

For a month at a time, the researchers observed the living cells with microscopy or flow cytometry, using time-lapse movies and cell-tracking software they wrote to watch individual cells grow and divide. In some cases, after a regulator was released, the cells and their daughter cells remained dark for days and then lit back up, indicating that they remembered the modification transiently. In other cases, the cells never lit back up, indicating more permanent memory.

After modification, the genes were always in one of three states—"awake" and actively making protein, "asleep" and inactive but able to wake up in a matter of days, or "in a coma" and unable to be awakened within 30 days. Within an individual cell, the genes were always either completely on or off.

That led the researchers to the surprising finding that the regulators control not the level or degree of expression of a particular gene in an individual cell, but rather how many cells in a population have that gene on or off.

"You're controlling the probability that something is on or off," says Elowitz. "We think that this is something that's very useful generally in a multicellular organism—that in many cases, the organism may want to tell cells, 'I just want 30 percent of you to differentiate. You don't all need to do it.' This chromatin regulation system seems ready-made for orders like those."

In addition, the researchers found that the type of memory imparted by each of the four chromatin regulators was different. One produced permanent memory, turning off the gene and putting a fraction of cells into a coma for the full 30 days. One yielded short-term memory, with the cells immediately waking up. "The really interesting thing we found is that some of the regulators give this type of hybrid memory where some of the cells awaken while a fraction of the cells remain in a deep coma," says Bintu. "How many are in the coma depends on how long you gave the signal—how long the chromatin regulator stayed attached."
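
A minimal stochastic sketch (with made-up rates, not the parameters measured in the study) shows how this kind of hybrid memory can arise: while the silencer is recruited, each cell's reporter gene has some chance per day of slipping from the reversible "asleep" state into the irreversible "coma" state, so longer recruitment leaves a larger fraction of cells permanently silenced even though each individual gene is simply on or off.

```python
import random

# Toy model of fraction-based epigenetic memory (made-up rates, not the
# measured parameters). While a silencing regulator is recruited, each cell's
# reporter gene can slip from the reversible "asleep" state into the
# irreversible "coma" state with some probability per day; after release,
# "asleep" genes reawaken and "coma" genes stay off for good.
def fraction_in_coma(days_recruited, p_coma_per_day=0.05, n_cells=10_000, seed=0):
    rng = random.Random(seed)
    coma_count = sum(
        any(rng.random() < p_coma_per_day for _ in range(days_recruited))
        for _ in range(n_cells)
    )
    return coma_count / n_cells

for days in (1, 5, 20):
    print(f"{days:2d} days of recruitment -> "
          f"~{fraction_in_coma(days):.0%} of cells permanently silenced")
```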

Going forward, the group plans to study additional chromatin regulators in the same manner, developing a better sense of the many ways they are used in the cell and also how they might work in combination. In the longer term they want to put these proteins together with other cellular components and begin programming more complex developmental behavior in synthetic circuits.

"This is a step toward realizing this emerging vision of programmable cell-based therapies," says Elowitz. "But we are also answering more basic research questions. We see these as two sides of the same coin. We're not going to be able to program cells effectively until we understand what capabilities their core pathways provide. "

Additional Caltech authors on the paper, "Dynamics of epigenetic regulation at the single-cell level," include Yaron E. Antebi and Kayla McCue (BS '15). Yasuhiro Kazuki, Narumi Uno, and Mitsuo Oshimura of Tottori University in Japan are also coauthors. The work was supported by the Defense Advanced Research Projects Agency, the Human Frontier Science Program, the Jane Coffin Childs Memorial Fund for Medical Research, the Beckman Institute at Caltech, the Burroughs Wellcome Fund, and HHMI.

Experimental Economics: Results You Can Trust

Reproducibility is an important measure of validity in all fields of experimental science. If researcher A publishes a particular scientific result from his laboratory, researcher B should be able to follow the same protocol and achieve the same result in her laboratory. However, in recent years many results in a variety of disciplines have been questioned for their lack of reproducibility. A new study suggests that published results from experimental economics—a field pioneered at Caltech—are better than average when it comes to reproducibility.

The work was published in the March 3 online issue of the journal Science.

"Trying to reproduce previous results is not glamorous or creative, so it is rarely done. But being able to get the same result over and over is part of the definition of what makes knowledge scientific," says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at Caltech and lead author on the paper.

The study was based on a previous method used to assess the replication of psychology experiments. In that earlier effort, called the Reproducibility Project: Psychology (RPP), researchers replicated 100 original studies published in three of the top journals in psychology—and found that although 97 percent of the original studies reported so-called "positive findings" (meaning a significant change compared to control conditions), such positive findings were reliably reproduced only 36 percent of the time.

Using this same technique, Camerer and his colleagues reproduced 18 laboratory experimental papers published in two top-tier economics journals between 2011 and 2014. Eleven of the 18—roughly 61 percent—showed a "significant effect in the same direction as in the original study." The researchers also found that the sample size and p-values—a standard measure of statistical confidence—of the original studies were good predictors for the success of replication, meaning they could serve as good indicators for the reliability of results in future experiments.
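
The headline comparison is straightforward arithmetic on the reported figures; the snippet below simply restates them (illustrative only, not the study's statistical analysis).

```python
# Replication-rate comparison using the figures quoted above (illustrative only).
econ_replicated, econ_total = 11, 18
econ_rate = econ_replicated / econ_total
psych_rate = 0.36   # rate reported by the earlier psychology replication effort

print(f"Experimental economics: {econ_replicated}/{econ_total} = {econ_rate:.0%}")
print(f"Psychology (RPP): {psych_rate:.0%}")
```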

"Replicability has become a major issue in many sciences over the past few years, with often low replication rates," says paper coauthor Juergen Huber of the University of Innsbruck. "The rate we report for experimental economics is the highest we are aware of for any field."

The authors suggest that there are some methodological research practices in laboratory experimental economics that contribute to the good replication success. "It seems that the culture established in experimental economics—incentivizing subjects, publication of the experimental procedure and instructions, no deception—ensures reliable results. This is very encouraging given that it is a very young discipline," says Michael Kirchler, another coauthor and collaborator from the University of Innsbruck.

"As a journal editor myself, we are always curious whether experimental results will replicate across populations and cultures, and these results from multiple countries are really reassuring," says coauthor Teck-Hua Ho from the National University of Singapore.

Coauthor Magnus Johannesson from the Stockholm School of Economics adds, "It is extremely important to investigate to what extent we can trust published scientific findings and to implement institutions that promote scientific reproducibility."

"For the past half century, Caltech has been a leader in the development of social science experimental methods. It is no surprise that Caltech scholars are part of a group that use replication studies to demonstrate the validity of these methods," says Jean-Laurent Rosenthal, the Rea A. and Lela G. Axline Professor of Business Economics and chair of the Division of the Humanities and Social Sciences at Caltech.

The work was published in a paper titled, "Evaluating Replicability of Laboratory Experiments in Economics." Other coauthors are: Taisuke Imai and Gideon Nave from Caltech; Johan Almenberg from Sveriges Riksbank in Stockholm; Anna Dreber, Eskil Forsell, Adam Altmejd, Emma Heikensten, and Siri Isaksson from the Stockholm School of Economics; Taizan Chan and Hang Wu from the National University of Singapore; Felix Holzmeister and Michael Razen from the University of Innsbruck; and Thomas Pfeiffer from the New Zealand Institute for Advanced Study.

The study was funded by the Austrian Science Fund, the Austrian National Bank, the Behavioral and Neuroeconomics Discovery Fund, the Jan Wallander and Tom Hedelius Foundation, the Knut and Alice Wallenberg Foundation, the Swedish Foundation For Humanities and Social Sciences, and the Sloan Foundation.

JPL News: Pulsar Web Could Detect Low-Frequency Gravitational Waves

The recent detection of gravitational waves by the Laser Interferometer Gravitational-Wave Observatory (LIGO) came from two black holes, each about 30 times the mass of our sun, merging into one. Gravitational waves span a wide range of frequencies that require different technologies to detect. A new study from the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) has shown that low-frequency gravitational waves could soon be detectable by existing radio telescopes.

Read the full story from JPL News

Studying Memory's 'Ripples'

Caltech neuroscientists have looked inside brain cells as they undergo the intense bursts of neural activity known as "ripples" that are thought to underlie memory formation. 

During ripples, a small fraction of brain cells, or neurons, fire synchronously in area CA1, a part of the hippocampus that is thought to be an important relay station for memories. "During a ripple, about 10 percent of the neurons in CA1 are activated, and these active neurons all fire within a tenth of a second," says Caltech graduate student Brad Hulse. "Two big questions have been: How do the remaining 90 percent of CA1 neurons stay quiet? And what is synchronizing the firing of the active neurons?"

In a new study, published online on February 17 in the journal Neuron, Hulse and his colleagues used a novel approach to show how the combination of excitatory and inhibitory inputs to CA1 work together to synchronize the firing of active neurons while keeping most neurons silent during ripples.

"For a long time, people studied these events by placing an electrode outside of a cluster of neurons. These extracellular recordings can tell you about the output of a group of brain cells, but they tell you very little about the inputs they're receiving," says study coauthor and Caltech research scientist Evgueniy Lubenov.

The Caltech scientists combined extracellular recording with a technique to look inside a neuron during ripples. They used fine glass pipettes with tips thinner than a tenth of the width of a human hair to measure directly the voltage difference, or "electrical potential," across the cellular membrane of individual neurons in awake mice.

Employing these two techniques in tandem allowed the scientists to monitor the activity inside a single neuron while still observing how the larger network was behaving. This in turn enabled them to piece together how excitatory inputs from CA3, a hippocampal region where memories are formed, affect the output of brain cells called pyramidal neurons in CA1. These neurons are important for transferring newly coded memories to other brain areas such as the neocortex for safekeeping and long-term storage. Ripples are thought to be the mechanism by which this transfer occurs.

The team discovered that the membrane potential of CA1 pyramidal cells increases during ripples. Surprisingly, this increase is relatively constant and independent of the strength of the input from CA3. For this to be the case, the direct excitation from CA3 must be balanced by proportional inhibition. The source of this inhibition is presumed to be a class of brain cells called feedforward interneurons, which receive direct inputs from CA3 and inhibit CA1 pyramidal cells.
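
One way to see why proportional inhibition keeps the depolarization roughly constant is a generic, textbook-style conductance model (a sketch under that assumption, not the analysis performed in the paper): if the inhibitory conductance grows in proportion to the excitatory one, the membrane potential settles toward a fixed weighted average of the excitatory and inhibitory reversal potentials, no matter how strong the CA3 input becomes.

```python
# Generic conductance-based sketch of excitation/inhibition balance
# (a textbook-style illustration, not the model fit in the paper).
# Steady-state membrane potential with leak, excitatory, and inhibitory inputs:
#     V = (gL*EL + gE*EE + gI*EI) / (gL + gE + gI)
# If inhibition scales with excitation (gI = k * gE), V approaches a fixed
# weighted average of EE and EI as gE grows, so the depolarization during a
# ripple is nearly independent of how strong the excitatory input is.
EL, EE, EI = -65.0, 0.0, -75.0   # reversal potentials (mV), typical values
gL, k = 1.0, 3.0                 # leak conductance and inhibition/excitation ratio

def steady_state_v(gE):
    gI = k * gE
    return (gL * EL + gE * EE + gI * EI) / (gL + gE + gI)

for gE in (0.5, 2.0, 8.0, 32.0):   # a 64-fold range of input strength
    print(f"gE = {gE:5.1f}  ->  V = {steady_state_v(gE):6.1f} mV")
```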

"There seems to be a circuit mechanism that balances excitation and inhibition, so that for most neurons, these two forces cancel out," says study leader Thanos Siapas, professor of computation and neural systems at Caltech.

Without such balanced inhibition, all of the neurons in CA1 could fire whenever the excitatory input is large enough. "This could cause runaway excitation and possibly trigger a seizure," says Hulse, who is the first author of the new study.

The team's finding explains why most CA1 pyramidal neurons remain silent during ripples, but it raises two important questions: Why do any neurons fire at all? And what controls the precise timing of those that do fire?

The Caltech researchers found that active neurons receive a much stronger excitatory input from CA3 than silent neurons do—one that is large enough to overcome the balancing inhibition. This large excitation originates from CA3 neurons with particularly strong connections to the active CA1 neurons. These connections are believed to be modified during behavior to encode memories.

Hence, it is the specific identity of CA3 neurons, rather than their sheer number, that is responsible for making CA1 neurons fire, the researchers say. This system might seem overly complex and redundant, but the end result is a flexible circuit—an ever-changing mosaic of active and inactive pyramidal neurons. "It's a shifting mosaic, but it's one that is dependent on the organism's memories and experience," Siapas says.

How do ripples exert their influence on the rest of the brain? The membrane potential of each neuron oscillates very rapidly during ripples to synchronize the firing of cells to within a few thousandths of a second. "By coordinating their activities, the CA1 neurons are maximizing the impact of their output on downstream areas of the brain. The overall effect is more powerful than if each neuron fired independently," Lubenov says. "It is the difference between clapping independently or in unison with others at a concert. The effect in the latter case is stronger, even with the same number of people applauding."

Neuroscientists previously thought that these fast oscillations were generated by rhythmic firing of inhibitory neurons, but the Caltech team showed that this cannot be the whole story. "Our experiments suggest that it is the interplay between rhythmic excitation and inhibition that drives these fast oscillations," Hulse says.

The paper, "Membrane Potential Dynamics of CA1 Pyramidal Neurons during Hippocampal Ripples in Awake Mice," is also coauthored by Laurent C. Moreaux, a research scientist at Caltech. Funding for the work was provided by the Mathers Foundation, the Gordon and Betty Moore Foundation, the National Institutes of Health, and the National Science Foundation. 

Counting Molecules with an Ordinary Cell Phone

Diagnostic health care is often restricted in areas with limited resources, because the procedures required to detect many of the molecular markers that can diagnose diseases are too complex or expensive to be used outside of a central laboratory. Researchers in the lab of Rustem Ismagilov, Caltech's Ethel Wilson Bowles and Robert Bowles Professor of Chemistry and Chemical Engineering and director of the Jacobs Institute for Molecular Engineering for Medicine, are inventing new technologies to help bring emerging diagnostic capabilities out of laboratories and to the point of care. Among the important requirements for such diagnostic devices is that the results—or readouts—be robust against a variety of environmental conditions and user errors.

To address the need for a robust readout system for quantitative diagnostics, researchers in the Ismagilov lab have invented a new visual readout method that uses analytical chemistries and image processing to provide unambiguous quantification of single nucleic-acid molecules, with a readout that can be performed by any cell-phone camera.

The visual readout method is described and validated using RNA from the hepatitis C virus—HCV RNA—in a paper in the February 22 issue of the journal ACS Nano.

The work utilizes a microfluidic technology called SlipChip, which was invented in the Ismagilov lab several years ago. A SlipChip serves as a portable lab-on-a-chip and can be used to quantify concentrations of single molecules. Each SlipChip encodes a complex program for isolating single molecules (such as DNA or RNA) along with chemical reactants in nanoliter-sized wells. The program also controls the complex reactions in each well: the chip consists of two plates that move—or "slip"—relative to one another, with each "slip" joining or separating the hundreds or even thousands of tiny wells, either bringing reactants and molecules into contact or isolating them. The architecture of the chip enables the user to have complete control over these chemical reactions and can prevent contamination, making it an ideal platform for a user-friendly, robust diagnostic device.

The new visual readout method builds upon this SlipChip platform. Special indicator chemistries are integrated into the wells of the SlipChip device. After an amplification reaction—a reaction that multiplies nucleic-acid molecules—each well changes color depending on whether the reaction in it was positive or negative. For example, if a SlipChip is being used to count HCV RNA molecules in a sample, a well containing an RNA molecule that amplified during the reaction would turn blue, whereas a well lacking an RNA molecule would remain purple.

To read the result, a user simply takes a picture of the entire SlipChip using any camera phone. Then the photo is processed using a ratiometric approach that transforms the colors detected by the camera's sensor into an unambiguous readout of positives and negatives.

Previous SlipChip technologies utilized a chemical that would fluoresce when a reaction took place within a well. But those readouts can be too subtle for detection by a common cell-phone camera or can require specific lighting conditions. The new method provides guidelines for selecting indicators that yield color changes compatible with the color sensitivities of phone cameras, and the ratiometric processing removes the need for a user to distinguish colors by sight.
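
As a rough sketch of what a ratiometric readout means in practice, the snippet below reduces each well's color to a ratio of two camera channels and compares it against a cutoff, so the call does not depend on absolute brightness. The channel choice and threshold here are hypothetical placeholders, not the indicator chemistry or processing pipeline described in the paper.

```python
import numpy as np

# Rough sketch of ratiometric well classification (hypothetical channel choice
# and threshold, not the paper's exact pipeline). Using a ratio of two color
# channels makes the call insensitive to overall brightness, so it tolerates
# different phone cameras and lighting conditions.
def classify_wells(rgb_wells, threshold=1.2):
    """rgb_wells: array-like of shape (n_wells, 3) with mean R, G, B per well."""
    rgb = np.asarray(rgb_wells, dtype=float)
    ratio = rgb[:, 2] / (rgb[:, 0] + 1e-9)   # blue-to-red ratio for each well
    return ratio > threshold                  # True = positive (amplified) well

# Example: two blue-shifted (positive) wells and two purple (negative) wells,
# photographed at different overall brightness levels.
wells = [
    [ 60,  90, 160],   # bright, blue-shifted  -> positive
    [ 30,  45,  80],   # dim, blue-shifted     -> positive
    [110,  60, 120],   # bright, purple        -> negative
    [ 55,  30,  60],   # dim, purple           -> negative
]
calls = classify_wells(wells)
print("Well calls:", calls.tolist())
print("Positive-well count:", int(calls.sum()))
```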

"The readout process we developed can be used with any cell-phone camera," says Jesus Rodriguez-Manzano, a postdoctoral scholar in chemical engineering and one of two first authors on the paper. "It is rapid, automated, and doesn't require counting or visual interpretation, so the results can be read by anyone—even users who are color blind or working under poor lighting conditions. This robustness makes our visual readout method appropriate for integration with devices used in any setting, including at the point of care in limited-resource settings. This is critical because the need for highly sensitive diagnostics is greatest in such regions."

The paper is titled "Reading Out Single-Molecule Digital RNA and DNA Isothermal Amplification in Nanoliter Volumes with Unmodified Camera Phones." In addition to Rodriguez-Manzano, Mikhail Karymov is also a first author. Other Caltech coauthors include Stefano Begolo, David Selck, Dmitriy Zhukov, and Erik Jue. The work was funded by grants from the Defense Advanced Research Projects Agency, the National Institutes of Health, and an Innovation in Regulatory Science Award from the Burroughs Wellcome Fund. Microfluidic technologies developed by Ismagilov's group have been licensed to Emerald BioStructures, RainDance Technologies, and SlipChip Corp., of which Ismagilov is a founder.

A New Twist on the History of Life

The idea that the wholesale relocation of Earth's continents 520 million years ago, also known as "true polar wander," coincided with a burst of animal speciation in the fossil record dates back almost 20 years to an original hypothesis by Joseph Kirschvink (BS, MS '75), Caltech's Nico and Marilyn Van Wingen Professor of Geobiology, and his colleagues. For more than a century, going back to Charles Darwin, scientists have debated whether the so-called Cambrian explosion—a rapid period of species diversification that began around 542 million years ago—was the equivalent of an evolutionary "big bang" of biological innovation, or just an artifact of the incomplete fossil record.

In a new study published in the December issue of the American Journal of Science, a team of researchers including Kirschvink and Ross Mitchell, a postdoctoral scholar in geology at Caltech, describes a new model showing that during the proposed Cambrian true polar wander event, most continents would have moved toward the equator instead of toward the poles.

"It's long been observed that biological diversity is highest in the tropics, where nutrients and energy tend to be abundant," says Kirschvink. "One of the side effects of true polar wander is that sea level rises near the equator but falls near the poles, so the equatorial migration of most Cambrian land masses would have enhanced diversification into previously lower-diversity environments."

Using a model they developed, the team simulated the pattern of continental migration during the Cambrian and found that their results can explain the distribution of Cambrian fossils.

"Our model provides an explanation for why the fossil record looks the way it does, with many Cambrian fossil groups on some continents but few on others," says study coauthor Tim Raub (BS, MS '02), a lecturer at the University of St. Andrews in Scotland.

"The same sea-level rise which flooded those continents that shifted to the tropics and opened new ecological niches for faster speciation also led to more fossil preservation," Mitchell says. "In contrast, the few areas that shifted to the poles became less biologically diverse and also lost rock volume to erosion following sea-level drops due to true polar wander."

The scientists say their new findings could help resolve the debate started so long ago by Darwin. If their theory is correct, the Cambrian explosion is both a true and dramatic pulse of biological innovation and an expression of preferentially preserved shells on selectively submerged continental margins capable of containing fossils.

Funding for the study was provided by the National Science Foundation.
