Lopsided Star Explosion Holds the Key to Other Supernova Mysteries

New observations of a recently exploded star are confirming supercomputer model predictions made at Caltech that the deaths of stellar giants are lopsided affairs in which debris and the stars' cores hurtle off in opposite directions.

While observing the remnant of supernova (SN) 1987A, NASA's Nuclear Spectroscopic Telescope Array, or NuSTAR, recently detected the unique energy signature of titanium-44, a radioactive version of titanium that is produced during the early stages of a particular type of star explosion, called a Type II, or core-collapse supernova.

"Titanium-44 is unstable. When it decays and turns into calcium, it emits gamma rays at a specific energy, which NuSTAR can detect," says Fiona Harrison, the Benjamin M. Rosen Professor of Physics at Caltech, and NuSTAR's principal investigator.

By analyzing direction-dependent frequency changes—or Doppler shifts—of energy from titanium-44, Harrison and her team discovered that most of the material is moving away from NuSTAR. The finding, detailed in the May 8 issue of the journal Science, is the best proof yet that the mechanism that triggers Type II supernovae is inherently lopsided.
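
For readers who want a feel for the numbers: the titanium-44 decay chain produces hard X-ray lines near 68 and 78 keV, and bulk motion along the line of sight shifts those lines by a predictable amount. The short sketch below is illustrative only; the line energies are approximate and the velocity is a made-up example, not the value reported in the Science paper.

```python
# Illustrative only: relates an assumed bulk line-of-sight velocity to the
# shifted energies of the hard X-ray lines from the 44Ti decay chain.
# The velocity below is a made-up example, not the paper's measured value.
import math

C_KM_S = 299_792.458             # speed of light in km/s
REST_LINES_KEV = (67.87, 78.32)  # approximate rest-frame line energies

def doppler_shifted(energy_kev, v_los_km_s):
    """Relativistic Doppler shift; positive v_los = moving away (redshift)."""
    beta = v_los_km_s / C_KM_S
    return energy_kev * math.sqrt((1 - beta) / (1 + beta))

v_example = 700.0  # km/s, hypothetical recession velocity for illustration
for e0 in REST_LINES_KEV:
    e_obs = doppler_shifted(e0, v_example)
    print(f"{e0:.2f} keV line -> {e_obs:.3f} keV (shift {e0 - e_obs:.3f} keV)")
```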

NuSTAR recently created detailed titanium-44 maps of another supernova remnant, called Cassiopeia A, and there too it found signs of an asymmetrical explosion, although the evidence in this case is not as definitive as with 1987A.

Supernova 1987A was first detected in 1987, when light from the explosion of a blue supergiant star located 168,000 light-years away reached Earth. SN 1987A was an important event for astronomers. Not only was it the closest supernova to be detected in hundreds of years, it marked the first time that neutrinos had been detected from an astronomical source other than our sun.

These nearly massless subatomic particles had been predicted to be produced in large quantities during Type II explosions, so their detection during 1987A supported some of the fundamental theories about the inner workings of supernovae.

With the latest NuSTAR observations, 1987A is once again proving to be a useful natural laboratory for studying the mysteries of stellar death. For many years, supercomputer simulations performed at Caltech and elsewhere predicted that the cores of stars on the verge of a Type II supernova change shape just before exploding, transforming from a perfectly symmetric sphere into a wobbly mass made up of turbulent plumes of extremely hot gas. In fact, models that assumed a perfectly spherical core just fizzled out.

"If you make everything just spherical, the core doesn't explode. It turns out you need asymmetries to make the star explode," Harrison says.

According to the simulations, the shape change is driven by turbulence generated by neutrinos that are absorbed within the core. "This turbulence helps push out a powerful shock wave and launch the explosion," says Christian Ott, a professor of theoretical physics at Caltech who was not involved in the NuSTAR observations.

Ott's team uses supercomputers to run three-dimensional simulations of core-collapse supernovae. Each simulation generates hundreds of terabytes of results—for comparison, the entire print collection of the U.S. Library of Congress is equal to about 10 terabytes—but represents only a few tenths of a second during a supernova explosion.

A better understanding of the asymmetrical nature of Type II supernovae, Ott says, could help solve one of the biggest mysteries surrounding stellar deaths: why some supernovae leave behind neutron stars while others collapse further into black holes, forming space-time singularities. It could be that the high degree of asymmetry in some supernovae produces a dual effect: the star explodes in one direction, while the remainder of the star continues to collapse in all other directions.

"In this way, an explosion could happen, but eventually leave behind a black hole and not a neutron star," Ott says.

The NuSTAR findings also increase the chances that Advanced LIGO—the upgraded version of the Laser Interferometer Gravitational-wave Observatory, which will begin to take data later this year—will be successful in detecting gravitational waves from supernovae. Gravitational waves are ripples that propagate through the fabric of space-time. According to theory, Type II supernovae should emit gravitational waves, but only if the explosions are asymmetrical.

Harrison and Ott have plans to combine the observational and theoretical studies of supernovae that until now have been occurring along parallel tracks at Caltech, using the NuSTAR observations to refine supercomputer simulations of supernova explosions.

"The two of us are going to work together to try to get the models to more accurately predict what we're seeing in 1987A and Cassiopeia A," Harrison says.

Additional Caltech coauthors of the paper, entitled "44Ti gamma-ray emission lines from SN1987A reveal an asymmetric explosion," are Hiromasa Miyasaka, Brian Grefenstette, Kristin Madsen, Peter Mao, and Vikram Rana. The research was supported by funding from NASA, the French National Center for Space Studies (CNES), the Japan Society for the Promotion of Science, and the Technical University of Denmark.

This article also references the paper "Magnetorotational Core-collapse Supernovae in Three Dimensions," which appeared in the April 20, 2014, issue of Astrophysical Journal Letters.

“Freezing a Bullet” to Find Clues to Ribosome Assembly Process

Researchers Figure Out How Protein-Synthesizing Cellular Machines Are Built in Stepwise Fashion

Ribosomes are vital to the function of all living cells. Using the genetic information from RNA, these large molecular complexes build proteins by linking amino acids together in a specific order. Scientists have known for more than half a century that these cellular machines are themselves made up of about 80 different proteins, called ribosomal proteins, along with several RNA molecules, and that these components are added in a particular sequence to construct new ribosomes. But no one has known the mechanism that controls that process.

Now researchers from Caltech and Heidelberg University have combined their expertise to track a ribosomal protein in yeast all the way from its synthesis in the cytoplasm, the cellular compartment surrounding the nucleus of a cell, to its incorporation into a developing ribosome within the nucleus. In so doing, they have identified a new chaperone protein, known as Acl4, that ushers a specific ribosomal protein through the construction process and a new regulatory mechanism that likely occurs in all eukaryotic cells.

The results, described in a paper that appears online in the journal Molecular Cell, also suggest an approach for making new antifungal agents.

The work was completed in the labs of André Hoelz, assistant professor of chemistry at Caltech, and Ed Hurt, director of the Heidelberg University Biochemistry Center (BZH).

"We now understand how this chaperone, Acl4, works with its ribosomal protein with great precision," says Hoelz. "Seeing that is kind of like being able to freeze a bullet whizzing through the air and turn it around and analyze it in all dimensions to see exactly what it looks like."

That is because the entire ribosome assembly process—including the synthesis of new ribosomal proteins by ribosomes in the cytoplasm, the transfer of those proteins into the nucleus, their incorporation into a developing ribosome, and the completed ribosome's export back out of the nucleus into the cytoplasm—takes place on a timescale of tens of minutes. That speed allows mammalian cells to produce more than a million ribosomes per day to keep up with turnover and cell division. Being able to follow a ribosomal protein through such a rapid process is therefore not a simple task.

Hurt and his team in Germany have developed a new technique to capture the state of a ribosomal protein shortly after it is synthesized. When they "stopped" this particular flying bullet, an important ribosomal protein known as L4, they found that it was bound to Acl4.

Hoelz's group at Caltech then used X-ray crystallography to obtain an atomic snapshot of Acl4 and further biochemical interaction studies to establish how Acl4 recognizes and protects L4. They found that Acl4 attaches to L4 (having a high affinity for only that ribosomal protein) as it emerges from the ribosome that produced it, akin to a hand gripping a baseball. In this way, the chaperone protects the ribosomal protein from machinery in the cell that would otherwise destroy it and ushers the L4 molecule through the sole gateway between the nucleus and cytoplasm, called the nuclear pore complex, to the site in the nucleus where new ribosomes are constructed.

"The ribosomal protein together with its chaperone basically travel through the nucleus and screen their surroundings until they find an assembling ribosome that is at exactly the right stage for the ribosomal protein to be incorporated," explains Ferdinand Huber, a graduate student in Hoelz's group and one of the first authors on the paper. "Once found, the chaperone lets the ribosomal protein go and gets recycled to go pick up another protein."

The researchers say that Acl4 is just one example from a whole family of chaperone proteins that likely work in this same fashion.

Hoelz adds that if this process does not work properly, ribosomes and proteins cannot be made. Some diseases (including aggressive leukemia subtypes) are associated with malfunctions in this process.

"It is likely that human cells also contain a dedicated assembly chaperone for L4. However, we are certain that it has a distinct atomic structure, which might allow us to develop new antifungal agents," Hoelz says. "By preventing the chaperone from interacting with its partner, you could keep the cell from making new ribosomes. You could potentially weaken the organism to the point where the immune system could then clear the infection. This is a completely new approach."

Co-first authors on the paper, "Coordinated Ribosomal L4 Protein Assembly into the Pre-Ribosome Is Regulated by Its Eukaryote-Specific Extension," are Huber and Philipp Stelter of Heidelberg University. Additional authors include Ruth Kunze and Dirk Flemming also from Heidelberg University. The work was supported by the Boehringer Ingelheim Fonds, the V Foundation for Cancer Research, the Edward Mallinckrodt, Jr. Foundation, the Sidney Kimmel Foundation for Cancer Research, and the German Research Foundation (DFG).

Writer: 
Kimm Fesenmaier

Switching On One-Shot Learning in the Brain

Caltech researchers find the brain regions responsible for jumping to conclusions

Most of the time, we learn only gradually, incrementally building connections between actions or events and outcomes. But there are exceptions—every once in a while, something happens and we immediately learn to associate that stimulus with a result. For example, maybe you have had bad service at a store once and sworn that you will never shop there again.

This type of one-shot learning is more than handy when it comes to survival—think of an animal quickly learning to avoid a type of poisonous berry. In that case, jumping to the conclusion that the fruit was to blame for a bout of illness might help the animal steer clear of the same danger in the future. On the other hand, quickly drawing connections despite a lack of evidence can also lead to misattributions and superstitions; for example, you might blame a new food you tried for an illness when in fact it was harmless, or you might begin to believe that if you do not eat your usual meal, you will get sick.

Scientists have long suspected that one-shot learning involves a different brain system than gradual learning, but could not explain what triggers this rapid learning or how the brain decides which mode to use at any one time.

Now Caltech scientists have discovered that uncertainty about the causal relationship—whether an outcome is actually caused by a particular stimulus—is the main factor in determining whether or not rapid learning occurs. They say that the more uncertainty there is about the causal relationship, the more likely it is that one-shot learning will take place. When that uncertainty is high, they suggest, you need to be more focused in order to learn the relationship between stimulus and outcome.
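
As a rough illustration of that idea, the sketch below gates a simple delta-rule learning rate on a crude uncertainty proxy: a stimulus with few prior pairings is treated as causally uncertain and updated in nearly one shot, while a familiar stimulus is updated incrementally. This is a toy model, not the computational model used in the paper; the learning rates and the uncertainty proxy are invented for illustration.

```python
# Toy illustration of uncertainty-gated learning, NOT the model in the
# PLOS Biology paper: when causal uncertainty about a stimulus is high, the
# learning rate jumps toward 1 (one-shot); otherwise learning is incremental.

def update(weight, outcome, n_pairings, uncertainty_threshold=0.5):
    """Delta-rule update whose learning rate is gated by a crude uncertainty proxy."""
    uncertainty = 1.0 / (1.0 + n_pairings)       # fewer pairings -> more uncertainty
    alpha = 0.9 if uncertainty > uncertainty_threshold else 0.1
    return weight + alpha * (outcome - weight)

# A brand-new stimulus (0 prior pairings) vs. a familiar one (10 prior pairings),
# each followed once by a positive outcome (coded as 1.0).
novel_w = update(weight=0.0, outcome=1.0, n_pairings=0)      # jumps to 0.9
familiar_w = update(weight=0.5, outcome=1.0, n_pairings=10)  # creeps to 0.55
print(novel_w, familiar_w)
```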

The researchers have also identified a part of the prefrontal cortex—the large brain area located immediately behind the forehead that is associated with complex cognitive activities—that appears to evaluate such causal uncertainty and then activate one-shot learning when needed.

The findings, described in the April 28 issue of the journal PLOS Biology, could lead to new approaches for helping people learn more efficiently. The work also suggests that an inability to properly attribute cause and effect might lie at the heart of some psychiatric disorders that involve delusional thinking, such as schizophrenia.

"Many have assumed that the novelty of a stimulus would be the main factor driving one-shot learning, but our computational model showed that causal uncertainty was more important," says Sang Wan Lee, a postdoctoral scholar in neuroscience at Caltech and lead author of the new paper. "If you are uncertain, or lack evidence, about whether a particular outcome was caused by a preceding event, you are more likely to quickly associate them together."

The researchers used a simple behavioral task paired with brain imaging to determine where in the brain this causal processing takes place. Based on the results, it appears that the ventrolateral prefrontal cortex (VLPFC) is involved in the processing and then couples with the hippocampus to switch on one-shot learning, as needed.

Indeed, a switch is an appropriate metaphor, says Shinsuke Shimojo, Caltech's Gertrude Baltimore Professor of Experimental Psychology. Since the hippocampus is known to be involved in so-called episodic memory, in which the brain quickly links a particular context with an event, the researchers hypothesized that this brain region might play a role in one-shot learning. But they were surprised to find that the coupling between the VLPFC and the hippocampus was either all or nothing. "Like a light switch, one-shot learning is either on, or it's off," says Shimojo.

In the behavioral study, 47 participants completed a simple causal-inference task; 20 of those participants completed the study in the Caltech Brain Imaging Center, where their brains were monitored using functional Magnetic Resonance Imaging. The task consisted of multiple trials. During each trial, participants were shown a series of five images one at a time on a computer screen. Over the course of the task, some images appeared multiple times, while others appeared only once or twice. After every fifth image, either a positive or negative monetary outcome was displayed. Following a number of trials, participants were asked to rate how strongly they thought each image and outcome were linked. As the task proceeded, participants gradually learned to associate some of the images with particular outcomes. One-shot learning was apparent in cases where participants made an association between an image and an outcome after a single pairing.

The researchers hypothesize that the VLPFC acts as a controller mediating the one-shot learning process. They caution, however, that they have not yet proven that the brain region actually controls the process in that way. To prove that, they will need to conduct additional studies that will involve modifying the VLPFC's activity with brain stimulation and seeing how that directly affects behavior.

Still, the researchers are intrigued by the fact that the VLPFC is very close to another part of the ventrolateral prefrontal cortex that they previously found to be involved in helping the brain to switch between two other forms of learning—habitual and goal-directed learning, which involve routine behavior and more carefully considered actions, respectively. "Now we might cautiously speculate that a significant general function of the ventrolateral prefrontal cortex is to act as a leader, telling other parts of the brain involved in different types of behavioral functions when they should get involved and when they should not get involved in controlling our behavior," says coauthor John O'Doherty, professor of psychology and director of the Caltech Brain Imaging Center.

The work, "Neural Computations Mediating One-Shot Learning in the Human Brain," was supported by the National Institutes of Health, the Gordon and Betty Moore Foundation, the Japan Science and Technology Agency–CREST, and the Caltech-Tamagawa global Center of Excellence.

Writer: 
Kimm Fesenmaier

Tracking Photosynthesis from Space

Watching plants perform photosynthesis from space sounds like a futuristic proposal, but a new application of data from NASA's Orbiting Carbon Observatory-2 (OCO-2) satellite may enable scientists to do just that. The new technique, which allows researchers to analyze plant productivity from far above Earth, will provide a clearer picture of the global carbon cycle and may one day help researchers determine the best regional farming practices and even spot early signs of drought.

When plants are alive and healthy, they engage in photosynthesis, absorbing sunlight and carbon dioxide to produce food for the plant, and generating oxygen as a by-product. But photosynthesis does more than keep plants alive. On a global scale, the process takes up some of the man-made emissions of atmospheric carbon dioxide—a greenhouse gas that traps the sun's heat in Earth's atmosphere—meaning that plants also have an important role in mitigating climate change.

To perform photosynthesis, the chlorophyll in leaves absorbs sunlight—most of which is used to create food for the plants or is lost as heat. However, a small fraction of that absorbed light is reemitted as near-infrared light. We cannot see in the near-infrared portion of the spectrum with the naked eye, but if we could, this reemitted light would make the plants appear to glow—a property called solar induced fluorescence (SIF). Because this reemitted light is only produced when the chlorophyll in plants is also absorbing sunlight for photosynthesis, SIF can be used as a way to determine a plant's photosynthetic activity and productivity.

"The intensity of the SIF appears to be very correlated with the total productivity of the plant," says JPL scientist Christian Frankenberg, who is lead for the SIF product and will join the Caltech faculty in September as an associate professor of environmental science and engineering in the Division of Geological and Planetary Sciences.

Usually, when researchers try to estimate photosynthetic activity from satellites, they utilize a measure called the greenness index, which uses reflections in the near-infrared spectrum of light to determine the amount of chlorophyll in the plant. However, this is not a direct measurement of plant productivity; a plant that contains chlorophyll is not necessarily undergoing photosynthesis. "For example," Frankenberg says, "evergreen trees are green in the winter even when they are dormant."

He adds, "When a plant starts to undergo stress situations, like in California during a summer day when it's getting very hot and dry, the plants still have chlorophyll"—chlorophyll that would still appear to be active in the greenness index—"but they usually close the tiny pores in their leaves to reduce water loss, and that time of stress is also when SIF is reduced. So photosynthesis is being very strongly reduced at the same time that the fluorescence signal is also getting weaker, albeit at a smaller rate."

The Caltech and JPL team, as well as colleagues from NASA Goddard, discovered that they could measure SIF from orbit using spectrometers—standard instruments that can detect light intensity—that are already on board satellites like Japan's Greenhouse Gases Observing Satellite (GOSAT) and NASA's OCO-2.

In 2014, using this new technique with data from GOSAT and the European Global Ozone Monitoring Experiment–2 satellite, the researchers scoured the globe for the most productive plants and determined that the U.S. "Corn Belt"—the farming region stretching from Ohio to Nebraska—is the most photosynthetically active place on the planet. Although it stands to reason that a cornfield during growing season would be actively undergoing photosynthesis, the high-resolution measurements from a satellite enabled global comparison to other plant-heavy regions—such as tropical rainforests.

"Before, when people used the greenness index to represent active photosynthesis, they had trouble determining the productivity of very dense plant areas, such as forests or cornfields. With enough green plant material in the field of view, these greenness indexes can saturate; they reach a maximum value they can't exceed," Frankenberg says. Because of the sensitivity of the SIF measurements, researchers can now compare the true productivity of fields from different regions without this saturation—information that could potentially be used to compare the efficiency of farming practices around the world.

Now that OCO-2 is online and producing data, Frankenberg says that it is capable of achieving higher resolution than the preliminary experiments with GOSAT. Therefore, OCO-2 will be able to provide an even clearer picture of plant productivity worldwide. However, to get more specific information about how plants influence the global carbon cycle, an evenly distributed ground-based network of spectrometers will be needed. Such a network—located down among the plants rather than miles above—will provide more information about regional uptake of carbon dioxide via photosynthesis and the mechanistic link between SIF and actual carbon exchange.

One existing network, called FLUXNET, uses ground-based towers at more than 600 locations worldwide to measure the exchange of carbon dioxide, or carbon flux, between the land and the atmosphere. However, the towers only measure the exchange of carbon dioxide and are unable to directly observe the activities of the biosphere that drive this exchange.

The new ground-based measurements will ideally take place at existing FLUXNET sites, but they will be performed with a small set of high-resolution spectrometers—similar to the kind that OCO-2 uses—to allow the researchers to use the same measurement principles they developed for space. The revamped ground network was initially proposed in a 2012 workshop at the Keck Institute for Space Studies and is expected to go online sometime in the next two years.

In the future, a clear picture of global plant productivity could influence a range of decisions relevant to farmers, commodity traders, and policymakers. "Right now, the SIF data we can gather from space is too coarse of a picture to be really helpful for these conversations, but, in principle, with the satellite and ground-based measurements you could track the fluorescence in fields at different times of day," Frankenberg says. This hourly tracking would not only allow researchers to detect the productivity of the plants, but it could also spot the first signs of plant stress—a factor that impacts crop prices and food security around the world.

"The measurements of SIF from OCO-2 greatly extend the science of this mission", says Paul Wennberg, R. Stanton Avery Professor of Atmospheric Chemistry and Environmental Science and Engineering, director of the Ronald and Maxine Linde Center for Global Environmental Science, and a member of the OCO-2 science team. "OCO-2 was designed to map carbon dioxide, and scientists plan to use these measurements to determine the underlying sources and sinks of this important gas. The new SIF measurements will allow us to diagnose the efficiency of the plants—a key component of the sinks of carbon dioxide."

By using OCO-2 to diagnose plant activity around the globe, this new research could also contribute to understanding the variability in crop primary productivity and, eventually, to the development of technologies that can improve crop efficiency—a goal that could greatly benefit humankind, Frankenberg says.

This project is funded by the Keck Institute for Space Studies and JPL. Wennberg is also an executive officer for the Environmental Science and Engineering (ESE) program. ESE is a joint program of the Division of Engineering and Applied Science, the Division of Chemistry and Chemical Engineering, and the Division of Geological and Planetary Sciences.

How an RNA Gene Silences a Whole Chromosome

Researchers at Caltech have discovered how an abundant class of RNA genes, called long non-coding RNAs (lncRNAs, pronounced link RNAs) can regulate key genes. By studying an important lncRNA, called Xist, the scientists identified how this RNA gathers a group of proteins and ultimately prevents women from having an extra functional X-chromosome—a condition in female embryos that leads to death in early development. These findings mark the first time that researchers have uncovered the detailed mechanism of action for lncRNA genes.

"For years, we thought about genes as just DNA sequences that encode proteins, but those genes only make up about 1 percent of the genome. Mammalian genomes also encode many thousands of lncRNAs," says Assistant Professor of Biology Mitch Guttman, who led the study published online in the April 27 issue of the journal Nature. These lncRNAs such as Xist play a structural role, acting to scaffold—or bring together and organize—the key proteins involved in cellular and molecular processes, such as gene expression and stem cell differentiation.

Guttman, who helped to discover an entire class of lncRNAs as a graduate student at MIT in 2009, says that although most of these genes encoded in our genomes have only recently been appreciated, there are several specific examples of lncRNA genes that have been known for decades. One well-studied example is Xist, which is important for a process called X chromosome inactivation.

All females are born with two X chromosomes in every cell, one inherited from their mother and one from their father. In contrast, males only contain one X chromosome (along with a Y chromosome). However, like males, females only need one copy of each X-chromosome gene—having two copies is an abnormality that will lead to death early during development. The genome skirts these problems by essentially "turning off" one X chromosome in every cell.

Previous research showed that Xist is essential to this process and does this by somehow preventing transcription, the initial step of the expression of genes on the X chromosome. However, because Xist is not a traditional protein-coding gene, until now researchers have had trouble figuring out exactly how Xist stops transcription and shuts down an entire chromosome.

"To start to make sense of what makes lncRNAs special and how they can control all of these different cellular processes, we need to be able to understand the mechanism of how any lncRNA gene can work. Because Xist is such an important molecule and because so much is known about what it does, it seemed like a great system to try to dissect the mechanisms of how it and other lncRNAs work," Guttman says.

lncRNAs are known to corral and organize the proteins that are necessary for cellular processes, so Guttman and his colleagues began their study of the function of Xist by first developing a technique to find out what proteins it naturally interacts with in the cell. With a new method, called RNA antisense purification with mass spectrometry (RAP-MS), the researchers extracted and purified Xist lncRNA molecules, as well as the proteins that directly interact with Xist, from mouse embryonic stem cells. Then, collaborators at the Proteome Exploration Laboratory at Caltech applied a technique called quantitative mass spectrometry to identify those interacting proteins.

"RNA usually only obeys one rule: binding to proteins. RAP-MS is like a molecular microscope into identifying RNA-protein interactions," says John Rinn, associate professor of stem cell and regenerative biology at Harvard University, who was not involved in the study. "RAP-MS will provide critically needed insights into how lncRNAs function to organize proteins and in turn regulate gene expression."

Applying this to Xist uncovered 10 specific proteins that interact with Xist. Of these, three—SAF-A (Scaffold attachment factor-A), LBR (Lamin B Receptor), and SHARP (SMRT and HDAC associated repressor protein)—are essential for X chromosome inactivation. "Before this experiment," Guttman says, "no one knew a single protein that was required by Xist for silencing transcription on the X chromosome, but with this method we immediately found three that are essential. If you lose any one of them, Xist doesn't work—it will not silence the X chromosome during development."

The new findings provide the first detailed picture of how lncRNAs work within a cellular process. Through further analysis, the researchers found that these three proteins performed three distinct, but essential, roles. SAF-A helps to tether Xist and all of its hitchhiking proteins to the DNA of the X chromosome, at which point LBR remodels the chromosome so that it is less likely to be expressed. The actual "silencing," Guttman and his colleagues discovered, is done by the third protein of the trio: SHARP.

To produce functional proteins from the DNA (genes) of a chromosome, the genes must first be transcribed into RNA by an enzyme called RNA polymerase II. Guttman and his team found that SHARP leads to the exclusion of polymerase from the DNA, thus preventing transcription and gene expression.

This information soon may have clinical applications. The Xist lncRNA silences the X chromosome simply because it is located on the X chromosome. However, previous studies have demonstrated that this RNA and its silencing machinery can be used to inactivate other chromosomes—for example, the third copy of chromosome 21 that is present in individuals with Down syndrome.

"We are starting to pick apart how lncRNAs work. We now know, for example, how Xist localizes to sites on X, how it silences transcription, and how it can change DNA structure," Guttman says. "One of the things that is really exciting for me is that we can potentially leverage the principles used by lncRNAs, move them around in the genome, and use them as therapeutic agents to target specific defective pathways in disease."

"But I think the real reason why this is so important for our field and even beyond is because this is a different type of regulation than we've seen before in the cell—it is a vast world that we previously knew nothing about," he adds.

This work was published in a recent paper titled: "The Xist lncRNA interacts directly with SHARP to silence transcription through HDAC3." The co-first authors of the paper are Caltech postdoctoral scholar Colleen A. McHugh and graduate student Chun-Kan Chen. Other coauthors from Caltech are Amy Chow, Christine F. Surka, Christina Tran, Mario Blanco, Christina Burghard, Annie Moradian, Alexander A. Shishkin, Julia Su, Michael J. Sweredoski, and Sonja Hess from the Proteome Exploration Laboratory. Additional authors include Amy Pandya-Jones and Kathrin Plath from UCLA and Patrick McDonel from MIT.

The study was supported by funding from the Gordon and Betty Moore Foundation, the Beckman Institute, the National Institutes of Health, the Rose Hills Foundation, the Edward Mallinckrodt Foundation, the Sontag Foundation, and the Searle Scholars Program.

Weighing—and Imaging—Molecules One at a Time

Building on their creation of the first-ever mechanical device that can measure the mass of individual molecules, one at a time, a team of Caltech scientists and their colleagues have created nanodevices that can also reveal those molecules' shapes. Such information is crucial when trying to identify large protein molecules or complex assemblies of protein molecules.

"You can imagine that with large protein complexes made from many different, smaller subunits there are many ways for them to be assembled. These can end up having quite similar masses while actually being different species with different biological functions. This is especially true with enzymes, proteins that mediate chemical reactions in the body, and membrane proteins that control a cell's interactions with its environment," explains Michael Roukes, the Robert M. Abbey Professor of Physics, Applied Physics, and Bioengineering at Caltech and the co-corresponding author of a paper describing the technology that appeared March 30 in the online issue of the journal Nature Nanotechnology.

One foundation of the genomics revolution has been the ability to replicate DNA or RNA molecules en masse using the polymerase chain reaction to create the many millions of copies necessary for typical sequencing and analysis. However, the same mass-production technology does not work for copying proteins. Right now, if you want to properly identify a particular protein, you need a lot of it—typically millions of copies of just the protein of interest, with very few other extraneous proteins as contaminants. The average mass of this molecular population is then evaluated with a technique called mass spectrometry, in which the molecules are ionized—so that they attain an electrical charge—and then allowed to interact with an electromagnetic field. By analyzing this interaction, scientists can deduce the molecular mass-to-charge ratio.

But mass spectrometry often cannot discriminate subtle but crucial differences in molecules having similar mass-to-charge ratios. "With mass spectrometry today," explains Roukes, "large molecules and molecular complexes are first chopped up into many smaller pieces, that is, into smaller molecule fragments that existing instruments can handle. These different fragments are separately analyzed, and then bioinformatics—involving computer simulations—are used to piece the puzzle back together. But this reassembly process can be thwarted if pieces of different complexes are mixed up together."

With their devices, Roukes and his colleagues can measure the mass of an individual intact molecule. Each device—which is only a couple millionths of a meter in size or smaller—consists of a vibrating structure called a nanoelectromechanical system (NEMS) resonator. When a particle or molecule lands on the nanodevice, the added mass changes the frequency at which the structure vibrates, much like putting drops of solder on a guitar string would change the frequency of its vibration and resultant tone. The induced shifts in frequency provide information about the mass of the particle. But they also, as described in the new paper, can be used to determine the three-dimensional spatial distribution of the mass: i.e., the particle's shape.

"A guitar string doesn't just vibrate at one frequency," Roukes says. "There are harmonics of its fundamental tone, or so-called vibrational modes. What distinguishes a violin string from a guitar string is really the different admixtures of these different harmonics of the fundamental tone. The same applies here. We have a whole bunch of different tones that can be excited simultaneously on each of our nanodevices, and we track many different tones in real time. It turns out that when the molecule lands in different orientations, those harmonics are shifted differently. We can then use the inertial imaging theory that we have developed to reconstruct an image in space of the shape of the molecule."

"The new technique uncovers a previously unrealized capability of mechanical sensors," says Professor Mehmet Selim Hanay of Bilkent University in Ankara, Turkey, a former postdoctoral researcher in the Roukes lab and co-first author of the paper. "Previously we've identified molecules, such as the antibody IgM, based solely on their molecular weights. Now, by enabling both the molecular weight and shape information to be deduced for the same molecule simultaneously, the new technique can greatly enhance the identification process, and this is of significance both for basic research and the pharmaceutical industry." 

Currently, molecular structures are deciphered using X-ray crystallography, an often laborious technique that involves isolating, purifying, and then crystallizing molecules, and then evaluating their shape based on the diffraction patterns produced when x-rays interact with the atoms that together form the crystals. However, many complex biological molecules are difficult if not impossible to crystallize. And, even when they can be crystallized, the molecular structure obtained represents the molecule in the crystalline state, which can be very different from the structure of the molecule in its biologically active form.

"You can imagine situations where you don't know exactly what you are looking for—where you are in discovery mode, and you are trying to figure out the body's immune response to a particular pathogen, for example," Roukes says. In these cases, the ability to carry out single-molecule detection and to get as many separate bits of information as possible about that individual molecule greatly improves the odds of making a unique identification.

"We say that cancer begins often with a single aberrant cell, and what that means is that even though it might be one of a multiplicity of similar cells, there is something unique about the molecular composition of that one cell. With this technique, we potentially have a new tool to figure out what is unique about it," he adds.

So far, the new technique has been validated using particles of known sizes and shapes, such as polymer nanodroplets. Roukes and colleagues show that with today's state-of-the-art nanodevices, the approach can provide molecular-scale resolution—that is, provide the ability to see the molecular subcomponents of individual, intact protein assemblies. The group's current efforts are now focused on such explorations.

Scott Kelber, a former graduate student in the Roukes lab, is the other co-first author of the paper, titled "Inertial imaging with nanoelectromechanical systems." Professor John Sader of the University of Melbourne, Australia, and a visiting associate in physics at Caltech, is the co-corresponding author. Additional coauthors are Cathal D. O'Connell and Paul Mulvaney of the University of Melbourne. The work was funded by a National Institutes of Health Director's Pioneer award, a Caltech Kavli Nanoscience Institute Distinguished Visiting Professorship, the Fondation pour la Recherche et l'Enseignement Superieur in Paris, and the Australian Research Council grants scheme.

Chemists Create “Comb” that Detects Terahertz Waves with Extreme Precision

Light can come in many frequencies, only a small fraction of which can be seen by humans. Between the invisible low-frequency radio waves used by cell phones and the high frequencies associated with infrared light lies a fairly wide swath of the electromagnetic spectrum occupied by what are called terahertz, or sometimes submillimeter, waves. Exploitation of these waves could lead to many new applications in fields ranging from medical imaging to astronomy, but terahertz waves have proven tricky to produce and study in the laboratory. Now, Caltech chemists have created a device that generates and detects terahertz waves over a wide spectral range with extreme precision, allowing it to be used as an unparalleled tool for measuring terahertz waves.

The new device is an example of what is known as a frequency comb, which uses ultrafast pulsed lasers, or oscillators, to produce thousands of unique frequencies of radiation distributed evenly across a spectrum like the teeth of a comb. Scientists can then use them like rulers, lining up the teeth like tick marks to very precisely measure light frequencies. The first frequency combs, developed in the 1990s, earned their creators (John Hall of JILA and Theodor Hänsch of the Max Planck Institute of Quantum Optics and Ludwig Maximilians University Munich) the 2005 Nobel Prize in Physics. These combs, which originated in the visible part of the spectrum, have revolutionized how scientists measure light, leading, for example, to the development of today's most accurate timekeepers, known as optical atomic clocks.

The team at Caltech combined commercially available lasers and optics with custom-built electronics to extend this technology to the terahertz, creating a terahertz frequency comb with an unprecedented combination of spectral coverage and precision. Its thousands of "teeth" are evenly spaced across the majority of the terahertz region of the spectrum (0.15-2.4 THz), giving scientists a way to simultaneously measure absorption in a sample at all of those frequencies.

The work is described in a paper that appears in the online version of the journal Physical Review Letters and will be published in the April 24 issue. The lead author is graduate student and National Science Foundation fellow Ian Finneran, who works in the lab of Geoffrey A. Blake, professor of cosmochemistry and planetary sciences and professor of chemistry at Caltech.

Blake explains the utility of the new device, contrasting it with a common radio tuner. "With radio waves, most tuners let you zero in on and listen to just one station, or frequency, at a time," he says. "Here, in our terahertz approach, we can separate and process more than 10,000 frequencies all at once. In the near future, we hope to bump that number up to more than 100,000."

That is important because the terahertz region of the spectrum is chock-full of information. Everything in the universe that is warmer than about 10 kelvins (-263 degrees Celsius) gives off terahertz radiation. Even at these very low temperatures molecules can rotate in space, yielding unique fingerprints in the terahertz. Astronomers using telescopes such as Caltech's Submillimeter Observatory, the Atacama Large Millimeter Array, and the Herschel Space Observatory are searching stellar nurseries and planet-forming disks at terahertz frequencies, looking for such chemical fingerprints to try to determine the kinds of molecules that are present and thus available to planetary systems. But in just a single chunk of the sky, it would not be unusual to find signatures of 25 or more different molecules.

To be able to definitively identify specific molecules within such a tangle of terahertz signals, scientists first need to determine exact measurements of the chemical fingerprints associated with various molecules. This requires a precise source of terahertz waves, in addition to a sensitive detector, and the terahertz frequency comb is ideal for making such measurements in the lab.

"When we look up into space with terahertz light, we basically see this forest of lines related to the tumbling motions of various molecules," says Finneran. "Unraveling and understanding these lines is difficult, as you must trek across that forest one point and one molecule at a time in the lab. It can take weeks, and you would have to use many different instruments. What we've developed, this terahertz comb, is a way to analyze the entire forest all at once."

After the device generates its tens of thousands of evenly spaced frequencies, the waves travel through a sample—in the paper, the researchers provide the example of water vapor. The instrument then measures what light passes through the sample and what gets absorbed by molecules at each tooth along the comb. If a detected tooth gets shorter, the sample absorbed that particular terahertz wave; if it comes through at the baseline height, the sample did not absorb at that frequency.
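
Conceptually, the measurement works like the sketch below. This is a schematic with invented numbers, not the instrument's actual parameters: comb teeth sit at evenly spaced frequencies spanning roughly 0.15 to 2.4 THz, and teeth that come through dimmer than the baseline mark frequencies where the sample absorbed.

```python
# Schematic of how a frequency comb is used as a ruler, not the actual
# instrument parameters from the paper: teeth sit at f_n = n * f_rep,
# and absorption shows up as teeth whose detected height drops below baseline.
import numpy as np

F_REP = 250e6          # hypothetical tooth spacing (Hz), for illustration only
F_MIN, F_MAX = 0.15e12, 2.4e12

teeth = np.arange(np.ceil(F_MIN / F_REP), np.floor(F_MAX / F_REP) + 1) * F_REP
print(f"{teeth.size} comb teeth between {F_MIN/1e12} and {F_MAX/1e12} THz")

# Toy sample: a single Lorentzian absorption line at an arbitrary frequency.
line_center, line_width, depth = 1.16e12, 2e9, 0.8
transmission = 1 - depth * (line_width / 2)**2 / ((teeth - line_center)**2 + (line_width / 2)**2)

absorbed = teeth[transmission < 0.9]
print("teeth dimmed by the sample (THz):", np.round(absorbed / 1e12, 4))
```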

"Since we know exactly where each of the tick marks on our ruler is to about nine digits, we can use this as a diagnostic tool to get these frequencies really, really precisely," says Finneran. "When you look up in space, you want to make sure that you have such very exact measurements from the lab."

In addition to the astrochemical application of identifying molecules in space, the terahertz comb will also be useful for studying fundamental interactions between molecules. "The terahertz is unique in that it is really the only direct way to look not only at vibrations within individual large molecules that are important to life, but also at vibrations between different molecules that govern the behavior of liquids such as water," says Blake.

Additional coauthors on the paper, "Decade-Spanning High-Precision Terahertz Frequency Comb," include current Caltech graduate students Jacob Good, P. Brandon Carroll, and Marco Allodi, as well as recent graduate Daniel Holland (PhD '14). The work was supported by funding from the National Science Foundation.

Writer: 
Kimm Fesenmaier

More Money, Same Bankruptcy Risk

In general, our financial lives follow a pattern of spending and saving described by a time-honored model that economists call the life-cycle hypothesis. Most people begin their younger years strapped for cash, earning little money while also investing heavily in skills and education. As the years go by, career advances result in higher income, which can be used to pay off debts incurred early on and to save for retirement. The model assumes that people anticipate the final stage, when earnings drop and spending outpaces saving, and set aside enough along the way to cover it.

But how does the life-cycle hypothesis hold up when the income pattern is reversed—such as in the case of young, multimillionaire NFL players who earn large sums at first, but then experience drastic income reductions in retirement just a few years later? Not too well, a new Caltech study suggests.

The study, led by Colin Camerer, Robert Kirby Professor of Behavioral Economics, was published as a working paper on April 13 by the National Bureau of Economic Research.

"The life-cycle hypothesis in economics assumes people have perfect willpower and are realistic about how long their careers will last. Behavioral economics predicts something different, that even NFL players earning huge salaries will struggle to save enough," Camerer says.

"We wanted to test this theory with NFL players because there is a lot of tension between their income in the present, as a player, and their expected income in the future, after retirement. NFL players put the theory to a really extreme test," says graduate student Kyle Carlson, the first author of the study. "We suspected that NFL players' behavior might differ from the theory because they may be too focused on the present or overconfident about their career prospects. We had also seen many media reports of players struggling with their finances."

A professional football player's career is not like that of the average person. Rather than finding an entry-level job that pays a pittance when just out of college, a football player can earn millions of dollars—more than the average person makes in an entire lifetime—in just one season. However, the young athlete's lucrative career is also likely to be short-lived. After just a few years, most pro football players are out of the game with injuries and are forced into retirement and, usually, a much smaller income. And that is when the financial troubles often begin to surface.

The researchers decided to see how the life-cycle model would respond in such a feast-or-famine income situation. They entered the publicly available income data from NFL players into a simulation to predict how well players should fare in retirement, based on their income and the model. The simulations suggested that the players' initial earnings should support them through their entire retirement. In other words, these players should never go bankrupt.
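
The logic of that benchmark can be captured in a toy calculation, a simplified stand-in for the paper's simulation with made-up figures: a life-cycle consumer who spreads total lifetime resources evenly over the remaining years never runs savings below zero, which is why the model predicts essentially no bankruptcies.

```python
# Toy version of the life-cycle benchmark, not the paper's simulation: a
# consumer who spreads total lifetime resources evenly over remaining years
# never exhausts savings, so the model predicts essentially no bankruptcies.
# All dollar figures and horizons below are made up for illustration.

def life_cycle_path(income_by_year, years_of_life):
    """Consume an equal share of (current wealth + future income) each year."""
    wealth, path = 0.0, []
    for year in range(years_of_life):
        income = income_by_year[year] if year < len(income_by_year) else 0.0
        remaining_income = sum(income_by_year[year + 1:])
        remaining_years = years_of_life - year
        consume = (wealth + income + remaining_income) / remaining_years
        wealth += income - consume
        path.append((year, round(income, 1), round(consume, 1), round(wealth, 1)))
    return path

# A 4-year playing career earning $2M per year, then decades of retirement.
for year, income, consume, wealth in life_cycle_path([2.0e6] * 4, 50)[:6]:
    print(f"year {year}: income {income}, consumption {consume}, savings {wealth}")
```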

However, when the researchers looked at what actually happens, they found that approximately 2 percent of players have filed for bankruptcy within just two years of retirement, and more than 15 percent file within 12 years after retirement. "Two percent is not itself an enormous number. But the players look similar to regular people who are making way less money," Carlson says. "The players have the capacity to avoid bankruptcy by planning carefully, but many are not doing that."

Interestingly, Carlson and his colleagues also determined that a player's career earnings and time in the league had no effect on the risk of bankruptcy. That is, although a player who earned $20 million over a 10-year career should have substantially more money to support his retirement, he actually is just as likely to go bankrupt as someone who only earned $2 million in one year. Regardless of career length, the risk of bankruptcy was about the same. "It stands to reason that making more money should protect you from bankruptcy, but for these guys it doesn't," Carlson says.

The results of the study are clear: the life-cycle model does not seem to match up with the income spikes and dips of a career athlete. The cause of this disconnect between theory and reality, however, is less apparent, Carlson says.

"There are many reasons why the players may struggle to manage their high incomes," says Carlson. For example, the players, many of whom are drafted directly out of college, often do not have any experience in business or finance. Many come from economically disadvantaged backgrounds. In addition, players may be pressured to spend by other high-earning teammates.

This work raises questions for future research both for behavioral economists and for scholars of personal finance. Because football players, by nature, might be more willing to take risks than the average person, are they also more willing to make risky financial decisions? Are football players perhaps saving for retirement early in their careers, but later using bankruptcy as a tool to eliminate debt from their spending?

"Indeed it may well be that these high rates of bankruptcies are partly driven by the risk attitudes of football players and partly driven by regulatory practices that shield retirements assets from bankruptcy procedures," says Jean-Laurent Rosenthal, the Rea A. and Lela G. Axline Professor of Business Economics and chair of the Division of Humanities and Social Sciences, who also specializes in the field of behavioral economics.

"These results don't say why the players have a higher incidence of bankruptcy than the model would predict. We plan to investigate that in the future with additional modeling and data," Carlson says. "The one thing that we know right now is that there's something going on with these players that is different from what's in the model."

The study was published in a working paper titled "Bankruptcy Rates among NFL Players with Short-Lived Income Spikes." In addition to Carlson and Camerer, coauthors include Joshua Kim from the University of Washington and Annamaria Lusardi of the George Washington University. Camerer's work is supported by a grant from the MacArthur Foundation.

An Earthquake Warning System in Our Pockets?

Researchers Test Smartphones for Advance-Notice System

While you are checking your email, scrolling through social-media feeds, or just going about your daily life with your trusty smartphone in your pocket, the sensors in that little computer could also be contributing to an earthquake early warning system. So says a new study led by researchers at Caltech and the United States Geological Survey (USGS). The study suggests that all of our phones and other personal electronic devices could function as a distributed network, detecting any ground movements caused by a large earthquake, and, ultimately, giving people crucial seconds to prepare for a temblor.

"Crowd-sourced alerting means that the community will benefit by data generated by the community," said Sarah Minson (PhD '10), a USGS geophysicist and lead author of the study, which appears in the April 10 issue of the new journal Science Advances. Minson completed the work while a postdoctoral scholar at Caltech in the laboratory of Thomas Heaton, professor of engineering seismology.

Earthquake early warning (EEW) systems detect the start of an earthquake and rapidly transmit warnings to people and automated systems before they experience shaking at their location. While much of the world's population is susceptible to damaging earthquakes, EEW systems are currently operating in only a few regions around the globe, including Japan and Mexico. "Most of the world does not receive earthquake warnings mainly due to the cost of building the necessary scientific monitoring networks," says USGS geophysicist and project lead Benjamin Brooks.

Despite being less accurate than scientific-grade equipment, the GPS receivers in smartphones are sufficient to detect the permanent ground movement, or displacement, caused by fault motion in earthquakes that are approximately magnitude 7 and larger. And, of course, they are already widely distributed. Once displacements are detected by participating users' phones, the collected information could be analyzed quickly in order to produce customized earthquake alerts that would then be transmitted back to users.

"Thirty years ago it took months to assemble a crude picture of the deformations from an earthquake. This new technology promises to provide a near-instantaneous picture with much greater resolution," says Heaton, a coauthor of the new study.

In the study, the researchers tested the feasibility of crowd-sourced EEW with a simulation of a hypothetical magnitude 7 earthquake, and with real data from the 2011 magnitude 9 Tohoku-oki, Japan earthquake. The results show that crowd-sourced EEW could be achieved with only a tiny percentage of people in a given area contributing information from their smartphones. For example, if phones from fewer than 5,000 people in a large metropolitan area responded, the earthquake could be detected and analyzed fast enough to issue a warning to areas farther away before the onset of strong shaking.
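
A cartoon of such a trigger, not the detection algorithm used in the study, helps show why modest participation suffices: an alert requires many phones in the same area to report displacements well above GPS noise at about the same time, which random errors on individual phones almost never produce. The thresholds and phone counts below are invented for illustration.

```python
# Cartoon of a crowd-sourced trigger, not the algorithm in the study: declare
# a candidate event only when enough phones in one area report a GPS
# displacement well above the noise floor at about the same time.
import random

NOISE_FLOOR_M = 0.5      # hypothetical per-phone GPS noise threshold (meters)
MIN_REPORTS = 100        # hypothetical number of corroborating phones required

def candidate_event(displacements_m):
    """displacements_m: recent displacement estimates from phones in one area."""
    triggered = [d for d in displacements_m if d > NOISE_FLOOR_M]
    return len(triggered) >= MIN_REPORTS

# 4,000 phones: most see only GPS noise; a cluster near the fault sees ~1 m offsets.
random.seed(0)
reports = ([abs(random.gauss(0.0, 0.15)) for _ in range(3800)] +
           [abs(random.gauss(1.0, 0.2)) for _ in range(200)])
print("issue alert:", candidate_event(reports))
```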

The researchers note that the GPS receivers in smartphones and similar devices would not be sufficient to detect earthquakes smaller than magnitude 7, which could still be potentially damaging. However, smartphones also have microelectromechanical systems (MEMS) accelerometers that are capable of recording any earthquake motions large enough to be felt; this means that smartphones may be useful in earthquakes as small as magnitude 5. In a separate project, Caltech's Community Seismic Network Project has been developing the framework to record and utilize data from an inexpensive array of such MEMS accelerometers.

Comprehensive EEW requires a dense network of scientific instruments. Scientific-grade EEW, such as the USGS's ShakeAlert system that is currently being implemented on the west coast of the United States, will be able to help minimize the impact of earthquakes over a wide range of magnitudes. However, in many parts of the world where there are insufficient resources to build and maintain scientific networks but consumer electronics are increasingly common, crowd-sourced EEW has significant potential.

"The U.S. earthquake early warning system is being built on our high-quality scientific earthquake networks, but crowd-sourced approaches can augment our system and have real potential to make warnings possible in places that don't have high-quality networks," says Douglas Given, USGS coordinator of the ShakeAlert Earthquake Early Warning System. The U.S. Agency for International Development has already agreed to fund a pilot project, in collaboration with the Chilean Centro Sismólogico Nacional, to test a pilot hybrid earthquake warning system comprising stand-alone smartphone sensors and scientific-grade sensors along the Chilean coast.

"Crowd-sourced data are less precise, but for larger earthquakes that cause large shifts in the ground surface, they contain enough information to detect that an earthquake has occurred, information necessary for early warning," says study coauthor Susan Owen of JPL.

Additional coauthors on the paper, "Crowdsourced earthquake early warning," are from the USGS, Carnegie Mellon University–Silicon Valley, and the University of Houston. The work was supported in part by the Gordon and Betty Moore Foundation, the USGS Innovation Center for Earth Sciences, and the U.S. Department of Transportation Office of the Assistant Secretary for Research and Technology.

Writer: 
Kimm Fesenmaier

Explaining Saturn’s Great White Spots

Every 20 to 30 years, Saturn's atmosphere roils with giant, planet-encircling thunderstorms that produce intense lightning and enormous cloud disturbances. The head of one of these storms—popularly called "great white spots," in analogy to the Great Red Spot of Jupiter—can be as large as Earth. Unlike Jupiter's spot, which is calm at the center and has no lightning, the Saturn spots are active in the center and have long tails that eventually wrap around the planet.

Six such storms have been observed on Saturn over the past 140 years, alternating between the equator and midlatitudes, with the most recent emerging in December 2010 and encircling the planet within six months. The storms usually occur when Saturn's northern hemisphere is most tilted toward the sun. Just what triggers them and why they occur so infrequently, however, has been unclear.

Now, a new study by two Caltech planetary scientists suggests a possible cause for these storms. The study was published April 13 in the advance online issue of the journal Nature Geoscience.

Using numerical modeling, Professor of Planetary Science Andrew Ingersoll and his graduate student Cheng Li simulated the formation of the storms and found that they may be caused by the weight of the water molecules in the planet's atmosphere. Because these water molecules are heavy compared to the hydrogen and helium that comprise most of the gas-giant planet's atmosphere, they make the upper atmosphere lighter when they rain out, and that suppresses convection.

Over time, this leads to a cooling of the upper atmosphere. But that cooling eventually overrides the suppressed convection, and warm moist air rapidly rises and triggers a thunderstorm. "The upper atmosphere is so cold and so massive that it takes 20 to 30 years for this cooling to trigger another storm," says Ingersoll.

Ingersoll and Li found that this mechanism matches observations of the great white spot of 2010 taken by NASA's Cassini spacecraft, which has been observing Saturn and its moons since 2004.

The researchers also propose that the absence of planet-encircling storms on Jupiter could be explained if Jupiter's atmosphere contains less water vapor than Saturn's atmosphere. That is because saturated gas (gas that contains the maximum amount of moisture that it can hold at a particular temperature) in a hydrogen-helium atmosphere goes through a density minimum as it cools. That is, it first becomes less dense as the water precipitates out, and then it becomes more dense as cooling proceeds further. "Going through that minimum is key to suppressing the convection, but there has to be enough water vapor to start with," says Li.
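
That argument can be illustrated with a back-of-the-envelope calculation, using toy numbers rather than the model in the paper: at fixed pressure, the density of a saturated hydrogen-helium-water mixture is proportional to its mean molecular weight divided by temperature, so it passes through a minimum while cooling only if the water abundance is large enough.

```python
# Toy demonstration of the density-minimum argument with illustrative numbers,
# not the paper's model: at fixed pressure, a saturated hydrogen-helium-water
# mixture first gets LESS dense as it cools (heavy water rains out, lowering
# the mean molecular weight) and only later gets denser, provided there is
# enough water to begin with.
import math

P = 1.0e6                    # fixed pressure, Pa (~10 bar), illustrative
MU_DRY, MU_H2O = 2.3, 18.0   # g/mol, approximate

def e_sat(T):
    """Simple Clausius-Clapeyron saturation vapor pressure of water (Pa)."""
    return 611.0 * math.exp((2.5e6 / 461.5) * (1.0 / 273.15 - 1.0 / T))

def density(T, x_deep):
    """Relative density (proportional to mu/T) of the saturated mixture."""
    x_h2o = min(x_deep, e_sat(T) / P)          # water rains out once saturated
    mu = x_h2o * MU_H2O + (1.0 - x_h2o) * MU_DRY
    return mu / T

for label, x_deep in [("water-rich (Saturn-like)", 0.02), ("water-poor", 0.002)]:
    temps = range(330, 270, -5)                # cooling from 330 K down to 275 K
    rhos = [density(T, x_deep) for T in temps]
    has_minimum = any(rhos[i] < rhos[i - 1] for i in range(1, len(rhos)))
    print(f"{label}: density minimum while cooling? {has_minimum}")
```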

Ingersoll and Li note that observations by the Galileo spacecraft and the Hubble Space Telescope indicate that Saturn does indeed have enough water to go through this density minimum, whereas Jupiter does not. In November 2016, NASA's Juno spacecraft, now en route to Jupiter, will start measuring the water abundance on that planet. "That should help us understand not only the meteorology but also the planet's formation, since water is expected to be the third most abundant molecule after hydrogen and helium in a giant planet atmosphere," Ingersoll says.

The work in the paper, "Moist convection in hydrogen atmospheres and the frequency of Saturn's giant storms," was supported by the National Science Foundation and the Cassini Project of NASA.

Writer: 
Kathy Svitil