Tracking Photosynthesis from Space

Watching plants perform photosynthesis from space sounds like a futuristic proposal, but a new application of data from NASA's Orbiting Carbon Observatory-2 (OCO-2) satellite may enable scientists to do just that. The new technique, which allows researchers to analyze plant productivity from far above Earth, will provide a clearer picture of the global carbon cycle and may one day help researchers determine the best regional farming practices and even spot early signs of drought.

When plants are alive and healthy, they engage in photosynthesis, absorbing sunlight and carbon dioxide to produce food for the plant and generating oxygen as a by-product. But photosynthesis does more than keep plants alive. On a global scale, the process takes up some of the man-made emissions of atmospheric carbon dioxide—a greenhouse gas that traps the sun's heat near Earth's surface—meaning that plants also play an important role in mitigating climate change.

To perform photosynthesis, the chlorophyll in leaves absorbs sunlight—most of which is used to create food for the plants or is lost as heat. However, a small fraction of that absorbed light is reemitted as near-infrared light. We cannot see in the near-infrared portion of the spectrum with the naked eye, but if we could, this reemitted light would make the plants appear to glow—a phenomenon called solar-induced fluorescence (SIF). Because this reemitted light is produced only when the chlorophyll in plants is also absorbing sunlight for photosynthesis, SIF can be used as a way to determine a plant's photosynthetic activity and productivity.

"The intensity of the SIF appears to be very correlated with the total productivity of the plant," says JPL scientist Christian Frankenberg, who is lead for the SIF product and will join the Caltech faculty in September as an associate professor of environmental science and engineering in the Division of Geological and Planetary Sciences.

Usually, when researchers try to estimate photosynthetic activity from satellites, they utilize a measure called the greenness index, which uses reflections in the near-infrared spectrum of light to determine the amount of chlorophyll in the plant. However, this is not a direct measurement of plant productivity; a plant that contains chlorophyll is not necessarily undergoing photosynthesis. "For example," Frankenberg says, "evergreen trees are green in the winter even when they are dormant."

He adds, "When a plant starts to undergo stress situations, like in California during a summer day when it's getting very hot and dry, the plants still have chlorophyll"—chlorophyll that would still appear to be active in the greenness index—"but they usually close the tiny pores in their leaves to reduce water loss, and that time of stress is also when SIF is reduced. So photosynthesis is being very strongly reduced at the same time that the fluorescence signal is also getting weaker, albeit at a smaller rate."

The Caltech and JPL team, as well as colleagues from NASA Goddard, discovered that they could measure SIF from orbit using spectrometers—standard instruments that can detect light intensity—that are already on board satellites like Japan's Greenhouse Gases Observing Satellite (GOSAT) and NASA's OCO-2.

In 2014, using this new technique with data from GOSAT and the European Global Ozone Monitoring Experiment–2 satellite, the researchers scoured the globe for the most productive plants and determined that the U.S. "Corn Belt"—the farming region stretching from Ohio to Nebraska—is the most photosynthetically active place on the planet. Although it stands to reason that a cornfield during growing season would be actively undergoing photosynthesis, the high-resolution measurements from a satellite enabled global comparison to other plant-heavy regions—such as tropical rainforests.

"Before, when people used the greenness index to represent active photosynthesis, they had trouble determining the productivity of very dense plant areas, such as forests or cornfields. With enough green plant material in the field of view, these greenness indexes can saturate; they reach a maximum value they can't exceed," Frankenberg says. Because of the sensitivity of the SIF measurements, researchers can now compare the true productivity of fields from different regions without this saturation—information that could potentially be used to compare the efficiency of farming practices around the world.

Now that OCO-2 is online and producing data, Frankenberg says that it is capable of achieving higher resolution than the preliminary experiments with GOSAT. Therefore, OCO-2 will be able to provide an even clearer picture of plant productivity worldwide. However, to get more specific information about how plants influence the global carbon cycle, an evenly distributed ground-based network of spectrometers will be needed. Such a network—located down among the plants rather than miles above—will provide more information about regional uptake of carbon dioxide via photosynthesis and the mechanistic link between SIF and actual carbon exchange.

One existing network, called FLUXNET, uses ground-based towers at more than 600 locations worldwide to measure the exchange of carbon dioxide, or carbon flux, between the land and the atmosphere. However, the towers only measure the exchange of carbon dioxide and are unable to directly observe the activities of the biosphere that drive this exchange.

The new ground-based measurements will ideally take place at existing FLUXNET sites, but they will be performed with a small set of high-resolution spectrometers—similar to the kind that OCO-2 uses—to allow the researchers to use the same measurement principles they developed for space. The revamped ground network was initially proposed in a 2012 workshop at the Keck Institute for Space Studies and is expected to go online sometime in the next two years.

In the future, a clear picture of global plant productivity could influence a range of decisions relevant to farmers, commodity traders, and policymakers. "Right now, the SIF data we can gather from space is too coarse of a picture to be really helpful for these conversations, but, in principle, with the satellite and ground-based measurements you could track the fluorescence in fields at different times of day," Frankenberg says. This hourly tracking would not only allow researchers to detect the productivity of the plants, but it could also spot the first signs of plant stress—a factor that impacts crop prices and food security around the world.

"The measurements of SIF from OCO-2 greatly extend the science of this mission", says Paul Wennberg, R. Stanton Avery Professor of Atmospheric Chemistry and Environmental Science and Engineering, director of the Ronald and Maxine Linde Center for Global Environmental Science, and a member of the OCO-2 science team. "OCO-2 was designed to map carbon dioxide, and scientists plan to use these measurements to determine the underlying sources and sinks of this important gas. The new SIF measurements will allow us to diagnose the efficiency of the plants—a key component of the sinks of carbon dioxide."

By using OCO-2 to diagnose plant activity around the globe, this new research could also contribute to understanding the variability in crop primary productivity and, eventually, to the development of technologies that can improve crop efficiency—a goal that could greatly benefit humankind, Frankenberg says.

This project is funded by the Keck Institute for Space Studies and JPL. Wennberg is also an executive officer for the Environmental Science and Engineering (ESE) program. ESE is a joint program of the Division of Engineering and Applied Science, the Division of Chemistry and Chemical Engineering, and the Division of Geological and Planetary Sciences.

How an RNA Gene Silences a Whole Chromosome

Researchers at Caltech have discovered how an abundant class of RNA genes, called long non-coding RNAs (lncRNAs, pronounced "link RNAs"), can regulate key genes. By studying an important lncRNA, called Xist, the scientists identified how this RNA gathers a group of proteins and ultimately prevents female embryos from having a second active X chromosome—a condition that leads to death early in development. These findings mark the first time that researchers have uncovered the detailed mechanism of action for lncRNA genes.

"For years, we thought about genes as just DNA sequences that encode proteins, but those genes only make up about 1 percent of the genome. Mammalian genomes also encode many thousands of lncRNAs," says Assistant Professor of Biology Mitch Guttman, who led the study published online in the April 27 issue of the journal Nature. These lncRNAs such as Xist play a structural role, acting to scaffold—or bring together and organize—the key proteins involved in cellular and molecular processes, such as gene expression and stem cell differentiation.

Guttman, who helped to discover an entire class of lncRNAs as a graduate student at MIT in 2009, says that although most of these genes encoded in our genomes have only recently been appreciated, there are several specific examples of lncRNA genes that have been known for decades. One well-studied example is Xist, which is important for a process called X chromosome inactivation.

Females are born with two X chromosomes in every cell, one inherited from their mother and one from their father. Males, in contrast, carry only one X chromosome (along with a Y chromosome). However, like males, females need only one working copy of each X-chromosome gene—having two active copies is an abnormality that leads to death early in development. The genome skirts this problem by essentially "turning off" one X chromosome in every cell.

Previous research showed that Xist is essential to this process and does this by somehow preventing transcription, the initial step of the expression of genes on the X chromosome. However, because Xist is not a traditional protein-coding gene, until now researchers have had trouble figuring out exactly how Xist stops transcription and shuts down an entire chromosome.

"To start to make sense of what makes lncRNAs special and how they can control all of these different cellular processes, we need to be able to understand the mechanism of how any lncRNA gene can work. Because Xist is such an important molecule and because so much is known about what it does, it seemed like a great system to try to dissect the mechanisms of how it and other lncRNAs work," Guttman says.

lncRNAs are known to corral and organize the proteins that are necessary for cellular processes, so Guttman and his colleagues began their study of the function of Xist by first developing a technique to find out what proteins it naturally interacts with in the cell. With a new method, called RNA antisense purification with mass spectrometry (RAP-MS), the researchers extracted and purified Xist lncRNA molecules, as well as the proteins that directly interact with Xist, from mouse embryonic stem cells. Then, collaborators at the Proteome Exploration Laboratory at Caltech applied a technique called quantitative mass spectrometry to identify those interacting proteins.

"RNA usually only obeys one rule: binding to proteins. RAP-MS is like a molecular microscope into identifying RNA-protein interactions," says John Rinn, associate professor of stem cell and regenerative biology at Harvard University, who was not involved in the study. "RAP-MS will provide critically needed insights into how lncRNAs function to organize proteins and in turn regulate gene expression."

Applying this method to Xist uncovered 10 specific proteins that interact with the lncRNA. Of these, three—SAF-A (Scaffold attachment factor-A), LBR (Lamin B Receptor), and SHARP (SMRT and HDAC associated repressor protein)—are essential for X chromosome inactivation. "Before this experiment," Guttman says, "no one knew a single protein that was required by Xist for silencing transcription on the X chromosome, but with this method we immediately found three that are essential. If you lose any one of them, Xist doesn't work—it will not silence the X chromosome during development."

The new findings provide the first detailed picture of how lncRNAs work within a cellular process. Through further analysis, the researchers found that these three proteins performed three distinct, but essential, roles. SAF-A helps to tether Xist and all of its hitchhiking proteins to the DNA of the X chromosome, at which point LBR remodels the chromosome so that it is less likely to be expressed. The actual "silencing," Guttman and his colleagues discovered, is done by the third protein of the trio: SHARP.

To produce functional proteins from the DNA (genes) of a chromosome, the genes must first be transcribed into RNA by an enzyme called RNA polymerase II. Guttman and his team found that SHARP leads to the exclusion of polymerase from the DNA, thus preventing transcription and gene expression.

This information soon may have clinical applications. The Xist lncRNA silences the X chromosome simply because it is located on the X chromosome. However, previous studies have demonstrated that this RNA and its silencing machinery can be used to inactivate other chromosomes—for example, the third copy of chromosome 21 that is present in individuals with Down syndrome.

"We are starting to pick apart how lncRNAs work. We now know, for example, how Xist localizes to sites on X, how it silences transcription, and how it can change DNA structure," Guttman says. "One of the things that is really exciting for me is that we can potentially leverage the principles used by lncRNAs, move them around in the genome, and use them as therapeutic agents to target specific defective pathways in disease."

"But I think the real reason why this is so important for our field and even beyond is because this is a different type of regulation than we've seen before in the cell—it is a vast world that we previously knew nothing about," he adds.

This work was published in a recent paper titled: "The Xist lncRNA interacts directly with SHARP to silence transcription through HDAC3." The co-first authors of the paper are Caltech postdoctoral scholar Colleen A. McHugh and graduate student Chun-Kan Chen. Other coauthors from Caltech are Amy Chow, Christine F. Surka, Christina Tran, Mario Blanco, Christina Burghard, Annie Moradian, Alexander A. Shishkin, Julia Su, Michael J. Sweredoski, and Sonja Hess from the Proteome Exploration Laboratory. Additional authors include Amy Pandya-Jones and Kathrin Plath from UCLA and Patrick McDonel from MIT.

The study was supported by funding from the Gordon and Betty Moore Foundation, the Beckman Institute, the National Institutes of Health, the Rose Hills Foundation, the Edward Mallinckrodt Foundation, the Sontag Foundation, and the Searle Scholars Program.

Weighing—and Imaging—Molecules One at a Time

Building on their creation of the first-ever mechanical device that can measure the mass of individual molecules, one at a time, a team of Caltech scientists and their colleagues has created nanodevices that can also reveal the shape of those molecules. Such information is crucial when trying to identify large protein molecules or complex assemblies of protein molecules.

"You can imagine that with large protein complexes made from many different, smaller subunits there are many ways for them to be assembled. These can end up having quite similar masses while actually being different species with different biological functions. This is especially true with enzymes, proteins that mediate chemical reactions in the body, and membrane proteins that control a cell's interactions with its environment," explains Michael Roukes, the Robert M. Abbey Professor of Physics, Applied Physics, and Bioengineering at Caltech and the co-corresponding author of a paper describing the technology that appeared March 30 in the online issue of the journal Nature Nanotechnology.

One foundation of the genomics revolution has been the ability to replicate DNA or RNA molecules en masse using the polymerase chain reaction to create the many millions of copies necessary for typical sequencing and analysis. However, the same mass-production technology does not work for copying proteins. Right now, if you want to properly identify a particular protein, you need a lot of it—typically millions of copies of just the protein of interest, with very few other extraneous proteins as contaminants. The average mass of this molecular population is then evaluated with a technique called mass spectrometry, in which the molecules are ionized—so that they attain an electrical charge—and then allowed to interact with an electromagnetic field. By analyzing this interaction, scientists can deduce the molecular mass-to-charge ratio.

But mass spectrometry often cannot discriminate subtle but crucial differences in molecules having similar mass-to-charge ratios. "With mass spectrometry today," explains Roukes, "large molecules and molecular complexes are first chopped up into many smaller pieces, that is, into smaller molecule fragments that existing instruments can handle. These different fragments are separately analyzed, and then bioinformatics—involving computer simulations—are used to piece the puzzle back together. But this reassembly process can be thwarted if pieces of different complexes are mixed up together."

With their devices, Roukes and his colleagues can measure the mass of an individual intact molecule. Each device—which is only a couple millionths of a meter in size or smaller—consists of a vibrating structure called a nanoelectromechanical system (NEMS) resonator. When a particle or molecule lands on the nanodevice, the added mass changes the frequency at which the structure vibrates, much like putting drops of solder on a guitar string would change the frequency of its vibration and resultant tone. The induced shifts in frequency provide information about the mass of the particle. But they also, as described in the new paper, can be used to determine the three-dimensional spatial distribution of the mass: i.e., the particle's shape.

"A guitar string doesn't just vibrate at one frequency," Roukes says. "There are harmonics of its fundamental tone, or so-called vibrational modes. What distinguishes a violin string from a guitar string is really the different admixtures of these different harmonics of the fundamental tone. The same applies here. We have a whole bunch of different tones that can be excited simultaneously on each of our nanodevices, and we track many different tones in real time. It turns out that when the molecule lands in different orientations, those harmonics are shifted differently. We can then use the inertial imaging theory that we have developed to reconstruct an image in space of the shape of the molecule."

"The new technique uncovers a previously unrealized capability of mechanical sensors," says Professor Mehmet Selim Hanay of Bilkent University in Ankara, Turkey, a former postdoctoral researcher in the Roukes lab and co-first author of the paper. "Previously we've identified molecules, such as the antibody IgM, based solely on their molecular weights. Now, by enabling both the molecular weight and shape information to be deduced for the same molecule simultaneously, the new technique can greatly enhance the identification process, and this is of significance both for basic research and the pharmaceutical industry." 

Currently, molecular structures are deciphered using X-ray crystallography, an often laborious technique that involves isolating, purifying, and then crystallizing molecules, and then evaluating their shape based on the diffraction patterns produced when X-rays interact with the atoms that together form the crystals. However, many complex biological molecules are difficult if not impossible to crystallize. And, even when they can be crystallized, the molecular structure obtained represents the molecule in the crystalline state, which can be very different from the structure of the molecule in its biologically active form.

"You can imagine situations where you don't know exactly what you are looking for—where you are in discovery mode, and you are trying to figure out the body's immune response to a particular pathogen, for example," Roukes says. In these cases, the ability to carry out single-molecule detection and to get as many separate bits of information as possible about that individual molecule greatly improves the odds of making a unique identification.

"We say that cancer begins often with a single aberrant cell, and what that means is that even though it might be one of a multiplicity of similar cells, there is something unique about the molecular composition of that one cell. With this technique, we potentially have a new tool to figure out what is unique about it," he adds.

So far, the new technique has been validated using particles of known sizes and shapes, such as polymer nanodroplets. Roukes and colleagues show that with today's state-of-the-art nanodevices, the approach can provide molecular-scale resolution—that is, provide the ability to see the molecular subcomponents of individual, intact protein assemblies. The group's current efforts are now focused on such explorations.

Scott Kelber, a former graduate student in the Roukes lab, is the other co-first author of the paper, titled "Inertial imaging with nanoelectromechanical systems." Professor John Sader of the University of Melbourne, Australia, and a visiting associate in physics at Caltech, is the co-corresponding author. Additional coauthors are Cathal D. O'Connell and Paul Mulvaney of the University of Melbourne. The work was funded by a National Institutes of Health Director's Pioneer award, a Caltech Kavli Nanoscience Institute Distinguished Visiting Professorship, the Fondation pour la Recherche et l'Enseignement Superieur in Paris, and the Australian Research Council grants scheme.

Chemists Create “Comb” that Detects Terahertz Waves with Extreme Precision

Light can come in many frequencies, only a small fraction of which can be seen by humans. Between the invisible low-frequency radio waves used by cell phones and the high frequencies associated with infrared light lies a fairly wide swath of the electromagnetic spectrum occupied by what are called terahertz, or sometimes submillimeter, waves. Exploitation of these waves could lead to many new applications in fields ranging from medical imaging to astronomy, but terahertz waves have proven tricky to produce and study in the laboratory. Now, Caltech chemists have created a device that generates and detects terahertz waves over a wide spectral range with extreme precision, allowing it to be used as an unparalleled tool for measuring terahertz waves.

The new device is an example of what is known as a frequency comb, which uses ultrafast pulsed lasers, or oscillators, to produce thousands of unique frequencies of radiation distributed evenly across a spectrum like the teeth of a comb. Scientists can then use them like rulers, lining up the teeth like tick marks to very precisely measure light frequencies. The first frequency combs, developed in the 1990s, earned their creators (John Hall of JILA and Theodor Hänsch of the Max Planck Institute of Quantum Optics and Ludwig Maximilians University Munich) the 2005 Nobel Prize in Physics. These combs, which originated in the visible part of the spectrum, have revolutionized how scientists measure light, leading, for example, to the development of today's most accurate timekeepers, known as optical atomic clocks.

The team at Caltech combined commercially available lasers and optics with custom-built electronics to extend this technology to the terahertz, creating a terahertz frequency comb with an unprecedented combination of spectral coverage and precision. Its thousands of "teeth" are evenly spaced across the majority of the terahertz region of the spectrum (0.15-2.4 THz), giving scientists a way to simultaneously measure absorption in a sample at all of those frequencies.
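The "teeth" themselves are just evenly spaced frequencies, as in the short sketch below; the 250 MHz spacing is a placeholder chosen to give a few thousand teeth across the quoted 0.15-2.4 THz range, not the instrument's actual spacing.

```python
import numpy as np

# Minimal sketch of the "comb" idea: evenly spaced tooth frequencies
#   f_n = f_start + n * f_spacing
# covering the 0.15-2.4 THz range quoted in the article. The 250 MHz spacing
# below is a placeholder, not the instrument's actual tooth spacing.

f_start = 0.15e12        # Hz
f_stop = 2.4e12          # Hz
f_spacing = 250e6        # Hz (hypothetical)

teeth = np.arange(f_start, f_stop + f_spacing, f_spacing)
print(f"{teeth.size} teeth from {teeth[0]/1e12:.2f} to {teeth[-1]/1e12:.2f} THz")
# Each tooth acts like a tick mark on a ruler: a molecular absorption line can
# be located by which teeth it attenuates and by how much.
```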

The work is described in a paper that appears in the online version of the journal Physical Review Letters and will be published in the April 24 issue. The lead author is graduate student and National Science Foundation fellow Ian Finneran, who works in the lab of Geoffrey A. Blake, professor of cosmochemistry and planetary sciences and professor of chemistry at Caltech.

Blake explains the utility of the new device, contrasting it with a common radio tuner. "With radio waves, most tuners let you zero in on and listen to just one station, or frequency, at a time," he says. "Here, in our terahertz approach, we can separate and process more than 10,000 frequencies all at once. In the near future, we hope to bump that number up to more than 100,000."

That is important because the terahertz region of the spectrum is chock-full of information. Everything in the universe that is warmer than about 10 kelvins (-263 degrees Celsius) gives off terahertz radiation. Even at these very low temperatures, molecules can rotate in space, yielding unique fingerprints in the terahertz. Astronomers using telescopes such as Caltech's Submillimeter Observatory, the Atacama Large Millimeter Array, and the Herschel Space Observatory are searching stellar nurseries and planet-forming disks at terahertz frequencies, looking for such chemical fingerprints to try to determine the kinds of molecules that are present and thus available to planetary systems. But in just a single chunk of the sky, it would not be unusual to find signatures of 25 or more different molecules.

To be able to definitively identify specific molecules within such a tangle of terahertz signals, scientists first need to determine exact measurements of the chemical fingerprints associated with various molecules. This requires a precise source of terahertz waves, in addition to a sensitive detector, and the terahertz frequency comb is ideal for making such measurements in the lab.

"When we look up into space with terahertz light, we basically see this forest of lines related to the tumbling motions of various molecules," says Finneran. "Unraveling and understanding these lines is difficult, as you must trek across that forest one point and one molecule at a time in the lab. It can take weeks, and you would have to use many different instruments. What we've developed, this terahertz comb, is a way to analyze the entire forest all at once."

After the device generates its tens of thousands of evenly spaced frequencies, the waves travel through a sample—in the paper, the researchers provide the example of water vapor. The instrument then measures what light passes through the sample and what gets absorbed by molecules at each tooth along the comb. If a detected tooth gets shorter, the sample absorbed that particular terahertz wave; if it comes through at the baseline height, the sample did not absorb at that frequency.
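Conceptually, the readout amounts to comparing each tooth's height with and without the sample in the beam, as in this toy example; the line positions and absorption depths below are invented for illustration.

```python
import numpy as np

# Toy readout (illustration only): compare the detected height of each comb
# tooth with and without the sample in the beam. Teeth that come back shorter
# were absorbed; the fractional drop gives the transmittance at that frequency.

rng = np.random.default_rng(0)
n_teeth = 9001
baseline = np.ones(n_teeth)                       # tooth heights, empty cell
absorbed = baseline.copy()
line_centers = rng.choice(n_teeth, size=25, replace=False)  # hypothetical lines
absorbed[line_centers] *= 0.4                     # 60% absorption at those teeth

transmittance = absorbed / baseline
hits = np.flatnonzero(transmittance < 0.9)        # teeth that were attenuated
print(f"absorption detected at {hits.size} of {n_teeth} teeth")
```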

"Since we know exactly where each of the tick marks on our ruler is to about nine digits, we can use this as a diagnostic tool to get these frequencies really, really precisely," says Finneran. "When you look up in space, you want to make sure that you have such very exact measurements from the lab."

In addition to the astrochemical application of identifying molecules in space, the terahertz comb will also be useful for studying fundamental interactions between molecules. "The terahertz is unique in that it is really the only direct way to look not only at vibrations within individual large molecules that are important to life, but also at vibrations between different molecules that govern the behavior of liquids such as water," says Blake.

Additional coauthors on the paper, "Decade-Spanning High-Precision Terahertz Frequency Comb," include current Caltech graduate students Jacob Good, P. Brandon Carroll, and Marco Allodi, as well as recent graduate Daniel Holland (PhD '14). The work was supported by funding from the National Science Foundation.

Writer: Kimm Fesenmaier

More Money, Same Bankruptcy Risk

In general, our financial lives follow a pattern of spending and saving described by a time-honored model that economists call the life-cycle hypothesis. Most people begin their younger years strapped for cash, earning little money while also investing heavily in skills and education. As the years go by, career advances result in higher income, which can be used to pay off debts incurred early on and to save for retirement. Later in life, earnings drop, and people draw down those savings to support their spending.

But how does the life-cycle hypothesis hold up when the income pattern is reversed—such as in the case of young, multimillionaire NFL players who earn large sums at first, but then experience drastic income reductions in retirement just a few years later? Not too well, a new Caltech study suggests.

The study, led by Colin Camerer, Robert Kirby Professor of Behavioral Economics, was published as a working paper on April 13 by the National Bureau of Economic Research.

"The life-cycle hypothesis in economics assumes people have perfect willpower and are realistic about how long their careers will last. Behavioral economics predicts something different, that even NFL players earning huge salaries will struggle to save enough," Camerer says.

"We wanted to test this theory with NFL players because there is a lot of tension between their income in the present, as a player, and their expected income in the future, after retirement. NFL players put the theory to a really extreme test," says graduate student Kyle Carlson, the first author of the study. "We suspected that NFL players' behavior might differ from the theory because they may be too focused on the present or overconfident about their career prospects. We had also seen many media reports of players struggling with their finances."

A professional football player's career is not like that of the average person. Rather than finding an entry-level job that pays a pittance when just out of college, a football player can earn millions of dollars—more than the average person makes in an entire lifetime—in just one season. However, the young athlete's lucrative career is also likely to be short-lived. After just a few years, most pro football players are out of the game with injuries and are forced into retirement and, usually, a much smaller income. And that is when the financial troubles often begin to surface.

The researchers decided to see how the life-cycle model would respond in such a feast-or-famine income situation. They entered the publicly available income data from NFL players into a simulation to predict how well players should fare in retirement, based on their income and the model. The simulations suggested that the players' initial earnings should support them through their entire retirement. In other words, these players should never go bankrupt.
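The logic of that prediction can be captured in a few lines. The sketch below is not the authors' simulation—it is a textbook perfect-foresight, zero-interest version of the life-cycle model with hypothetical numbers—but it shows why smoothing a short income spike over a lifetime leaves savings that never run out.

```python
# Minimal life-cycle sketch (not the paper's model): a perfect-foresight earner
# with a short income spike spreads lifetime earnings evenly over all remaining
# years, so savings never run out. Numbers below are hypothetical.

career_income = [2.0e6] * 5            # five NFL seasons at $2M/yr (hypothetical)
retirement_years = 45                  # years of life after the career ends
total_years = len(career_income) + retirement_years

lifetime_earnings = sum(career_income)
consumption = lifetime_earnings / total_years   # smoothed annual spending

wealth, min_wealth = 0.0, 0.0
for year in range(total_years):
    income = career_income[year] if year < len(career_income) else 0.0
    wealth += income - consumption
    min_wealth = min(min_wealth, wealth)

print(f"smoothed consumption: ${consumption:,.0f}/yr, lowest wealth: ${min_wealth:,.0f}")
# Under this idealized rule wealth never goes negative -- i.e., the model
# predicts no bankruptcies, which is what the observed filings contradict.
```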

However, when the researchers looked at what actually happens, they found that approximately 2 percent of players file for bankruptcy within just two years of retirement, and more than 15 percent file within 12 years. "Two percent is not itself an enormous number. But the players look similar to regular people who are making way less money," Carlson says. "The players have the capacity to avoid bankruptcy by planning carefully, but many are not doing that."

Interestingly, Carlson and his colleagues also determined that a player's career earnings and time in the league had no effect on the risk of bankruptcy. That is, although a player who earned $20 million over a 10-year career should have substantially more money to support his retirement, he actually is just as likely to go bankrupt as someone who only earned $2 million in one year. Regardless of career length, the risk of bankruptcy was about the same. "It stands to reason that making more money should protect you from bankruptcy, but for these guys it doesn't," Carlson says.

The results of the study are clear: the life-cycle model does not seem to match up with the income spikes and dips of a career athlete. The cause of this disconnect between theory and reality, however, is less apparent, Carlson says.

"There are many reasons why the players may struggle to manage their high incomes," says Carlson. For example, the players, many of whom are drafted directly out of college, often do not have any experience in business or finance. Many come from economically disadvantaged backgrounds. In addition, players may be pressured to spend by other high-earning teammates.

This work raises questions for future research both for behavioral economists and for scholars of personal finance. Because football players, by nature, might be more willing to take risks than the average person, are they also more willing to make risky financial decisions? Are football players perhaps saving for retirement early in their careers, but later using bankruptcy as a tool to eliminate debt from their spending?

"Indeed it may well be that these high rates of bankruptcies are partly driven by the risk attitudes of football players and partly driven by regulatory practices that shield retirements assets from bankruptcy procedures," says Jean-Laurent Rosenthal, the Rea A. and Lela G. Axline Professor of Business Economics and chair of the Division of Humanities and Social Sciences, who also specializes in the field of behavioral economics.

"These results don't say why the players have a higher incidence of bankruptcy than the model would predict. We plan to investigate that in the future with additional modeling and data," Carlson says. "The one thing that we know right now is that there's something going on with these players that is different from what's in the model."

The study was published in a working paper titled, "Bankruptcy Rates among NFL Players with Short-Lived Income Spikes." In addition to Carlson and Camerer, additional coauthors include Joshua Kim from the University of Washington and Annamaria Lusardi of the George Washington University. Camerer's work is supported by a grant from the MacArthur Foundation.

An Earthquake Warning System in Our Pockets?

Researchers Test Smartphones for Advance-Notice System

While you are checking your email, scrolling through social-media feeds, or just going about your daily life with your trusty smartphone in your pocket, the sensors in that little computer could also be contributing to an earthquake early warning system. So says a new study led by researchers at Caltech and the United States Geological Survey (USGS). The study suggests that all of our phones and other personal electronic devices could function as a distributed network, detecting any ground movements caused by a large earthquake, and, ultimately, giving people crucial seconds to prepare for a temblor.

"Crowd-sourced alerting means that the community will benefit by data generated by the community," said Sarah Minson (PhD '10), a USGS geophysicist and lead author of the study, which appears in the April 10 issue of the new journal Science Advances. Minson completed the work while a postdoctoral scholar at Caltech in the laboratory of Thomas Heaton, professor of engineering seismology.

Earthquake early warning (EEW) systems detect the start of an earthquake and rapidly transmit warnings to people and automated systems before they experience shaking at their location. While much of the world's population is susceptible to damaging earthquakes, EEW systems are currently operating in only a few regions around the globe, including Japan and Mexico. "Most of the world does not receive earthquake warnings mainly due to the cost of building the necessary scientific monitoring networks," says USGS geophysicist and project lead Benjamin Brooks.

Despite being less accurate than scientific-grade equipment, the GPS receivers in smartphones are sufficient to detect the permanent ground movement, or displacement, caused by fault motion in earthquakes that are approximately magnitude 7 and larger. And, of course, they are already widely distributed. Once displacements are detected by participating users' phones, the collected information could be analyzed quickly in order to produce customized earthquake alerts that would then be transmitted back to users.
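One way to picture such a crowd-sourced trigger—not the study's actual algorithm, and with made-up thresholds—is to require that many phones in the same area report a ground displacement above the GPS noise level at about the same time.

```python
# Hedged sketch of a crowd-sourced trigger (not the study's actual algorithm):
# declare a detection when enough phones in one area report a permanent GPS
# displacement above a noise threshold at about the same time.

from collections import defaultdict

DISPLACEMENT_THRESHOLD_M = 0.10   # hypothetical noise floor for phone GPS
MIN_PHONES = 100                  # hypothetical minimum corroborating reports

def detect(reports):
    """reports: iterable of (area_id, displacement_m) tuples from phones."""
    counts = defaultdict(int)
    for area_id, displacement in reports:
        if displacement > DISPLACEMENT_THRESHOLD_M:
            counts[area_id] += 1
    return [area for area, n in counts.items() if n >= MIN_PHONES]

# Example: 150 phones near the epicenter see ~0.5 m offsets; a distant area sees only noise.
reports = [("coastal_city", 0.5)] * 150 + [("inland_town", 0.02)] * 500
print(detect(reports))   # -> ['coastal_city']
```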

"Thirty years ago it took months to assemble a crude picture of the deformations from an earthquake. This new technology promises to provide a near-instantaneous picture with much greater resolution," says Heaton, a coauthor of the new study.

In the study, the researchers tested the feasibility of crowd-sourced EEW with a simulation of a hypothetical magnitude 7 earthquake, and with real data from the 2011 magnitude 9 Tohoku-oki, Japan earthquake. The results show that crowd-sourced EEW could be achieved with only a tiny percentage of people in a given area contributing information from their smartphones. For example, if phones from fewer than 5,000 people in a large metropolitan area responded, the earthquake could be detected and analyzed fast enough to issue a warning to areas farther away before the onset of strong shaking.

The researchers note that the GPS receivers in smartphones and similar devices would not be sufficient to detect earthquakes smaller than magnitude 7, which could still be potentially damaging. However, smartphones also have microelectromechanical systems (MEMS) accelerometers that are capable of recording any earthquake motions large enough to be felt; this means that smartphones may be useful in earthquakes as small as magnitude 5. In a separate project, Caltech's Community Seismic Network Project has been developing the framework to record and utilize data from an inexpensive array of such MEMS accelerometers.

Comprehensive EEW requires a dense network of scientific instruments. Scientific-grade EEW, such as the USGS's ShakeAlert system that is currently being implemented on the west coast of the United States, will be able to help minimize the impact of earthquakes over a wide range of magnitudes. However, in many parts of the world where there are insufficient resources to build and maintain scientific networks but consumer electronics are increasingly common, crowd-sourced EEW has significant potential.

"The U.S. earthquake early warning system is being built on our high-quality scientific earthquake networks, but crowd-sourced approaches can augment our system and have real potential to make warnings possible in places that don't have high-quality networks," says Douglas Given, USGS coordinator of the ShakeAlert Earthquake Early Warning System. The U.S. Agency for International Development has already agreed to fund a pilot project, in collaboration with the Chilean Centro Sismólogico Nacional, to test a pilot hybrid earthquake warning system comprising stand-alone smartphone sensors and scientific-grade sensors along the Chilean coast.

"Crowd-sourced data are less precise, but for larger earthquakes that cause large shifts in the ground surface, they contain enough information to detect that an earthquake has occurred, information necessary for early warning," says study coauthor Susan Owen of JPL.

Additional coauthors on the paper, "Crowdsourced earthquake early warning," are from the USGS, Carnegie Mellon University–Silicon Valley, and the University of Houston. The work was supported in part by the Gordon and Betty Moore Foundation, the USGS Innovation Center for Earth Sciences, and the U.S. Department of Transportation Office of the Assistant Secretary for Research and Technology.

Writer: Kimm Fesenmaier

Explaining Saturn’s Great White Spots

Every 20 to 30 years, Saturn's atmosphere roils with giant, planet-encircling thunderstorms that produce intense lightning and enormous cloud disturbances. The head of one of these storms—popularly called "great white spots," in analogy to the Great Red Spot of Jupiter—can be as large as Earth. Unlike Jupiter's spot, which is calm at the center and has no lightning, the Saturn spots are active in the center and have long tails that eventually wrap around the planet.

Six such storms have been observed on Saturn over the past 140 years, alternating between the equator and midlatitudes, with the most recent emerging in December 2010 and encircling the planet within six months. The storms usually occur when Saturn's northern hemisphere is most tilted toward the sun. Just what triggers them and why they occur so infrequently, however, has been unclear.

Now, a new study by two Caltech planetary scientists suggests a possible cause for these storms. The study was published April 13 in the advance online issue of the journal Nature Geoscience.

Using numerical modeling, Professor of Planetary Science Andrew Ingersoll and his graduate student Cheng Li simulated the formation of the storms and found that they may be caused by the weight of the water molecules in the planet's atmosphere. Because these water molecules are heavy compared to the hydrogen and helium that comprise most of the gas-giant planet's atmosphere, they make the upper atmosphere lighter when they rain out, and that suppresses convection.

Over time, this leads to a cooling of the upper atmosphere. But that cooling eventually overrides the suppressed convection, and warm moist air rapidly rises and triggers a thunderstorm. "The upper atmosphere is so cold and so massive that it takes 20 to 30 years for this cooling to trigger another storm," says Ingersoll.

Ingersoll and Li found that this mechanism matches observations of the great white spot of 2010 taken by NASA's Cassini spacecraft, which has been observing Saturn and its moons since 2004.

The researchers also propose that the absence of planet-encircling storms on Jupiter could be explained if Jupiter's atmosphere contains less water vapor than Saturn's atmosphere. That is because saturated gas (gas that contains the maximum amount of moisture that it can hold at a particular temperature) in a hydrogen-helium atmosphere goes through a density minimum as it cools. That is, it first becomes less dense as the water precipitates out, and then it becomes more dense as cooling proceeds further. "Going through that minimum is key to suppressing the convection, but there has to be enough water vapor to start with," says Li.
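A back-of-the-envelope calculation makes the density minimum concrete. The sketch below is not the authors' model: it simply evaluates the density of a saturated hydrogen-helium parcel at a fixed, hypothetical pressure level as it cools, using a standard Clausius-Clapeyron estimate for water's vapor pressure.

```python
import numpy as np

# Back-of-the-envelope sketch of the density-minimum argument (rough values,
# not the paper's model): at a fixed pressure level, a saturated
# hydrogen-helium parcel carrying water vapor has density rho = P*mu/(R*T).
# As it cools, water condenses and rains out, so the mean molar mass mu drops
# (making the parcel lighter) until so little vapor remains that the 1/T
# factor takes over and the parcel grows denser again.

R = 8.314            # J/(mol K)
P = 1.0e6            # Pa, a hypothetical deep-atmosphere pressure level
MU_DRY = 2.3e-3      # kg/mol, approximate H2/He mixture
MU_H2O = 18.0e-3     # kg/mol

def saturation_pressure(T):
    """Clausius-Clapeyron estimate of water vapor pressure (Pa)."""
    L_vap, R_v, e0, T0 = 2.5e6, 461.5, 611.0, 273.15
    return e0 * np.exp(-(L_vap / R_v) * (1.0 / T - 1.0 / T0))

T = np.linspace(200.0, 400.0, 201)
x_water = np.minimum(saturation_pressure(T) / P, 0.5)   # saturated mole fraction
mu = x_water * MU_H2O + (1.0 - x_water) * MU_DRY
rho = P * mu / (R * T)

T_min = T[np.argmin(rho)]
print(f"density reaches a minimum near {T_min:.0f} K at this pressure level")
```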

Ingersoll and Li note that observations by the Galileo spacecraft and the Hubble Space Telescope indicate that Saturn does indeed have enough water to go through this density minimum, whereas Jupiter does not. In November 2016, NASA's Juno spacecraft, now en route to Jupiter, will start measuring the water abundance on that planet. "That should help us understand not only the meteorology but also the planet's formation, since water is expected to be the third most abundant molecule after hydrogen and helium in a giant planet atmosphere," Ingersoll says.

The work in the paper, "Moist convection in hydrogen atmospheres and the frequency of Saturn's giant storms," was supported by the National Science Foundation and the Cassini Project of NASA.

Writer: Kathy Svitil

Microbes Help Produce Serotonin in Gut

Although serotonin is well known as a brain neurotransmitter, it is estimated that 90 percent of the body's serotonin is made in the digestive tract. In fact, altered levels of this peripheral serotonin have been linked to diseases such as irritable bowel syndrome, cardiovascular disease, and osteoporosis. New research at Caltech, published in the April 9 issue of the journal Cell, shows that certain bacteria in the gut are important for the production of peripheral serotonin.

"More and more studies are showing that mice or other model organisms with changes in their gut microbes exhibit altered behaviors," explains Elaine Hsiao, research assistant professor of biology and biological engineering and senior author of the study. "We are interested in how microbes communicate with the nervous system. To start, we explored the idea that normal gut microbes could influence levels of neurotransmitters in their hosts."

Peripheral serotonin is produced in the digestive tract by enterochromaffin (EC) cells and also by particular types of immune cells and neurons. Hsiao and her colleagues first wanted to know if gut microbes have any effect on serotonin production in the gut and, if so, in which types of cells. They began by measuring peripheral serotonin levels in mice with normal populations of gut bacteria and also in germ-free mice that lack these resident microbes.

The researchers found that the EC cells from germ-free mice produced approximately 60 percent less serotonin than did their peers with conventional bacterial colonies. When these germ-free mice were recolonized with normal gut microbes, the serotonin levels went back up—showing that the deficit in serotonin can be reversed.

"EC cells are rich sources of serotonin in the gut. What we saw in this experiment is that they appear to depend on microbes to make serotonin—or at least a large portion of it," says Jessica Yano, first author on the paper and a research technician working with Hsiao.

The researchers next wanted to find out whether specific species of bacteria, out of the diverse pool of microbes that inhabit the gut, are interacting with EC cells to make serotonin.

After testing several different single species and groups of known gut microbes, Yano, Hsiao, and colleagues observed that one condition—the presence of a group of approximately 20 species of spore-forming bacteria—elevated serotonin levels in germ-free mice. The mice treated with this group also showed an increase in gastrointestinal motility compared to their germ-free counterparts, and changes in the activation of blood platelets, which are known to use serotonin to promote clotting.

Wanting to home in on mechanisms that could be involved in this interesting collaboration between microbe and host, the researchers began looking for molecules that might be key. They identified several particular metabolites—products of the microbes' metabolism—that were regulated by spore-forming bacteria and that elevated serotonin from EC cells in culture. Furthermore, increasing these metabolites in germ-free mice increased their serotonin levels.

Previous work in the field indicated that some bacteria can make serotonin all by themselves. However, this new study suggests that much of the body's serotonin relies on particular bacteria that interact with the host to produce serotonin, says Yano. "Our work demonstrates that microbes normally present in the gut stimulate host intestinal cells to produce serotonin," she explains.

"While the connections between the microbiome and the immune and metabolic systems are well appreciated, research into the role gut microbes play in shaping the nervous system is an exciting frontier in the biological sciences," says Sarkis K. Mazmanian, Luis B. and Nelly Soux Professor of Microbiology and a coauthor on the study. "This work elegantly extends previous seminal research from Caltech in this emerging field".

Additional coauthor Rustem Ismagilov, the Ethel Wilson Bowles and Robert Bowles Professor of Chemistry and Chemical Engineering, adds, "This work illustrates both the richness of chemical interactions between the hosts and their microbial communities, and Dr. Hsiao's scientific breadth and acumen in leading this work."

Serotonin is important for many aspects of human health, but Hsiao cautions that much more research is needed before any of these findings can be translated to the clinic.

"We identified a group of bacteria that, aside from increasing serotonin, likely has other effects yet to be explored," she says. "Also, there are conditions where an excess of peripheral serotonin appears to be detrimental."

Although this study was limited to serotonin in the gut, Hsiao and her team are now investigating how this mechanism might also be important for the developing brain. "Serotonin is an important neurotransmitter and hormone that is involved in a variety of biological processes. The finding that gut microbes modulate serotonin levels raises the interesting prospect of using them to drive changes in biology," says Hsiao.

The work was published in an article titled "Indigenous Bacteria from the Gut Microbiota Regulate Host Serotonin Biosynthesis." In addition to Hsiao, Yano, Mazmanian, and Ismagilov, other Caltech coauthors include undergraduates Kristie Yu, Gauri Shastri, and Phoebe Ann; graduate student Gregory Donaldson; and postdoctoral scholar Liang Ma. Additional coauthor Cathryn Nagler is from the University of Chicago.

This work was funded by an NIH Director's Early Independence Award and a Caltech Center for Environmental Microbial Interactions Award, both to Hsiao. The study was also supported by NSF, NIDDK, and NIMH grants to Mazmanian, NSF EFRI and NHGRI grants to Ismagilov, and grants from the NIAID and Food Allergy Research and Education and University of Chicago Digestive Diseases Center Core to Nagler.

New Camera Chip Provides Superfine 3-D Resolution

Imagine you need to have an almost exact copy of an object. Now imagine that you can just pull your smartphone out of your pocket, take a snapshot with its integrated 3-D imager, send it to your 3-D printer, and within minutes you have reproduced a replica accurate to within microns of the original object. This feat may soon be possible because of a new, tiny high-resolution 3-D imager developed at Caltech.

Any time you want to make an exact copy of an object with a 3-D printer, the first step is to produce a high-resolution scan of the object with a 3-D camera that measures its height, width, and depth. Such 3-D imaging has been around for decades, but the most sensitive systems generally are too large and expensive to be used in consumer applications.

A cheap, compact yet highly accurate new device known as a nanophotonic coherent imager (NCI) promises to change that. Using an inexpensive silicon chip less than a millimeter square in size, the NCI provides the highest depth-measurement accuracy of any such nanophotonic 3-D imaging device.

The work, done in the laboratory of Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering in the Division of Engineering and Applied Science, is described in the February 2015 issue of Optics Express.

In a regular camera, each pixel represents the intensity of the light received from a specific point in the image, which could be near or far from the camera—meaning that the pixels provide no information about the relative distance of the object from the camera. In contrast, each pixel in an image created by the Caltech team's NCI provides both the distance and intensity information. "Each pixel on the chip is an independent interferometer—an instrument that uses the interference of light waves to make precise measurements—which detects the phase and frequency of the signal in addition to the intensity," says Hajimiri.



Three-dimensional map of the hills and valleys on a U.S. penny, obtained with the nanophotonic coherent imager from a distance of 0.5 meters.

The new chip utilizes an established detection and ranging technology called LIDAR, in which a target object is illuminated with scanning laser beams. The light that reflects off of the object is then analyzed based on the wavelength of the laser light used, and the LIDAR can gather information about the object's size and its distance from the laser to create an image of its surroundings. "By having an array of tiny LIDARs on our coherent imager, we can simultaneously image different parts of an object or a scene without the need for any mechanical movements within the imager," Hajimiri says.

Such high-resolution images and information provided by the NCI are made possible because of an optical concept known as coherence. If two light waves are coherent, the waves have the same frequency, and the peaks and troughs of the waves are exactly aligned with one another. In the NCI, the object is illuminated with this coherent light. The light that is reflected off of the object is then picked up by on-chip detectors, called grating couplers, that serve as "pixels," as the light detected from each coupler represents one pixel on the 3-D image. On the NCI chip, the phase, frequency, and intensity of the reflected light from different points on the object are detected and used to determine the exact distance of each target point.

Because the coherent light has a consistent frequency and wavelength, it is used as a reference with which to measure the differences in the reflected light. In this way, the NCI uses the coherent light as sort of a very precise ruler to measure the size of the object and the distance of each point on the object from the camera. The light is then converted into an electrical signal that contains intensity and distance information for each pixel—all of the information needed to create a 3-D image.
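As a generic illustration of the "ruler" idea—using a standard frequency-swept (FMCW) coherent-ranging scheme rather than necessarily the chip's exact implementation, and with a made-up sweep rate—the distance to a target falls directly out of the beat frequency between outgoing and reflected light.

```python
# Generic coherent-ranging illustration (a standard FMCW scheme, not
# necessarily this chip's exact implementation): sweep the laser frequency
# linearly in time and mix the reflected light with the outgoing light. The
# delay to the target turns into a beat frequency, so
#   distance = c * f_beat / (2 * sweep_rate).

C = 3.0e8                      # speed of light, m/s
sweep_rate = 1.0e14            # Hz of optical frequency per second (hypothetical)

def distance_from_beat(f_beat_hz):
    return C * f_beat_hz / (2.0 * sweep_rate)

def beat_from_distance(d_m):
    return 2.0 * sweep_rate * d_m / C

# A target half a meter away (like the penny in the experiment) would appear
# as a ~333 kHz beat note at this hypothetical sweep rate.
print(f"{beat_from_distance(0.5)/1e3:.0f} kHz")
print(f"{distance_from_beat(3.333e5):.3f} m")
```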

The incorporation of coherent light not only allows 3-D imaging with the highest level of depth-measurement accuracy ever achieved in silicon photonics, it also makes it possible for the device to fit in a very small size. "By coupling, confining, and processing the reflected light in small pipes on a silicon chip, we were able to scale each LIDAR element down to just a couple of hundred microns in size—small enough that we can form an array of 16 of these coherent detectors on an active area of 300 microns by 300 microns," Hajimiri says.

The first proof of concept of the NCI has only 16 coherent pixels, meaning that it can produce 3-D images of only 16 pixels at a time. However, the researchers also developed a method for imaging larger objects by first imaging a four-pixel-by-four-pixel section, then moving the object in four-pixel increments to image the next section. With this method, the team used the device to scan and create a 3-D image of the "hills and valleys" on the front face of a U.S. penny—with micron-level resolution—from half a meter away.
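The stitching step itself is straightforward, as the toy sketch below shows; the 16-by-16 "scene" is random stand-in data, not real measurements.

```python
import numpy as np

# Simple stitching sketch (illustration only): image a large target as a grid
# of 4x4-pixel tiles, shifting the object by four pixels between captures, and
# paste the tiles into one mosaic.

TILE = 4

def capture_tile(scene, row0, col0):
    """Stand-in for the 4x4-pixel imager reading one patch of the scene."""
    return scene[row0:row0 + TILE, col0:col0 + TILE]

scene = np.random.default_rng(1).random((16, 16))   # hypothetical depth map
mosaic = np.zeros_like(scene)
for r in range(0, scene.shape[0], TILE):
    for c in range(0, scene.shape[1], TILE):
        mosaic[r:r + TILE, c:c + TILE] = capture_tile(scene, r, c)

assert np.allclose(mosaic, scene)   # the tiles reassemble the full 16x16 image
```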

In the future, Hajimiri says, the current array of 16 pixels could also be easily scaled up to hundreds of thousands. One day, by creating such vast arrays of these tiny LIDARs, the imager could be applied to a broad range of applications, from very precise 3-D scanning and printing to helping driverless cars avoid collisions to improving motion sensitivity in superfine human-machine interfaces, where the slightest movements of a patient's eyes and the most minute changes in a patient's heartbeat can be detected on the fly.

"The small size and high quality of this new chip-based imager will result in significant cost reductions, which will enable thousands of new uses for such systems by incorporating them into personal devices such as smartphones," he says.

The study was published in a paper titled "Nanophotonic coherent imager." In addition to Hajimiri, other Caltech coauthors include Firooz Aflatouni, a former postdoctoral scholar who is now an assistant professor at the University of Pennsylvania; graduate student Behrooz Abiri; and Angad Rekhi (BS '14). This work was partially funded by the Caltech Innovation Initiative.

New Research Suggests Solar System May Have Once Harbored Super-Earths

Caltech and UC Santa Cruz Researchers Say Earth Belongs to a Second Generation of Planets

Long before Mercury, Venus, Earth, and Mars formed, it seems that the inner solar system may have harbored a number of super-Earths—planets larger than Earth but smaller than Neptune. If so, those planets are long gone—broken up and fallen into the sun billions of years ago largely due to a great inward-and-then-outward journey that Jupiter made early in the solar system's history.

This possible scenario has been suggested by Konstantin Batygin, a Caltech planetary scientist, and Gregory Laughlin of UC Santa Cruz in a paper that appears the week of March 23 in the online edition of the Proceedings of the National Academy of Sciences (PNAS). The results of their calculations and simulations suggest the possibility of a new picture of the early solar system that would help to answer a number of outstanding questions about the current makeup of the solar system and of Earth itself. For example, the new work addresses why the terrestrial planets in our solar system have such relatively low masses compared to the planets orbiting other sun-like stars.

"Our work suggests that Jupiter's inward-outward migration could have destroyed a first generation of planets and set the stage for the formation of the mass-depleted terrestrial planets that our solar system has today," says Batygin, an assistant professor of planetary science. "All of this fits beautifully with other recent developments in understanding how the solar system evolved, while filling in some gaps."

Thanks to recent surveys of exoplanets—planets in solar systems other than our own—we know that about half of sun-like stars in our galactic neighborhood have orbiting planets. Yet those systems look nothing like our own. In our solar system, very little lies within Mercury's orbit; there is only a little debris—probably near-Earth asteroids that moved further inward—but certainly no planets. That is in sharp contrast with what astronomers see in most planetary systems. These systems typically have one or more planets that are substantially more massive than Earth orbiting closer to their suns than Mercury does, but very few objects at distances beyond.

"Indeed, it appears that the solar system today is not the common representative of the galactic planetary census. Instead we are something of an outlier," says Batygin. "But there is no reason to think that the dominant mode of planet formation throughout the galaxy should not have occurred here. It is more likely that subsequent changes have altered its original makeup."

According to Batygin and Laughlin, Jupiter is critical to understanding how the solar system came to be the way it is today. Their model incorporates something known as the Grand Tack scenario, which was first proposed in 2001 by a group at Queen Mary University of London and subsequently revisited in 2011 by a team at the Nice Observatory. That scenario says that during the first few million years of the solar system's lifetime, when planetary bodies were still embedded in a disk of gas and dust around a relatively young sun, Jupiter became so massive and gravitationally influential that it was able to clear a gap in the disk. And as the sun pulled the disk's gas in toward itself, Jupiter also began drifting inward, as though carried on a giant conveyor belt.

"Jupiter would have continued on that belt, eventually being dumped onto the sun if not for Saturn," explains Batygin. Saturn formed after Jupiter but got pulled toward the sun at a faster rate, allowing it to catch up. Once the two massive planets got close enough, they locked into a special kind of relationship called an orbital resonance, where their orbital periods were rational—that is, expressible as a ratio of whole numbers. In a 2:1 orbital resonance, for example, Saturn would complete two orbits around the sun in the same amount of time that it took Jupiter to make a single orbit. In such a relationship, the two bodies would begin to exert a gravitational influence on one another.

"That resonance allowed the two planets to open up a mutual gap in the disk, and they started playing this game where they traded angular momentum and energy with one another, almost to a beat," says Batygin. Eventually, that back and forth would have caused all of the gas between the two worlds to be pushed out, a situation that would have reversed the planets' migration direction and sent them back outward in the solar system. (Hence, the "tack" part of the Grand Tack scenario: the planets migrate inward and then change course dramatically, something like a boat tacking around a buoy.)

In an earlier model developed by Bradley Hansen at UCLA, the terrestrial planets conveniently end up in their current orbits with their current masses under a particular set of circumstances—one in which all of the inner solar system's planetary building blocks, or planetesimals, happen to populate a narrow ring stretching from 0.7 to 1 astronomical unit (1 astronomical unit is the average distance from the sun to Earth), 10 million years after the sun's formation. According to the Grand Tack scenario, the outer edge of that ring would have been delineated by Jupiter as it moved toward the sun on its conveyor belt and cleared a gap in the disk all the way to Earth's current orbit.

But what about the inner edge? Why should the planetesimals be limited to the ring on the inside? "That point had not been addressed," says Batygin.

He says the answer could lie in primordial super-Earths. The largely empty region of the inner solar system corresponds almost exactly to the orbital neighborhood where super-Earths are typically found around other stars. It is therefore reasonable to speculate that this region was cleared out in the primordial solar system by a group of first-generation planets that did not survive.

Batygin and Laughlin's calculations and simulations show that as Jupiter moved inward, it pulled all the planetesimals it encountered along the way into orbital resonances and carried them toward the sun. But as those planetesimals got closer to the sun, their orbits also became increasingly elliptical. "You cannot reduce the size of your orbit without paying a price, and that turns out to be increased ellipticity," explains Batygin. Those new, more elongated orbits caused the planetesimals, mostly on the order of 100 kilometers in radius, to sweep through previously untouched regions of the disk, setting off a cascade of collisions among the debris. In fact, Batygin's calculations show that during this period, every planetesimal would have collided with another object at least once every 200 years, violently breaking them apart and sending the resulting debris spiraling into the sun at an ever-increasing rate.

The researchers did one final simulation to see what would happen to a population of super-Earths in the inner solar system if they were around when this cascade of collisions started. They ran the simulation on a well-known extrasolar system known as Kepler-11, which features six super-Earths with a combined mass 40 times that of Earth, orbiting a sun-like star. The result? The model predicts that the super-Earths would be shepherded into the sun by a decaying avalanche of planetesimals over a period of 20,000 years.

"It's a very effective physical process," says Batygin. "You only need a few Earth masses worth of material to drive tens of Earth masses worth of planets into the sun."

Batygin notes that when Jupiter tacked around, some fraction of the planetesimals it was carrying with it would have calmed back down into circular orbits. Only about 10 percent of the material Jupiter swept up would need to be left behind to account for the mass that now makes up Mercury, Venus, Earth, and Mars.

From that point, it would take millions of years for those planetesimals to clump together and eventually form the terrestrial planets—a scenario that fits nicely with measurements that suggest that Earth formed 100–200 million years after the birth of the sun. Since the primordial disk of hydrogen and helium gas would have been long gone by that time, this could also explain why Earth lacks a hydrogen atmosphere. "We formed from this volatile-depleted debris," says Batygin.

And that sets us apart in another way from the majority of exoplanets. Batygin expects that most exoplanets—which are mostly super-Earths—have substantial hydrogen atmospheres, because they formed at a point in the evolution of their planetary disk when the gas would have still been abundant. "Ultimately, what this means is that planets truly like Earth are intrinsically not very common," he says.

The paper also suggests that the formation of gas giant planets such as Jupiter and Saturn—a process that planetary scientists believe is relatively rare—plays a major role in determining whether a planetary system winds up looking something like our own or like the more typical systems with close-in super-Earths. As planet hunters identify additional systems that harbor gas giants, Batygin and Laughlin will have more data against which they can check their hypothesis—to see just how often other migrating giant planets set off collisional cascades in their planetary systems, sending primordial super-Earths into their host stars.

The researchers describe their work in a paper titled "Jupiter's Decisive Role in the Inner Solar System's Early Evolution."

Writer: Kimm Fesenmaier
