Caltech Astronomers Observe a Supernova Colliding with Its Companion Star

Type Ia supernovae, one of the most dazzling phenomena in the universe, are produced when small dense stars called white dwarfs explode with ferocious intensity. At their peak, these supernovae can outshine an entire galaxy. Although thousands of supernovae of this kind have been found in recent decades, the process by which a white dwarf becomes one has remained unclear.

That began to change on May 3, 2014, when a team of Caltech astronomers working on a robotic observing system known as the intermediate Palomar Transient Factory (iPTF)—a multi-institute collaboration led by Shrinivas Kulkarni, the John D. and Catherine T. MacArthur Professor of Astronomy and Planetary Science and director of the Caltech Optical Observatories—discovered a Type Ia supernova, designated iPTF14atg, in nearby galaxy IC831, located 300 million light-years away.

The data immediately collected by the iPTF team lend support to one of two competing theories about the origin of white dwarf supernovae, and also suggest that there may actually be two distinct populations of this type of supernova.

The details are outlined in a paper, with Caltech graduate student Yi Cao as the lead author, appearing May 21 in the journal Nature.

Type Ia supernovae are known as "standardizable candles" because they allow astronomers to gauge cosmic distances by how dim they appear relative to how bright they actually are. It is like knowing that, from one mile away, a light bulb looks 100 times dimmer than another located only one-tenth of a mile away. This consistency is what made these stellar objects instrumental in measuring the accelerating expansion of the universe in the 1990s, earning three scientists the Nobel Prize in Physics in 2011.
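
To make that light-bulb arithmetic explicit, here is a minimal sketch (not from the paper; illustrative numbers only) of how the inverse-square law turns a measured brightness ratio into a relative distance:

```python
import math

def distance_ratio(flux_ratio):
    """Inverse-square law: observed flux ~ 1/d**2, so d2/d1 = sqrt(F1/F2)."""
    return math.sqrt(flux_ratio)

# The light-bulb example from the text: a bulb that appears 100 times
# dimmer than an identical one must be 10 times farther away
# (one mile versus one-tenth of a mile).
print(distance_ratio(100))  # -> 10.0
```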

There are two competing origin theories, both starting with the same general scenario: the white dwarf that eventually explodes is one of a pair of stars orbiting around a common center of mass. The interaction between these two stars, the theories say, is responsible for triggering supernova development. What is the nature of that interaction? At this point, the theories diverge.

According to one theory, the so-called double-degenerate model, the companion to the exploding white dwarf is also a white dwarf, and the supernova explosion initiates when the two similar objects merge.

However, in the second theory, called the single-degenerate model, the second star is instead a sunlike star—or even a red giant, a much larger type of star. In this model, the white dwarf's powerful gravity pulls, or accretes, material from the second star. This process, in turn, increases the temperature and pressure in the center of the white dwarf until a runaway nuclear reaction begins, ending in a dramatic explosion.

The difficulty in determining which model is correct stems from the facts that supernova events are very rare—occurring about once every few centuries in our galaxy—and that the stars involved are very dim before the explosions.

That is where the iPTF comes in. From atop Palomar Mountain in Southern California, where it is mounted on the 48-inch Samuel Oschin Telescope, the project's fully automated camera optically surveys roughly 1000 square degrees of sky per night (approximately 1/20th of the visible sky above the horizon), looking for transients—objects, including Type Ia supernovae, whose brightness changes over timescales that range from hours to days.

On May 3, the iPTF took images of IC831 and transmitted the data for analysis to computers at the National Energy Research Scientific Computing Center, where a machine-learning algorithm analyzed the images and prioritized real celestial objects over digital artifacts. Because this first-pass analysis occurred when it was nighttime in the United States but daytime in Europe, the iPTF's European and Israeli collaborators were the first to sift through the prioritized objects, looking for intriguing signals. After they spotted the possible supernova—a signal that had not been visible in the images taken just the night before—the European and Israeli team alerted their U.S. counterparts, including Caltech graduate student and iPTF team member Yi Cao.
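
The article does not detail the algorithm itself; purely as an illustration of the real-versus-artifact triage such pipelines perform, here is a hedged sketch using made-up candidate features and an off-the-shelf classifier (scikit-learn's RandomForestClassifier):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical features for each candidate detection in a difference
# image (e.g., shape, sharpness, flux ratio); real pipelines use many
# more. Labels: 1 = real transient, 0 = digital artifact.
X_train = rng.normal(size=(1000, 3))
y_train = rng.integers(0, 2, size=1000)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score tonight's candidates and rank them for human vetting,
# highest "real" probability first.
candidates = rng.normal(size=(5, 3))
scores = clf.predict_proba(candidates)[:, 1]
for rank, idx in enumerate(np.argsort(scores)[::-1], start=1):
    print(f"priority {rank}: candidate {idx}, score {scores[idx]:.2f}")
```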

Cao and his colleagues then mobilized both ground- and space-based telescopes, including NASA's Swift satellite, which observes ultraviolet (UV) light, to take a closer look at the young supernova.

"My colleagues and I spent many sleepless nights on designing our system to search for luminous ultraviolet emission from baby Type Ia supernovae," says Cao. "As you can imagine, I was fired up when I first saw a bright spot at the location of this supernova in the ultraviolet image. I knew this was likely what we had been hoping for."

UV radiation has higher energy than visible light, so it is particularly suited to observing very hot objects like supernovae (although such observations are possible only from space, because Earth's atmosphere and ozone layer absorb almost all of this incoming UV). Swift measured a pulse of UV radiation that declined initially but then rose as the supernova brightened. Because such a pulse is short-lived, it can be missed by surveys that scan the sky less frequently than the iPTF does.

This observed ultraviolet pulse is consistent with a formation scenario in which the material ejected from a supernova explosion slams into a companion star, generating a shock wave that ignites the surrounding material. In other words, the data are in agreement with the single-degenerate model.

Back in 2010, Daniel Kasen, an associate professor of astronomy and physics at UC Berkeley and Lawrence Berkeley National Laboratory, used theoretical calculations and supercomputer simulations to predict just such a pulse from supernova-companion collisions. "After I made that prediction, a lot of people tried to look for that signature," Kasen says. "This is the first time that anyone has seen it. It opens up an entirely new way to study the origins of exploding stars."

According to Kulkarni, the discovery "provides direct evidence for the existence of a companion star in a Type Ia supernova, and demonstrates that at least some Type Ia supernovae originate from the single-degenerate channel."

Although the data from supernova iPTF14atg support a single-degenerate origin, other Type Ia supernovae may result from double-degenerate systems. In fact, observations in 2011 of SN2011fe, another Type Ia supernova discovered in the nearby galaxy Messier 101 by PTF (the precursor to the iPTF), appeared to rule out the single-degenerate model for that particular supernova. And that means that both theories actually may be valid, says Caltech professor of theoretical astrophysics Sterl Phinney, who was not involved in the research. "The news is that it seems that both sets of theoretical models are right, and there are two very different kinds of Type Ia supernovae."

"Both rapid discovery of supernovae in their infancy by iPTF, and rapid follow-up by the Swift satellite, were essential to unveil the companion to this exploding white dwarf. Now we have to do this again and again to determine the fractions of Type Ia supernovae akin to different origin theories," says iPTF team member Mansi Kasliwal, who will join the Caltech astronomy faculty as an assistant professor in September 2015.

The iPTF project is a scientific collaboration between Caltech; Los Alamos National Laboratory; the University of Wisconsin–Milwaukee; the Oskar Klein Centre in Sweden; the Weizmann Institute of Science in Israel; the TANGO Program of the University System of Taiwan; and the Kavli Institute for the Physics and Mathematics of the Universe in Japan. The Caltech team is funded in part by the National Science Foundation.


Dedication of Advanced LIGO

The Advanced LIGO Project, a major upgrade that will increase the sensitivity of the Laser Interferometer Gravitational-wave Observatory (LIGO) instruments by a factor of 10 and provide a 1,000-fold increase in the number of astrophysical candidates for gravitational wave signals, was officially dedicated today in a ceremony held at the LIGO Hanford facility in Richland, Washington.
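
Those two numbers are linked by simple geometry: gravitational-wave amplitude falls off as one over distance, so a tenfold gain in sensitivity extends the detector's reach tenfold, and the volume of space surveyed (and hence the expected count of sources) grows by ten cubed. A one-line sketch of the arithmetic:

```python
# Amplitude sensitivity improves 10x -> detectable distance grows 10x
# -> surveyed volume (and expected number of sources) grows 10**3.
sensitivity_gain = 10
reach_gain = sensitivity_gain      # wave amplitude falls off as 1/distance
volume_gain = reach_gain ** 3      # volume scales as distance cubed
print(volume_gain)                 # -> 1000
```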

LIGO was designed and is operated by Caltech and MIT, with funding from the National Science Foundation (NSF). Advanced LIGO, also funded by the NSF, will begin its first searches for gravitational waves in the fall of this year.

The dedication ceremony featured remarks from Caltech president Thomas F. Rosenbaum, the Sonja and William Davidow Presidential Chair and professor of physics; Professor of Physics Tom Soifer (BS '68), the Kent and Joyce Kresa Leadership Chair of Caltech's Division of Physics, Mathematics and Astronomy; and NSF director France Córdova (PhD '79).

"We've spent the past seven years putting together the most sensitive gravitational-wave detector ever built. Commissioning the detectors has gone extremely well thus far, and we are looking forward to our first science run with Advanced LIGO beginning later in 2015.  This is a very exciting time for the field," says Caltech's David H. Reitze, executive director of the LIGO Project.

"Advanced LIGO represents a critically important step forward in our continuing effort to understand the extraordinary mysteries of our universe," says Córdova. "It gives scientists a highly sophisticated instrument for detecting gravitational waves, which we believe carry with them information about their dynamic origins and about the nature of gravity that cannot be obtained by conventional astronomical tools."

"This is a particularly thrilling event, marking the opening of a new window on the universe, one that will allow us to see the final cataclysmic moments in the lives of stars that would otherwise be invisible to us," says Soifer.

Predicted by Albert Einstein in 1916 as a consequence of his general theory of relativity, gravitational waves are ripples in the fabric of space and time produced by violent events in the distant universe—for example, by the collision of two black holes or by the cores of supernova explosions. Gravitational waves are emitted by accelerating masses much in the same way as radio waves are produced by accelerating charges, such as electrons in antennas. As they travel to Earth, these ripples in the space-time fabric bring with them information about their violent origins and about the nature of gravity that cannot be obtained by other astronomical tools.

Although they have not yet been detected directly, the influence of gravitational waves on a binary pulsar system (two neutron stars orbiting each other) has been measured accurately and is in excellent agreement with the predictions. Scientists therefore have great confidence that gravitational waves exist. But a direct detection will confirm Einstein's vision of the waves and allow a fascinating new window into cataclysms in the cosmos.

LIGO was originally proposed as a means of detecting these gravitational waves. Each of the 4-km-long L-shaped LIGO interferometers (one at LIGO Hanford and one at the LIGO observatory in Livingston, Louisiana) uses a laser split into two beams that travel back and forth down long arms (beam tubes from which the air has been evacuated). The beams are used to monitor the distance between precisely configured mirrors. According to Einstein's theory, the relative distance between the mirrors will change very slightly when a gravitational wave passes by.

The original configuration of LIGO was sensitive enough to detect a change in the lengths of the 4-km arms by a distance one-thousandth the size of a proton; this is like accurately measuring the distance from Earth to the nearest star—3 light years—to within the width of a human hair. Advanced LIGO, which will utilize the infrastructure of LIGO, will be 10 times more sensitive.
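
As a rough sanity check on that analogy (using round-number values for the proton diameter and hair width, not official LIGO figures):

```python
# Round-number check of the proton/hair analogy (values approximate).
proton_diameter = 1.7e-15            # meters
arm_length = 4.0e3                   # meters
ligo_fraction = (proton_diameter / 1000) / arm_length

light_year = 9.46e15                 # meters
hair_width = 1.0e-4                  # meters
analogy_fraction = hair_width / (3 * light_year)

print(f"LIGO fractional precision:  {ligo_fraction:.1e}")     # ~4e-22
print(f"hair over 3 light years:    {analogy_fraction:.1e}")  # ~4e-21
# The two agree to within about an order of magnitude; the analogy
# conveys scale rather than an exact equality.
```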

Included in the upgrade were changes in the lasers (180-watt highly stabilized systems), optics (40-kg fused-silica "test mass" mirrors suspended by fused-silica fibers), seismic isolation systems (using inertial sensing and feedback), and in how the microscopic motion (less than one billionth of one billionth of a meter) of the test masses is detected.

The change of more than a factor of 10 in sensitivity also comes with a significant increase in the sensitive frequency range. This will allow Advanced LIGO to look at the last minutes of the life of pairs of massive black holes as they spiral closer, coalesce into one larger black hole, and then vibrate much like two soap bubbles becoming one. It will also allow the instrument to pinpoint periodic signals from the many known pulsars that radiate in the range from 500 to 1,000 Hertz (frequencies that correspond to high notes on an organ).

Advanced LIGO will also be used to search for the gravitational cosmic background—allowing tests of theories about the development of the universe only 10⁻³⁵ seconds after the Big Bang.

LIGO research is carried out by the LIGO Scientific Collaboration (LSC), a group of some 950 scientists at universities around the United States and in 15 other countries. The LSC network includes the LIGO interferometers and the GEO600 interferometer, located near Hannover, Germany, and the LSC works jointly with the Virgo Collaboration—which designed and constructed the 3-km-long Virgo interferometer located in Cascina, Italy—to analyze data from the LIGO, GEO, and Virgo interferometers.

Several international partners including the Max Planck Institute for Gravitational Physics, the Albert Einstein Institute, the Laser Zentrum Hannover, and the Leibniz Universität Hannover in Germany; an Australian consortium of universities, led by the Australian National University and the University of Adelaide, and supported by the Australian Research Council; partners in the United Kingdom funded by the Science and Technology Facilities Council; and the University of Florida and Columbia University, provided significant contributions of equipment, labor, and expertise.


Controlling a Robotic Arm with a Patient's Intentions

Neural prosthetic devices implanted in the brain's movement center, the motor cortex, can allow patients with amputations or paralysis to control the movement of a robotic limb—one that can be either connected to or separate from the patient's own limb. However, current neuroprosthetics produce motion that is delayed and jerky—not the smooth and seemingly automatic gestures associated with natural movement. Now, by implanting neuroprosthetics in a part of the brain that controls not the movement directly but rather our intent to move, Caltech researchers have developed a way to produce more natural and fluid motions.

In a clinical trial, the Caltech team and colleagues from Keck Medicine of USC have successfully implanted just such a device in a patient with quadriplegia, giving him the ability to perform a fluid hand-shaking gesture and even play "rock, paper, scissors" using a separate robotic arm.

The results of the trial, led by principal investigator Richard Andersen, the James G. Boswell Professor of Neuroscience, and including Caltech lab members Tyson Aflalo, Spencer Kellis, Christian Klaes, Brian Lee, Ying Shi, and Kelsie Pejsa, are published in the May 22 edition of the journal Science.

"When you move your arm, you really don't think about which muscles to activate and the details of the movement—such as lift the arm, extend the arm, grasp the cup, close the hand around the cup, and so on. Instead, you think about the goal of the movement. For example, 'I want to pick up that cup of water,'" Andersen says. "So in this trial, we were successfully able to decode these actual intents, by asking the subject to simply imagine the movement as a whole, rather than breaking it down into myriad components."

For example, the process of seeing a person and then shaking his hand begins with a visual signal (for example, recognizing someone you know) that is first processed in the lower visual areas of the cerebral cortex. The signal then moves up to a high-level cognitive area known as the posterior parietal cortex (PPC). Here, the initial intent to make a movement is formed. These intentions are then transmitted to the motor cortex, through the spinal cord, and on to the arms and legs where the movement is executed.

High spinal cord injuries can cause quadriplegia in some patients because movement signals cannot get from the brain to the arms and legs. As a solution, earlier neuroprosthetic implants used tiny electrodes to detect and record movement signals at their last stop before reaching the spinal cord: the motor cortex.

The recorded signal is then carried via wire bundles from the patient's brain to a computer, where it is translated into an instruction for a robotic limb. However, because the motor cortex normally controls many muscles, the signals tend to be detailed and specific. The Caltech group wanted to see if the simpler intent to shake the hand could be used to control the prosthetic limb, instead of asking the subject to concentrate on each component of the handshake—a more painstaking and less natural approach.

Andersen and his colleagues wanted to improve the versatility of movement that a neuroprosthetic can offer by recording signals from a different brain region—the PPC. "The PPC is earlier in the pathway, so signals there are more related to movement planning—what you actually intend to do—rather than the details of the movement execution," he says. "We hoped that the signals from the PPC would be easier for the patients to use, ultimately making the movement process more intuitive. Our future studies will investigate ways to combine the detailed motor cortex signals with more cognitive PPC signals to take advantage of each area's specializations."

In the clinical trial, designed to test the safety and effectiveness of this new approach, the Caltech team collaborated with surgeons at Keck Medicine of USC and the rehabilitation team at Rancho Los Amigos National Rehabilitation Center. The surgeons implanted a pair of small electrode arrays in two parts of the PPC of a quadriplegic patient. Each array contains 96 active electrodes that, in turn, each record the activity of a single neuron in the PPC. The arrays were connected by a cable to a system of computers that processed the signals, decoded the intent of the subject, and controlled output devices that included a computer cursor and a robotic arm developed by collaborators at Johns Hopkins University.
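
The trial's actual decoding algorithms are not described here; as a purely illustrative sketch of the general idea, a linear decoder mapping binned spike counts from the 192 implanted electrodes to an intended two-dimensional goal position might look like the following (all data simulated):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated training set: spike counts from 2 arrays x 96 electrodes,
# recorded while the participant imagines reaching to known goals.
n_trials, n_channels = 200, 192
hidden_tuning = rng.normal(size=(n_channels, 2))
spikes = rng.poisson(5.0, size=(n_trials, n_channels)).astype(float)
goals = spikes @ hidden_tuning + rng.normal(0.0, 1.0, size=(n_trials, 2))

# Fit a least-squares linear decoder: intended (x, y) goal from spikes.
W, *_ = np.linalg.lstsq(spikes, goals, rcond=None)

# At run time, each new bin of spike counts decodes to a goal position
# that a robotic-arm controller can drive toward.
new_bin = rng.poisson(5.0, size=(1, n_channels)).astype(float)
print(new_bin @ W)
```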

After recovering from the surgery, the patient was trained to control the computer cursor and the robotic arm with his mind. Once training was complete, the researchers saw just what they were hoping for: intuitive movement of the robotic arm.

"For me, the most exciting moment of the trial was when the participant first moved the robotic limb with his thoughts. He had been paralyzed for over 10 years, and this was the first time since his injury that he could move a limb and reach out to someone. It was a thrilling moment for all of us," Andersen says.

"It was a big surprise that the patient was able to control the limb on day one—the very first day he tried," he adds. "This attests to how intuitive the control is when using PPC activity."

The patient, Erik G. Sorto, was also thrilled with the quick results: "I was surprised at how easy it was," he says. "I remember just having this out-of-body experience, and I wanted to just run around and high-five everybody."

Over time, Sorto continued to refine his control of his robotic arm, thus providing the researchers with more information about how the PPC works. For example, "we learned that if he thought, 'I should move my hand over toward the object in a certain way'—trying to control the limb—that didn't work," Andersen says. "The thought actually needed to be more cognitive. But if he just thought, 'I want to grasp the object,' it was much easier. And that is exactly what we would expect from this area of the brain."

This better understanding of the PPC will help the researchers improve neuroprosthetic devices of the future, Andersen says. "What we have here is a unique window into the workings of a complex high-level brain area as we work collaboratively with our subject to perfect his skill in controlling external devices."

"The primary mission of the USC Neurorestoration Center is to take advantage of resources from our clinical programs to create unique opportunities to translate scientific discoveries, such as those of the Andersen Lab at Caltech, to human patients, ultimately turning transformative discoveries into effective therapies," says center director Charles Y. Liu, professor of neurological surgery, neurology, and biomedical engineering at USC, who led the surgical implant procedure and the USC/Rancho Los Amigos team in the collaboration.

"In taking care of patients with neurological injuries and diseases—and knowing the significant limitations of current treatment strategies—it is clear that completely new approaches are necessary to restore function to paralyzed patients. Direct brain control of robots and computers has the potential to dramatically change the lives of many people," Liu adds.

Dr. Mindy Aisen, the chief medical officer at Rancho Los Amigos who led the study's rehabilitation team, says that advancements in prosthetics like these hold promise for the future of patient rehabilitation. "We at Rancho are dedicated to advancing rehabilitation through new assistive technologies, such as robotics and brain-machine interfaces. We have created a unique environment that can seamlessly bring together rehabilitation, medicine, and science as exemplified in this study," she says.

Although tasks like shaking hands and playing "rock, paper, scissors" are important to demonstrate the capability of these devices, the hope is that neuroprosthetics will eventually enable patients to perform more practical tasks that will allow them to regain some of their independence.

"This study has been very meaningful to me. As much as the project needed me, I needed the project. The project has made a huge difference in my life. It gives me great pleasure to be part of the solution for improving paralyzed patients' lives," Sorto says. "I joke around with the guys that I want to be able to drink my own beer—to be able to take a drink at my own pace, when I want to take a sip out of my beer and to not have to ask somebody to give it to me. I really miss that independence. I think that if it was safe enough, I would really enjoy grooming myself—shaving, brushing my own teeth. That would be fantastic." 

To that end, Andersen and his colleagues are already working on a strategy that could enable patients to perform these finer motor skills. The key is to be able to provide particular types of sensory feedback from the robotic arm to the brain.

Although Sorto's implant allowed him to control larger movements with visual feedback, "to really do fine dexterous control, you also need feedback from touch," Andersen says. "Without it, it's like going to the dentist and having your mouth numbed. It's very hard to speak without somatosensory feedback." The newest devices under development by Andersen and his colleagues feature a mechanism to relay signals from the robotic arm back into the part of the brain that gives the perception of touch.

"The reason we are developing these devices is that normally a quadriplegic patient couldn't, say, pick up a glass of water to sip it, or feed themselves. They can't even do anything if their nose itches. Seemingly trivial things like this are very frustrating for the patients," Andersen says. "This trial is an important step toward improving their quality of life."

The results of the trial were published in a paper titled "Decoding Motor Imagery from the Posterior Parietal Cortex of a Tetraplegic Human." The implanted device and signal processors used in the Caltech-led clinical trial were the NeuroPort Array and NeuroPort Bio-potential Signal Processors developed by Blackrock Microsystems in Salt Lake City, Utah. The robotic arm used in the trial was the Modular Prosthetic Limb, developed at the Applied Physics Laboratory at Johns Hopkins. Sorto was recruited to the trial by collaborators at Rancho Los Amigos National Rehabilitation Center and at Keck Medicine of USC. The trial was funded by the National Institutes of Health, the Boswell Foundation, the Department of Defense, and the USC Neurorestoration Center.


Do Fruit Flies Have Emotions?

A fruit fly starts buzzing around food at a picnic, so you wave your hand over the insect and shoo it away. But when the insect flees the scene, is it doing so because it is actually afraid? Using fruit flies to study the basic components of emotion, a new Caltech study reports that a fly's response to a shadowy overhead stimulus might be analogous to a negative emotional state such as fear—a finding that could one day help us understand the neural circuitry involved in human emotion.

The study, which was done in the laboratory of David Anderson, Seymour Benzer Professor of Biology and an investigator with the Howard Hughes Medical Institute, was published online May 14 in the journal Current Biology.

Insects are an important model for the study of emotion; although mice are closer to humans on the evolutionary family tree, the fruit fly has a much simpler neurological system that is easier to study. However, studying emotions in insects or any other animal can also be tricky. Because researchers know the experience of human emotion firsthand, they may be tempted to anthropomorphize the emotions of an insect—just as you might assume that the shooed-away fly left your plate because it was afraid of your hand. But there are several problems with such an assumption, says postdoctoral scholar William T. Gibson, first author of the paper.

"There are two difficulties with taking your own experiences and then saying that maybe these are happening in a fly. First, a fly's brain is very different from yours, and second, a fly's evolutionary history is so different from yours that even if you could prove beyond any doubt that flies have emotions, those emotions probably wouldn't be the same ones that you have," he says. "For these reasons, in our study, we wanted to take an objective approach."

Anderson and Gibson and their colleagues did this by deconstructing the idea of an emotion into basic building blocks—so-called emotion primitives, a concept previously developed by Anderson and Ralph Adolphs, Bren Professor of Psychology and Neuroscience and professor of biology.

"There has been ongoing debate for decades about what 'emotion' means, and there is no generally accepted definition. In an article that Ralph Adolphs and I recently wrote, we put forth the view that emotions are a type of internal brain state with certain general properties that can exist independently of subjective, conscious feelings, which can only be studied in humans," Anderson says. "That means we can study such brain states in animal models like flies or mice without worrying about whether they have 'feelings' or not. We use the behaviors that express those states as a readout."

Gibson explains by analogy that emotions can be broken down into these emotion primitives much as a secondary color, such as orange, can be separated into two primary colors, yellow and red. "And if we can show that fruit flies display all of these separate but necessary primitives, we then may be able to make the argument that they also have an emotion, like fear."

The emotion primitives analyzed in the fly study can be understood in the context of a stimulus associated with human fear: the sound of a gunshot. If you hear a gun fire, the sound may trigger a negative feeling. This feeling, a primitive called valence, will probably cause you to behave differently for several minutes afterward. This is a primitive called persistence. Repeated exposure to the stimulus should also produce a greater emotional response—a primitive called scalability; for example, the sound of 10 gunshots would make you more afraid than the sound of one shot.

Gibson says that another primitive of fear is that it is generalized to different contexts, meaning that if you were eating lunch or were otherwise occupied when the gun fired, the fear would take over, distracting you from your lunch. Trans-situationality is another primitive that could cause you to produce the same fearful reaction in response to an unrelated stimulus—such as the sound of a car backfiring.

The researchers chose to study these five primitives by observing the insects in the presence of a fear-inducing stimulus. Because defensive behavioral responses to overhead visual threats are common in many animals, the researchers created an apparatus that would pass a dark paddle over the flies' habitat. The flies' movements were then tracked using a software program created in collaboration with Pietro Perona, the Allen E. Puckett Professor of Electrical Engineering.

The researchers analyzed the flies' responses to the stimulus and found that the insects displayed all of these emotion primitives. For example, responses were scalable: when the paddle passed overhead, the flies would freeze, jump away from the stimulus, or enter a state of elevated arousal, and each response increased with the number of times the stimulus was delivered. And when hungry flies were gathered around food, the stimulus would cause them to leave the food for several seconds and run around the arena until their state of elevated arousal decayed and they returned to the food—exhibiting the primitives of context generalization and persistence.

"These experiments provide objective evidence that visual stimuli designed to mimic an overhead predator can induce a persistent and scalable internal state of defensive arousal in flies, which can influence their subsequent behavior for minutes after the threat has passed," Anderson says. "For us, that's a big step beyond just casually intuiting that a fly fleeing a visual threat must be 'afraid,' based on our anthropomorphic assumptions. It suggests that the flies' response to the threat is richer and more complicated than a robotic-like avoidance reflex."

In the future, the researchers say that they plan to combine the new technique with genetically based techniques and imaging of brain activity to identify the neural circuitry that underlies these defensive behaviors. Their end goal is to identify specific populations of neurons in the fruit fly brain that are necessary for emotion primitives—and whether these functions are conserved in higher organisms, such as mice or even humans.

Although the presence of these primitives suggests that the flies might be reacting to the stimulus based on some kind of emotion, the researchers are quick to point out that this new information does not prove—nor did it set out to establish—that flies can experience fear, or happiness, or anger, or any other feelings.

"Our work can get at questions about mechanism and questions about the functional properties of emotion states, but we cannot get at the question of whether or not flies have feelings," Gibson says.

The study, titled "Behavioral Responses to a Repetitive Stimulus Express a Persistent State of Defensive Arousal in Drosophila," was published in the journal Current Biology. In addition to Gibson, Anderson, and Perona, Caltech coauthors include graduate student Carlos Gonzalez, undergraduate Rebecca Du, former research assistants Conchi Fernandez and Panna Felsen (BS '09, MS '10), and former postdoctoral scholar Michael Maire. Coauthors Lakshminarayanan Ramasamy and Tanya Tabachnik are from the Janelia Research Campus of the Howard Hughes Medical Institute (HHMI). The work was funded by the National Institutes of Health, HHMI, and the Gordon and Betty Moore Foundation.


Powerful New Radio Telescope Array Searches the Entire Sky 24/7

A new radio telescope array developed by a consortium led by Caltech and now operating at the Owens Valley Radio Observatory has the ability to image simultaneously the entire sky at radio wavelengths with unmatched speed, helping astronomers to search for objects and phenomena that pulse, flicker, flare, or explode.

The new tool, the Owens Valley Long Wavelength Array (OV-LWA), is already producing unprecedented videos of the radio sky. Astronomers hope that it will help them piece together a more complete picture of the early universe and learn about extrasolar space weather—the interaction between nearby stars and their orbiting planets.

The consortium includes astronomers from Caltech, JPL, Harvard University, the University of New Mexico, Virginia Tech, and the Naval Research Laboratory.

"Our new telescope lets us see the entire sky all at once, and we can image everything instantaneously," says Gregg Hallinan, an assistant professor of astronomy at Caltech and OV-LWA's principal investigator.

Combining the observing power of more than 250 antennas spread out over a desert area equivalent to about 450 football fields, the OV-LWA is uniquely sensitive to faint variable radio signals such as those produced by pulsars, solar flares, and auroras on distant planets. A single radio antenna would have to be a hundred meters wide to achieve the same sensitivity (the giant radio telescope at Arecibo Observatory in Puerto Rico is 305 meters in diameter). However, a telescope's field of view is governed by the size of its dish, and such an enormous instrument still would only see a tiny fraction of the entire sky.
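
The trade-off mentioned here follows from the diffraction limit: a dish of diameter D observing at wavelength λ sees a patch of sky only about λ/D across per pointing, whereas each small element of a dipole array sees nearly the whole sky. A hedged sketch of that arithmetic (illustrative wavelength, not a quoted OV-LWA specification):

```python
import math

def beam_width_degrees(wavelength_m, dish_diameter_m):
    """Diffraction limit: field of view per pointing ~ lambda / D radians."""
    return math.degrees(wavelength_m / dish_diameter_m)

wavelength = 5.0  # meters (~60 MHz, typical of the long-wavelength band)
print(beam_width_degrees(wavelength, 100.0))   # 100-m dish: ~2.9 degrees
print(beam_width_degrees(wavelength, 1.5))     # small antenna: ~190 degrees,
# i.e., essentially the whole sky -- combining many such antennas keeps
# this field of view while their summed area supplies the sensitivity.
```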

"Our technique delivers the best of both worlds, offering good sensitivity and an enormous field of view," says Hallinan.

Operating at full speed, the new array produces 25 terabytes of data every day, making it one of the most data-intensive telescopes in the world. For comparison, it would take more than 5,000 DVDs to store just one day's worth of the array's data. A supercomputer developed by a group led by Lincoln Greenhill of Harvard University for the NSF-funded Large-Aperture Experiment to Detect the Dark Ages (LEDA) processes these data. It uses graphics processing units similar to those used in modern computer games to combine signals from all of the antennas in real time. These combined signals are then sent to a second computer cluster, the All-Sky Transient Monitor (ASTM) at Caltech and JPL, which produces all-sky images in real time.

Hallinan says that the OV-LWA holds great promise for cosmological studies and may allow astronomers to watch the early universe as it evolved over time. Scientists might then be able to learn how and when the universe's first stars, galaxies, and black holes formed. But the formative period during which these events occurred is shrouded in a fog of hydrogen that is opaque to most radiation. Even the most powerful optical and infrared telescopes cannot peer through that fog. By observing the sky at radio frequencies, however, astronomers may be able to detect weak radio signals from the time of the births of those first stars and galaxies.

"The biggest challenge is that this weak radiation from the early universe is obscured by the radio emission from our own galaxy, which is about a million times brighter than the signal itself, so you have to have very carefully measured data to see it," says Hallinan. "That's one of the primary goals of our collaboration—to try to get the first statistical measure of that weak signal from our cosmic dawn."

If they are able to detect that signal, the researchers would be able to learn about the formation of the first stars and galaxies, their evolution, and how they eventually ionized the surrounding intergalactic medium to give us the universe we observe today. "This new field offers the opportunity to see the universe evolve, in a cosmological movie of sorts," Hallinan says.

But Hallinan is most excited about using the array to study space weather in nearby stellar systems similar to our own. Our own sun occasionally releases bursts of magnetic energy from its atmosphere, shooting X-rays and other forms of radiation outward in large flares. Sometimes these flares are accompanied by shock waves called coronal mass ejections, which send particles and magnetic fields toward Earth and the other planets. Light displays, or auroras, are produced when those particles interact with atoms in a planet's atmosphere. These space weather events also occur on other stars, and Hallinan hopes to use the OV-LWA to study them.

"We want to detect coronal mass ejections on other stars with our array and then use other telescopes to image them," he says. "We're trying to learn about this kind of event on stars other than the sun and show that there are auroras caused by these events on planets outside our solar system."

The majority of stars in our local corner of the Milky Way are so-called M dwarfs, stars that are much smaller than our own sun and yet potentially more magnetically active. Thus far, surveys of exoplanets suggest that most such M dwarfs harbor small rocky planets. "That means it is very likely that the nearest habitable planet is orbiting an M dwarf," Hallinan says. "However, the possibility of a higher degree of activity, with extreme flaring and intense coronal mass ejections, may have an impact on the atmosphere of such a planet and affect habitability."

A coronal mass ejection from an M dwarf would shower charged particles on the atmosphere and magnetic field of an orbiting planet, potentially leading to aurorae and periodic radio bursts. Astronomers could determine the strength of the planet's magnetic field by measuring the intensity and duration of such an event. And since magnetic fields may protect planets from the activity of their host stars, many such measurements would shed light on the potential habitability of these planets.

For decades, astronomers have been trying to detect radio bursts associated with extrasolar space weather. This is challenging for two reasons. First, the radio emission pulses as the planet rotates, flashing like a lighthouse beacon, so astronomers have to be looking at just the right time to catch the flash. Second, the radio emission may brighten significantly as the velocity of a star's stellar wind increases during a coronal mass ejection.

"You need to be observing at that exact moment when the beacon is pointed in our direction and the star's stellar wind has picked up. You might need to monitor that planet for a decade to get that one event where it is really bright," Hallinan says. "So you need to be able to not just observe at random intervals but to monitor all these planets continuously. Our new array allows us to do that."

The OV-LWA was initiated through the support of Deborah Castleman (MS '86) and Harold Rosen (MS '48, PhD '51).


New Thin, Flat Lenses Focus Light as Sharply as Curved Lenses

Lenses appear in all sorts of everyday objects, from prescription eyeglasses to cell-phone cameras. Typically, lenses rely on a curved shape to bend and focus light. But in the tight spaces inside consumer electronics and fiber-optic systems, these rounded lenses can take up a lot of room. Over the last few years, scientists have started crafting tiny flat lenses that are ideal for such close quarters. To date, however, thin microlenses have failed to transmit and focus light as efficiently as their bigger, curved counterparts.

Caltech engineers have created flat microlenses with performance on a par with conventional, curved lenses. These lenses can be manufactured using industry-standard techniques for making computer chips, setting the stage for their incorporation into electronics such as cameras and microscopes, as well as in novel devices.

"The lenses we use today are bulky," says Amir Arbabi, a senior researcher in the Division of Engineering and Applied Science, and lead author of the paper. "The structure we have chosen for these flat lenses can open up new areas of application that were not available before."

The research, led by Andrei Faraon (BS '04), assistant professor of applied physics and materials science, appears in the May 7 issue of Nature Communications.

The new lens type is known as a high-contrast transmitarray. Made of silicon, the lens is just a millionth of a meter thick, or about a hundredth of the diameter of a human hair, and it is studded with silicon "posts" of varying sizes. When imaged under a scanning electron microscope, the lens resembles a forest cleared for timber, with only stumps (the posts) remaining. Depending on their heights and thicknesses, the posts focus different colors, or wavelengths, of light.

A lens focuses light or forms an image by delaying the passage of light through different parts of the lens by varying amounts of time. In curved glass lenses, light takes longer to travel through the thicker parts than through the thinner parts. On the flat lens, these delays are achieved by the silicon posts, which trap and delay the light for an amount of time that depends on the diameter of the posts. With careful placement of these differently sized posts on the lens, the researchers can guide incident light as it passes through the lens to form a curved wavefront, resulting in a tightly focused spot.
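
The article does not give the design equations; as an illustrative sketch, the phase profile any lens must imprint so that light from every radius arrives at the focus in step is the standard hyperbolic profile, which a metasurface approximates post by post (values below are assumptions, not the paper's parameters):

```python
import numpy as np

# Phase a lens must imprint so light from every radius r on the lens
# reaches the focal point in step:
#   phi(r) = (2*pi/lam) * (f - sqrt(r**2 + f**2))
# A flat metasurface approximates this by choosing, at each position,
# a post diameter that produces the required local delay (mod 2*pi).
lam = 1.55e-6   # meters; a common infrared wavelength (assumed)
f = 100e-6      # focal length of 100 microns (assumed)

r = np.linspace(0.0, 50e-6, 6)
phi = (2 * np.pi / lam) * (f - np.sqrt(r**2 + f**2))
phi_wrapped = np.mod(phi, 2 * np.pi)

for ri, p in zip(r, phi_wrapped):
    print(f"r = {ri * 1e6:5.1f} um -> required phase = {p:.2f} rad")
```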

The Caltech researchers found that their flat lenses focus as much as 82 percent of infrared light passing through them. By comparison, previous studies have found that metallic flat lenses have efficiencies of only a few percent, in part because their materials absorb some incident light.

Although curved glass lenses can focus nearly 100 percent of the light that reaches them, they usually require sophisticated designs with nonspherical surfaces that can be difficult to polish. On the other hand, the design of the flat lenses can be modified depending upon the exact application for which the lenses are needed, simply by changing the pattern of the silicon nanoposts. This flexibility makes them attractive for commercial and industrial use, the researchers say. "You get exceptional freedom to design lenses for different functionalities," says Arbabi.

A limitation of flat lenses is that each lens can only focus a narrow set of wavelengths, representing individual colors of the spectrum. These monochromatic lenses could find application in devices such as a night-vision camera, which sees in infrared over a narrow wavelength range. More broadly, they could be used in any optical device involving lasers, as lasers emit only a single color of light.

Multiple monochromatic lenses could be used to deliver multicolor images, much as television and computer displays employ combinations of the colors red, green, and blue to produce a rainbow of hues. Because the microlenses are so small, integrating them in optical systems would take up little space compared to the curved lenses now utilized in cameras or microscopes.

Although the lenses currently are expensive to manufacture, it should be possible to produce thousands at once using photolithography or nanoimprint lithography techniques, the researchers say. In these common, high-throughput manufacturing techniques, a stamp presses into a polymer, leaving behind a desired pattern that is then transferred into silicon through dry etching in a plasma.

"For consumer applications, the current price point of flat lenses is not good, but the performance is," says Faraon. "Depending on how many of lenses you are making, the price can drop down rapidly."

The paper is entitled "Subwavelength-thick lenses with high numerical apertures and large efficiency based on high-contrast transmitarrays." In addition to Arbabi and Faraon, other Caltech coauthors include graduate student Yu Horie, senior Alexander Ball, and Mahmood Bagheri, a microdevices engineer at JPL. The work was supported by the Caltech/JPL President's and Director's Fund and the Defense Advanced Research Projects Agency. Alexander Ball was supported by a Summer Undergraduate Research Fellowship at Caltech. The device nanofabrication was performed in the Kavli Nanoscience Institute at Caltech.


Lopsided Star Explosion Holds the Key to Other Supernova Mysteries

New observations of a recently exploded star are confirming supercomputer model predictions made at Caltech that the deaths of stellar giants are lopsided affairs in which debris and the stars' cores hurtle off in opposite directions.

While observing the remnant of supernova (SN) 1987A, NASA's Nuclear Spectroscopic Telescope Array, or NuSTAR, recently detected the unique energy signature of titanium-44, a radioactive version of titanium that is produced during the early stages of a particular type of star explosion, called a Type II, or core-collapse supernova.

"Titanium-44 is unstable. When it decays and turns into calcium, it emits gamma rays at a specific energy, which NuSTAR can detect," says Fiona Harrison, the Benjamin M. Rosen Professor of Physics at Caltech, and NuSTAR's principal investigator.

By analyzing direction-dependent frequency changes—or Doppler shifts—of energy from titanium-44, Harrison and her team discovered that most of the material is moving away from NuSTAR. The finding, detailed in the May 8 issue of the journal Science, is the best proof yet that the mechanism that triggers Type II supernovae is inherently lopsided.
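
As a hedged illustration of the measurement principle: titanium-44's decay chain emits a gamma-ray line near 67.87 keV in the emitting material's rest frame, and a line observed at slightly lower energy implies recession. The observed energy and resulting velocity below are invented for illustration, not the paper's values:

```python
C_KM_S = 299_792.458  # speed of light, km/s

def recession_velocity(e_rest_kev, e_obs_kev):
    """Non-relativistic Doppler shift: v = c * (E_rest - E_obs) / E_rest.
    Positive result: the emitting material is moving away from us."""
    return C_KM_S * (e_rest_kev - e_obs_kev) / e_rest_kev

# 44Ti decay produces a line at ~67.87 keV in the rest frame; an
# observed energy of 67.72 keV here is illustrative only.
print(f"{recession_velocity(67.87, 67.72):.0f} km/s")  # ~660 km/s, receding
```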

NuSTAR recently created detailed titanium-44 maps of another supernova remnant, called Cassiopeia A, and there too it found signs of an asymmetrical explosion, although the evidence in this case is not as definitive as with 1987A.

Supernova 1987A was first detected in 1987, when light from the explosion of a blue supergiant star located 168,000 light-years away reached Earth. SN 1987A was an important event for astronomers. Not only was it the closest supernova to be detected in hundreds of years, it marked the first time that neutrinos had been detected from an astronomical source other than our sun.

These nearly massless subatomic particles had been predicted to be produced in large quantities during Type II explosions, so their detection during 1987A supported some of the fundamental theories about the inner workings of supernovae.

With the latest NuSTAR observations, 1987A is once again proving to be a useful natural laboratory for studying the mysteries of stellar death. For many years, supercomputer simulations performed at Caltech and elsewhere have predicted that the cores of stars about to undergo Type II supernova explosions change shape just before exploding, transforming from a perfectly symmetric sphere into a wobbly mass made up of turbulent plumes of extremely hot gas. In fact, models that assumed a perfectly spherical core just fizzled out.

"If you make everything just spherical, the core doesn't explode. It turns out you need asymmetries to make the star explode," Harrison says.

According to the simulations, the shape change is driven by turbulence generated by neutrinos that are absorbed within the core. "This turbulence helps push out a powerful shock wave and launch the explosion," says Christian Ott, a professor of theoretical physics at Caltech who was not involved in the NuSTAR observations.

Ott's team uses supercomputers to run three-dimensional simulations of core-collapse supernovae. Each simulation generates hundreds of terabytes of results—for comparison, the entire print collection of the U.S. Library of Congress is equal to about 10 terabytes—but represents only a few tenths of a second during a supernova explosion.

A better understanding of the asymmetrical nature of Type II supernovae, Ott says, could help solve one of the biggest mysteries surrounding stellar deaths: why some supernovae collapse into neutron stars and others into black holes, forming space-time singularities. It could be that the high degree of asymmetry in some supernovae produces a dual effect: the star explodes in one direction, while the remainder of the star continues to collapse in all other directions.

"In this way, an explosion could happen, but eventually leave behind a black hole and not a neutron star," Ott says.

The NuSTAR findings also increase the chances that Advanced LIGO—the upgraded version of the Laser Interferometer Gravitational-wave Observatory, which will begin to take data later this year—will be successful in detecting gravitational waves from supernovae. Gravitational waves are ripples that propagate through the fabric of space-time. According to theory, Type II supernovae should emit gravitational waves, but only if the explosions are asymmetrical.

Harrison and Ott have plans to combine the observational and theoretical studies of supernovae that until now have been occurring along parallel tracks at Caltech, using the NuSTAR observations to refine supercomputer simulations of supernova explosions.

"The two of us are going to work together to try to get the models to more accurately predict what we're seeing in 1987A and Cassiopeia A," Harrison says.

Additional Caltech coauthors of the paper, entitled "44Ti gamma-ray emission lines from SN1987A reveal an asymmetric explosion," are Hiromasa Miyasaka, Brian Grefenstette, Kristin Madsen, Peter Mao, and Vikram Rana. The research was supported by funding from NASA, the French National Center for Space Studies (CNES), the Japan Society for the Promotion of Science, and the Technical University of Denmark.

This article also references the paper "Magnetorotational Core-collapse Supernovae in Three Dimensions," which appeared in the April 20, 2014, issue of Astrophysical Journal Letters.


“Freezing a Bullet” to Find Clues to Ribosome Assembly Process

Researchers Figure Out How Protein-Synthesizing Cellular Machines Are Built in Stepwise Fashion

Ribosomes are vital to the function of all living cells. Using the genetic information from RNA, these large molecular complexes build proteins by linking amino acids together in a specific order. Scientists have known for more than half a century that these cellular machines are themselves made up of about 80 different proteins, called ribosomal proteins, along with several RNA molecules, and that these components are added in a particular sequence to construct new ribosomes. But no one has known the mechanism that controls that process.

Now researchers from Caltech and Heidelberg University have combined their expertise to track a ribosomal protein in yeast all the way from its synthesis in the cytoplasm, the cellular compartment surrounding the nucleus of a cell, to its incorporation into a developing ribosome within the nucleus. In so doing, they have identified a new chaperone protein, known as Acl4, that ushers a specific ribosomal protein through the construction process, as well as a new regulatory mechanism that likely occurs in all eukaryotic cells.

The results, described in a paper that appears online in the journal Molecular Cell, also suggest an approach for making new antifungal agents.

The work was completed in the labs of André Hoelz, assistant professor of chemistry at Caltech, and Ed Hurt, director of the Heidelberg University Biochemistry Center (BZH).

"We now understand how this chaperone, Acl4, works with its ribosomal protein with great precision," says Hoelz. "Seeing that is kind of like being able to freeze a bullet whizzing through the air and turn it around and analyze it in all dimensions to see exactly what it looks like."

That is because the entire ribosome assembly process—including the synthesis of new ribosomal proteins by ribosomes in the cytoplasm, the transfer of those proteins into the nucleus, their incorporation into a developing ribosome, and the completed ribosome's export back out of the nucleus into the cytoplasm—happens on a timescale of tens of minutes. Indeed, mammalian cells produce more than a million ribosomes per day to allow for turnover and cell division. Being able to follow a ribosomal protein through that process is therefore no simple task.

Hurt and his team in Germany have developed a new technique to capture the state of a ribosomal protein shortly after it is synthesized. When they "stopped" this particular flying bullet, an important ribosomal protein known as L4, they found that it was bound to Acl4.

Hoelz's group at Caltech then used X-ray crystallography to obtain an atomic snapshot of Acl4 and further biochemical interaction studies to establish how Acl4 recognizes and protects L4. They found that Acl4 attaches to L4 (having a high affinity for only that ribosomal protein) as it emerges from the ribosome that produced it, akin to a hand gripping a baseball. In this way, the chaperone ensures that the ribosomal protein is protected from machinery in the cell that would otherwise destroy it, and it ushers the L4 molecule through the sole gateway between the nucleus and cytoplasm, called the nuclear pore complex, to the site in the nucleus where new ribosomes are constructed.

"The ribosomal protein together with its chaperone basically travel through the nucleus and screen their surroundings until they find an assembling ribosome that is at exactly the right stage for the ribosomal protein to be incorporated," explains Ferdinand Huber, a graduate student in Hoelz's group and one of the first authors on the paper. "Once found, the chaperone lets the ribosomal protein go and gets recycled to go pick up another protein."

The researchers say that Acl4 is just one example from a whole family of chaperone proteins that likely work in this same fashion.

Hoelz adds that if this process does not work properly, ribosomes and proteins cannot be made. Some diseases (including aggressive leukemia subtypes) are associated with malfunctions in this process.

"It is likely that human cells also contain a dedicated assembly chaperone for L4. However, we are certain that it has a distinct atomic structure, which might allow us to develop new antifungal agents," Hoelz says. "By preventing the chaperone from interacting with its partner, you could keep the cell from making new ribosomes. You could potentially weaken the organism to the point where the immune system could then clear the infection. This is a completely new approach."

Co-first authors on the paper, "Coordinated Ribosomal L4 Protein Assembly into the Pre-Ribosome Is Regulated by Its Eukaryote-Specific Extension," are Huber and Philipp Stelter of Heidelberg University. Additional authors include Ruth Kunze and Dirk Flemming also from Heidelberg University. The work was supported by the Boehringer Ingelheim Fonds, the V Foundation for Cancer Research, the Edward Mallinckrodt, Jr. Foundation, the Sidney Kimmel Foundation for Cancer Research, and the German Research Foundation (DFG).


Switching On One-Shot Learning in the Brain

Caltech researchers find the brain regions responsible for jumping to conclusions

Most of the time, we learn only gradually, incrementally building connections between actions or events and outcomes. But there are exceptions—every once in a while, something happens and we immediately learn to associate that stimulus with a result. For example, maybe you have had bad service at a store once and sworn that you will never shop there again.

This type of one-shot learning is more than handy when it comes to survival—think of an animal quickly learning to avoid a type of poisonous berry. In that case, jumping to the conclusion that the fruit was to blame for a bout of illness might help the animal steer clear of the same danger in the future. On the other hand, quickly drawing connections despite a lack of evidence can also lead to misattributions and superstitions; for example, you might blame a new food you tried for an illness when in fact it was harmless, or you might begin to believe that if you do not eat your usual meal, you will get sick.

Scientists have long suspected that one-shot learning involves a different brain system than gradual learning, but could not explain what triggers this rapid learning or how the brain decides which mode to use at any one time.

Now Caltech scientists have discovered that uncertainty about the causal relationship (whether an outcome is actually caused by a particular stimulus) is the main factor in determining whether rapid learning occurs: the greater that uncertainty, the more likely it is that one-shot learning will take place. When uncertainty is high, they suggest, you need to be more focused in order to learn the relationship between stimulus and outcome.

The researchers have also identified a part of the prefrontal cortex—the large brain area located immediately behind the forehead that is associated with complex cognitive activities—that appears to evaluate such causal uncertainty and then activate one-shot learning when needed.

The findings, described in the April 28 issue of the journal PLOS Biology, could lead to new approaches for helping people learn more efficiently. The work also suggests that an inability to properly attribute cause and effect might lie at the heart of some psychiatric disorders that involve delusional thinking, such as schizophrenia.

"Many have assumed that the novelty of a stimulus would be the main factor driving one-shot learning, but our computational model showed that causal uncertainty was more important," says Sang Wan Lee, a postdoctoral scholar in neuroscience at Caltech and lead author of the new paper. "If you are uncertain, or lack evidence, about whether a particular outcome was caused by a preceding event, you are more likely to quickly associate them together."

The researchers used a simple behavioral task paired with brain imaging to determine where in the brain this causal processing takes place. Based on the results, it appears that the ventrolateral prefrontal cortex (VLPFC) is involved in the processing and then couples with the hippocampus to switch on one-shot learning, as needed.

Indeed, a switch is an appropriate metaphor, says Shinsuke Shimojo, Caltech's Gertrude Baltimore Professor of Experimental Psychology. Since the hippocampus is known to be involved in so-called episodic memory, in which the brain quickly links a particular context with an event, the researchers hypothesized that this brain region might play a role in one-shot learning. But they were surprised to find that the coupling between the VLPFC and the hippocampus was either all or nothing. "Like a light switch, one-shot learning is either on, or it's off," says Shimojo.

In the behavioral study, 47 participants completed a simple causal-inference task; 20 of those participants completed the study in the Caltech Brain Imaging Center, where their brains were monitored using functional magnetic resonance imaging (fMRI). The task consisted of multiple trials. During each trial, participants were shown a series of five images one at a time on a computer screen. Over the course of the task, some images appeared multiple times, while others appeared only once or twice. After every fifth image, either a positive or negative monetary outcome was displayed. Following a number of trials, participants were asked to rate how strongly they thought each image and outcome were linked. As the task proceeded, participants gradually learned to associate some of the images with particular outcomes. One-shot learning was apparent in cases where participants made an association between an image and an outcome after a single pairing.
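
To make the study design concrete, here is a hypothetical reconstruction of that trial structure in code; the image counts, repetition rules, and payoff scheme are assumptions for illustration, not the published parameters.

```python
import random

# Hypothetical reconstruction of the causal-inference task described above.
# All specific numbers here are assumptions, not the published design.

def make_trials(n_trials=40, n_repeated=6, n_rare=30, seed=0):
    rng = random.Random(seed)
    repeated = [f"img_repeated_{i}" for i in range(n_repeated)]
    rare = [f"img_rare_{i}" for i in range(n_rare)]
    trials = []
    for _ in range(n_trials):
        # Each trial shows five images: four drawn from a recurring pool and
        # a fifth that is sometimes an image seen only once or twice overall.
        images = rng.sample(repeated, 4)
        if rng.random() < 0.5:
            images.append(rng.choice(rare))
        else:
            images.append(rng.choice([r for r in repeated if r not in images]))
        rng.shuffle(images)
        # After the fifth image, a positive or negative monetary outcome.
        outcome = rng.choice([+1, -1])
        trials.append({"images": images, "outcome": outcome})
    return trials

for trial in make_trials()[:3]:
    print(trial["images"], trial["outcome"])
```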

The researchers hypothesize that the VLPFC acts as a controller mediating the one-shot learning process. They caution, however, that they have not yet proven that the brain region actually controls the process in that way. To prove that, they will need to conduct additional studies that will involve modifying the VLPFC's activity with brain stimulation and seeing how that directly affects behavior.

Still, the researchers are intrigued by the fact that the VLPFC is very close to another part of the ventrolateral prefrontal cortex that they previously found to be involved in helping the brain to switch between two other forms of learning—habitual and goal-directed learning, which involve routine behavior and more carefully considered actions, respectively. "Now we might cautiously speculate that a significant general function of the ventrolateral prefrontal cortex is to act as a leader, telling other parts of the brain involved in different types of behavioral functions when they should get involved and when they should not get involved in controlling our behavior," says coauthor John O'Doherty, professor of psychology and director of the Caltech Brain Imaging Center.

The work, "Neural Computations Mediating One-Shot Learning in the Human Brain," was supported by the National Institutes of Health, the Gordon and Betty Moore Foundation, the Japan Science and Technology Agency–CREST, and the Caltech-Tamagawa global Center of Excellence.

Writer: Kimm Fesenmaier

Tracking Photosynthesis from Space

Watching plants perform photosynthesis from space sounds like a futuristic proposal, but a new application of data from NASA's Orbiting Carbon Observatory-2 (OCO-2) satellite may enable scientists to do just that. The new technique, which allows researchers to analyze plant productivity from far above Earth, will provide a clearer picture of the global carbon cycle and may one day help researchers determine the best regional farming practices and even spot early signs of drought.

When plants are alive and healthy, they engage in photosynthesis, absorbing sunlight and carbon dioxide to produce food and generating oxygen as a by-product. But photosynthesis does more than keep plants alive. On a global scale, the process takes up some of the man-made emissions of atmospheric carbon dioxide—a greenhouse gas that traps the sun's heat near Earth's surface—meaning that plants also have an important role in mitigating climate change.

To perform photosynthesis, the chlorophyll in leaves absorbs sunlight—most of which is used to create food for the plants or is lost as heat. However, a small fraction of that absorbed light is reemitted as near-infrared light. We cannot see in the near-infrared portion of the spectrum with the naked eye, but if we could, this reemitted light would make the plants appear to glow—a property called solar-induced fluorescence (SIF). Because this reemitted light is only produced when the chlorophyll in plants is also absorbing sunlight for photosynthesis, SIF can be used as a way to determine a plant's photosynthetic activity and productivity.
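
The underlying logic can be written compactly in the standard light-use-efficiency form; the notation below is an assumption of this sketch rather than anything defined in the article. Both fluorescence and photosynthesis scale with the light the canopy actually absorbs, which is why one can serve as a proxy for the other:

```latex
% Standard light-use-efficiency bookkeeping (symbols are assumptions):
% APAR    -- absorbed photosynthetically active radiation
% \Phi_F  -- fluorescence yield
% \Phi_P  -- photosynthetic light-use efficiency
% GPP     -- gross primary productivity
\mathrm{SIF} = \mathrm{APAR} \cdot \Phi_F, \qquad
\mathrm{GPP} = \mathrm{APAR} \cdot \Phi_P
\quad\Longrightarrow\quad
\mathrm{GPP} = \mathrm{SIF} \cdot \frac{\Phi_P}{\Phi_F}.
```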

"The intensity of the SIF appears to be very correlated with the total productivity of the plant," says JPL scientist Christian Frankenberg, who is lead for the SIF product and will join the Caltech faculty in September as an associate professor of environmental science and engineering in the Division of Geological and Planetary Sciences.

Usually, when researchers try to estimate photosynthetic activity from satellites, they utilize a measure called the greenness index, which uses reflections in the near-infrared spectrum of light to determine the amount of chlorophyll in the plant. However, this is not a direct measurement of plant productivity; a plant that contains chlorophyll is not necessarily undergoing photosynthesis. "For example," Frankenberg says, "evergreen trees are green in the winter even when they are dormant."

He adds, "When a plant starts to undergo stress situations, like in California during a summer day when it's getting very hot and dry, the plants still have chlorophyll"—chlorophyll that would still appear to be active in the greenness index—"but they usually close the tiny pores in their leaves to reduce water loss, and that time of stress is also when SIF is reduced. So photosynthesis is being very strongly reduced at the same time that the fluorescence signal is also getting weaker, albeit at a smaller rate."

The Caltech and JPL team, as well as colleagues from NASA Goddard, discovered that they could measure SIF from orbit using spectrometers—standard instruments that can detect light intensity—that are already on board satellites like Japan's Greenhouse Gases Observing Satellite (GOSAT) and NASA's OCO-2.

In 2014, using this new technique with data from GOSAT and the European Global Ozone Monitoring Experiment–2 satellite, the researchers scoured the globe for the most productive plants and determined that the U.S. "Corn Belt"—the farming region stretching from Ohio to Nebraska—is the most photosynthetically active place on the planet. Although it stands to reason that a cornfield during growing season would be actively undergoing photosynthesis, the high-resolution measurements from a satellite enabled global comparison to other plant-heavy regions—such as tropical rainforests.

"Before, when people used the greenness index to represent active photosynthesis, they had trouble determining the productivity of very dense plant areas, such as forests or cornfields. With enough green plant material in the field of view, these greenness indexes can saturate; they reach a maximum value they can't exceed," Frankenberg says. Because of the sensitivity of the SIF measurements, researchers can now compare the true productivity of fields from different regions without this saturation—information that could potentially be used to compare the efficiency of farming practices around the world.

Now that OCO-2 is online and producing data, Frankenberg says that it is capable of achieving higher resolution than the preliminary experiments with GOSAT. Therefore, OCO-2 will be able to provide an even clearer picture of plant productivity worldwide. However, to get more specific information about how plants influence the global carbon cycle, an evenly distributed ground-based network of spectrometers will be needed. Such a network—located down among the plants rather than miles above—will provide more information about regional uptake of carbon dioxide via photosynthesis and the mechanistic link between SIF and actual carbon exchange.

One existing network, called FLUXNET, uses ground-based towers at more than 600 locations worldwide to measure the exchange of carbon dioxide, or carbon flux, between the land and the atmosphere. However, the towers only measure the exchange of carbon dioxide and are unable to directly observe the activities of the biosphere that drive this exchange.

The new ground-based measurements will ideally take place at existing FLUXNET sites, but they will be performed with a small set of high-resolution spectrometers—similar to the kind that OCO-2 uses—to allow the researchers to use the same measurement principles they developed for space. The revamped ground network was initially proposed in a 2012 workshop at the Keck Institute for Space Studies and is expected to go online sometime in the next two years.

In the future, a clear picture of global plant productivity could influence a range of decisions relevant to farmers, commodity traders, and policymakers. "Right now, the SIF data we can gather from space is too coarse of a picture to be really helpful for these conversations, but, in principle, with the satellite and ground-based measurements you could track the fluorescence in fields at different times of day," he says. This hourly tracking would not only allow researchers to detect the productivity of the plants, but it could also spot the first signs of plant stress—a factor that impacts crop prices and food security around the world.

"The measurements of SIF from OCO-2 greatly extend the science of this mission", says Paul Wennberg, R. Stanton Avery Professor of Atmospheric Chemistry and Environmental Science and Engineering, director of the Ronald and Maxine Linde Center for Global Environmental Science, and a member of the OCO-2 science team. "OCO-2 was designed to map carbon dioxide, and scientists plan to use these measurements to determine the underlying sources and sinks of this important gas. The new SIF measurements will allow us to diagnose the efficiency of the plants—a key component of the sinks of carbon dioxide."

By using OCO-2 to diagnose plant activity around the globe, this new research could contribute to understanding the variability in crop primary productivity and, eventually, to the development of technologies that can improve crop efficiency—a goal that could greatly benefit humankind, Frankenberg says.

This project is funded by the Keck Institute for Space Studies and JPL. Wennberg is also an executive officer for the Environmental Science and Engineering (ESE) program. ESE is a joint program of the Division of Engineering and Applied Science, the Division of Chemistry and Chemical Engineering, and the Division of Geological and Planetary Sciences.
