Celebrating 11 Years of CARMA Discoveries

For more than a decade, large, moveable telescopes tucked away on a remote, high-altitude site in the Inyo Mountains, about 250 miles northeast of Los Angeles, have worked together to paint a picture of the universe through radio-wave observations.

Known as the Combined Array for Research in Millimeter-wave Astronomy, or CARMA, the telescopes formed one of the most powerful millimeter interferometers in the world. CARMA was created in 2004 through the merger of the Owens Valley Radio Observatory (OVRO) Millimeter Array and the Berkeley Illinois Maryland Association (BIMA) Array and initially consisted of 15 telescopes. In 2008, the University of Chicago joined CARMA, increasing the telescope count to 23.

An artist's depiction of a gamma-ray burst, the most powerful explosive event in the universe. CARMA detected the millimeter-wavelength emission from the afterglow of gamma-ray burst 130427A only 18 hours after it exploded on April 27, 2013. The observations revealed a surprise: in addition to the forward-moving shock, CARMA showed the presence of a backward-moving, or "reverse," shock that had long been predicted but never conclusively observed.
Credit: Gemini Observatory/AURA, artwork by Lynette Cook

CARMA's higher elevation, improved electronics, and greater number of connected antennas enabled more precise observations of radio emission from molecules and cold dust across the universe, leading to groundbreaking studies that encompass a range of cosmic objects and phenomena, including stellar birth, early planet formation, supermassive black holes, galaxies, galaxy mergers, and sudden, unexpected events such as gamma-ray bursts and supernova explosions.

"Over its lifetime, it has moved well beyond its initial goals both scientifically and technically," says Anneila Sargent (MS '67, PhD '78, both degrees in astronomy), the Ira S. Bowen Professor of Astronomy at Caltech and the first director of CARMA.

On April 3, CARMA probed the skies for the last time. The project ceased operations and its telescopes will be repurposed and integrated into other survey projects.

Here is a look back at some of CARMA's most significant discoveries and contributions to the field of astronomy.

Planet formation


These CARMA images highlight the range of morphologies observed in circumstellar disks, which may indicate that the disks are at different stages of the planet formation process, or that they are evolving along distinct pathways. The bottom row highlights the disk around the star LkCa 15, where CARMA detected an inner hole 40 AU in diameter. The two-color Keck image (bottom right) reveals an infrared source along the inner edge of this hole; its infrared luminosity is consistent with a planet of about six Jupiter masses, which may have cleared the hole.
Credit: CARMA

Newly formed stars are surrounded by a rotating disk of gas and dust, known as a circumstellar disk. These disks provide the building materials for planetary systems like our own solar system, and can contain important clues about the planet formation process.

During its operation, CARMA imaged disks around dozens of young stars such as RY Tau and DG Tau. The observations revealed that circumstellar disks are often larger than our solar system and contain enough material to form Jupiter-sized planets. Interestingly, these disks exhibit a variety of morphologies, and scientists think the different shapes reflect different stages or pathways of the planet formation process.

CARMA also helped gather evidence that supported planet formation theories by capturing some of the first images of gaps in circumstellar disks. According to conventional wisdom, planets can form in disks when stars are as young as half a million years old. Computer models show that if these so-called protoplanets are the size of Jupiter or larger, they should carve out gaps or holes in the disks through gravitational interactions with the disk material. In 2012, the team of OVRO executive director John Carpenter reported using CARMA to observe one such gap in the disk surrounding the young star LkCa 15. Observations by the Keck Observatory in Hawaii revealed an infrared source along the inner edge of the gap that was consistent with a planet six times the mass of Jupiter.

"Until ALMA"—the Atacama Large Millimeter/submillimeter Array in Chile, a billion-dollar international collaboration involving the United States, Europe, and Japan—"came along, CARMA produced the highest-resolution images of circumstellar disks at millimeter wavelengths," says Carpenter.

Star formation


A color image of the Whirlpool galaxy M51 from the Hubble Space Telescope (HST): a three-color composite of images taken at wavelengths of 4350 angstroms (blue), 5550 angstroms (green), and 6580 angstroms (red). The bright red regions trace recent massive star formation, where ultraviolet photons from the massive stars ionize the surrounding gas, which then radiates hydrogen recombination-line emission. Dark lanes run along the spiral arms, marking where the dense interstellar medium is most abundant.
Credit: Jin Koda

Stars form in "clouds" of gas, consisting primarily of molecular hydrogen, that contain as much as a million times the mass of the sun. "We do not understand yet how the diffuse molecular gas distributed over large scales flows to the small dense regions that ultimately form stars," Carpenter says.

Magnetic fields may play a key role in the star formation process, but obtaining observations of these fields, especially on small scales, is challenging. Using CARMA, astronomers were able to chart the direction of the magnetic field in the dense material that surrounds newly formed protostars by mapping the polarized thermal radiation from dust grains in molecular clouds. A CARMA survey of the polarized dust emission from 29 sources showed that magnetic fields in the dense gas are randomly oriented with respect to the outflowing gas entrained by jets from the protostars.

If the outflows emerge along the rotation axes of circumstellar disks, as has been observed in a few cases, the results suggest that, contrary to theoretical expectations, the circumstellar disks are not aligned with the fields in the dense gas from which they formed. "We don't know the punch line—are magnetic fields critical in the star formation process or not?—because, as always, the observations just raise more questions," Carpenter admits. "But the CARMA observations are pointing the direction for further observations with ALMA."

Molecular gas in galaxies


CARMA was used to image molecular gas in the nearby Andromeda galaxy. All stars form in dense clouds of molecular gas, so analyzing the properties of molecular clouds is essential to understanding star formation.
Credit: Andreas Schruba

The molecular gas in galaxies is the raw material for star formation. "Being able to study how much gas there is in a galaxy, how it's converted to stars, and at what rate is very important for understanding how galaxies evolve over time," Carpenter says.

By resolving the molecular gas reservoirs in local galaxies and measuring the mass of gas in distant galaxies that existed when the cosmos was a fraction of its current age, CARMA made fundamental contributions to understanding the processes that shape the observable universe.

For example, CARMA revealed the evolution, in the spiral galaxy M51, of giant molecular clouds (GMCs) driven by large-scale galactic structure and dynamics. CARMA was used to show that giant molecular clouds grow through coalescence and then break up into smaller clouds that may again come together in the future. Furthermore, the process can occur multiple times over a cloud's lifetime. This new picture of molecular cloud evolution is more complex than previous scenarios, which treated the clouds as discrete objects that dissolved back into the atomic interstellar medium after a certain period of time. "CARMA's imaging capability showed the full cycle of GMCs' dynamical evolution for the first time," Carpenter says.

The Milky Way's black hole

CARMA worked as a standalone array, but it could also take part in very-long-baseline interferometry (VLBI), in which astronomical radio signals are gathered from multiple radio telescopes on Earth to create higher-resolution images than any single telescope could achieve alone.

In this fashion, CARMA was linked with the Submillimeter Telescope in Arizona and the James Clerk Maxwell Telescope and Submillimeter Array in Hawaii to paint one of the most detailed pictures to date of the monstrous black hole at the heart of our Milky Way galaxy. The combined observations achieved an angular resolution of 40 microarcseconds, the equivalent of seeing a tennis ball on the moon.

"If you just used CARMA alone, then the best resolution you would get is 0.15 arcseconds. So VLBI improved the resolution by a factor of 3,750," Carpenter says.
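Carpenter's factor can be checked with quick arithmetic (an illustrative sketch; the 0.15-arcsecond and 40-microarcsecond figures are the ones quoted above):

```python
# Compare CARMA's best standalone resolution with the VLBI beam.
carma_best_arcsec = 0.15
vlbi_microarcsec = 40.0

carma_microarcsec = carma_best_arcsec * 1e6  # 1 arcsec = 10^6 microarcsec
improvement = carma_microarcsec / vlbi_microarcsec
print(improvement)  # 3750.0
```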

Astronomers have used the VLBI technique to successfully detect radio signals emitted from gas orbiting just outside of this supermassive black hole's event horizon, the radius around the black hole where gravity is so strong that even light cannot escape. "These observations measured the size of the emitting region around the black hole and placed constraints on the accretion disk that is feeding the black hole," he explains.

In other work, VLBI observations showed that the black hole at the center of M87, a giant elliptical galaxy, is spinning.

Transients

CARMA also played an important role in following up "transients," objects that unexpectedly burst into view and then fade just as rapidly (on an astronomical timescale), over periods ranging from seconds to years. Some transients can be attributed to powerful cosmic explosions such as gamma-ray bursts (GRBs) or supernovas, but the mechanisms by which they originate remain unexplained.

"By looking at transients at different wavelengths—and, in particular, looking at them soon after they are discovered—we can understand the progenitors that are causing these bursts," says Carpenter, who notes that CARMA led the field in observations of these events at millimeter wavelengths. Indeed, on April 27, 2013, CARMA detected the millimeter-wavelength emission from the afterglow of GRB 130427A only 18 hours after it first exploded. The CARMA observations revealed a surprise: in addition to the forward-moving shock, there was one moving backward. This "reverse" shock had long been predicted, but never conclusively observed.

Getting data on such unpredictable transient events is difficult at many observatories, because of logistics and the complexity of scheduling. "Targets of opportunity require flexibility on the part of the organization to respond to an event when it happens," says Sterl Phinney (BS '80, astronomy), professor of theoretical astrophysics and executive officer for astronomy and astrophysics at Caltech. "CARMA was excellent for this purpose, because it was so nimble."

Galaxy clusters


Multi-wavelength view of the redshift z=0.2 cluster MS0735+7421. Left to right: CARMA observations of the SZ effect, X-ray data from Chandra, radio data from the VLA, and a three-color composite of the three. The SZ image reveals a large-scale distortion of the intra-cluster medium coincident with X-ray cavities produced by a massive AGN outflow, an example of the wide dynamic-range, multi-wavelength cluster imaging enabled by CARMA.
Credit: Erik Leitch (University of Chicago, Owens Valley Radio Observatory)

Galaxy clusters are the largest gravitationally bound objects in the universe. CARMA studied galaxy clusters by taking advantage of a phenomenon known as the Sunyaev-Zel'dovich (SZ) effect. The SZ effect arises when primordial radiation left over from the Big Bang, known as the cosmic microwave background (CMB), is scattered to higher energies by interacting with the hot ionized gas that permeates galaxy clusters. Using CARMA, astronomers recently confirmed galaxy cluster candidates at redshifts of 1.75 and 1.9, making them the two most distant clusters for which an SZ effect has been measured.

"CARMA can detect the distortion in the CMB spectrum," Carpenter says. "We've observed over 100 clusters at very good resolution. These data have been very important to calibrating the relation between the SZ signal and the cluster mass, probing the structure of clusters, and helping discover the most distant clusters known in the universe."

Training the next generation

In addition to its many scientific contributions, CARMA also served as an important teaching facility for the next generation of astronomers. About 300 graduate students and postdoctoral researchers have cut their teeth on interferometric astronomy at CARMA over the years. "They were able to get hands-on experience in millimeter-wave astronomy at the observatory, something that is becoming more and more rare these days," Sargent says.

Tom Soifer (BS '68, physics), professor of physics and Kent and Joyce Kresa Leadership Chair of the Division of Physics, Mathematics and Astronomy, notes that many of those trainees now hold prestigious positions at the National Radio Astronomy Observatory (NRAO) or are professors at universities across the country, where they educate future scientists and engineers and help with the North American ALMA effort. "The United States is currently part of a tripartite international collaboration that operates ALMA. Most of the North American ALMA team trained either at CARMA or the Caltech OVRO Millimeter Array, CARMA's precursor," he says.

Looking ahead

Following CARMA's shutdown, the Cedar Flats sites will be restored to prior conditions, and the telescopes will be moved to OVRO. Although the astronomers closest to the observatory find the closure disappointing, Phinney takes a broader view, seeing the shutdown as part of the steady march of progress in astronomy. "CARMA was the cutting edge of high-frequency astronomy for the past decade. Now that mantle has passed to the global facility called ALMA, and Caltech will take on new frontiers."

Indeed, Caltech continues to push the technological frontier of astronomy through other projects. For example, Caltech Assistant Professor of Astronomy Greg Hallinan is leading the effort to build a Long Wavelength Array (LWA) station at OVRO that will instantaneously image the entire viewable sky every few seconds at low radio frequencies to search for radio transients.

The success of CARMA and OVRO, Soifer says, gives him confidence that the LWA will also be successful. "We have a tremendously capable group of scientists and engineers. If anybody can make this challenging enterprise work, they can."


Yeast Protein Network Could Provide Insights into Human Obesity

A team of biologists and a mathematician have identified and characterized a network composed of 94 proteins that work together to regulate fat storage in yeast.

"Removal of any one of the proteins results in an increase in cellular fat content, which is analogous to obesity," says study coauthor Bader Al-Anzi, a research scientist at Caltech.

The findings, detailed in the May issue of the journal PLOS Computational Biology, suggest that yeast could serve as a valuable test organism for studying human obesity.

"Many of the proteins we identified have mammalian counterparts, but detailed examinations of their roles in humans have been challenging," says Al-Anzi. "The obesity research field would benefit greatly if a single-cell model organism such as yeast could be used—one that can be analyzed using easy, fast, and affordable methods."

Using genetic tools, Al-Anzi and his research assistant Patrick Arpp screened a collection of about 5,000 different mutant yeast strains and identified 94 genes that, when removed, produced yeast with increases in fat content, as measured by quantitating fat bands on thin-layer chromatography plates. Other studies have shown that such "obese" yeast cells grow more slowly than normal, an indication that in yeast as in humans, too much fat accumulation is not a good thing. "A yeast cell that uses most of its energy to synthesize fat that is not needed does so at the expense of other critical functions, and that ultimately slows down its growth and reproduction," Al-Anzi says.

When the team looked at the protein products of the genes, they discovered that those proteins are physically bonded to one another to form an extensive, highly clustered network within the cell.

Such a configuration cannot be generated through a random process, say study coauthors Sherif Gerges, a bioinformatician at Princeton University, and Noah Olsman, a graduate student in Caltech's Division of Engineering and Applied Science, who independently evaluated the details of the network. Both concluded that the network must have formed as the result of evolutionary selection.

In human-scale networks, such as the Internet, power grids, and social networks, the most influential or critical nodes are often, but not always, those that are the most highly connected.

The team wondered whether the fat-storage network exhibits this feature, and, if not, whether some other characteristics of the nodes would determine which ones were most critical. Then, they could ask if removing the genes that encode the most critical nodes would have the largest effect on fat content.

To examine this hypothesis further, Al-Anzi sought out the help of a mathematician familiar with graph theory, the branch of mathematics that considers the structure of nodes connected by edges, or pathways. "When I realized I needed help, I closed my laptop and went across campus to the mathematics department at Caltech," Al-Anzi recalls. "I walked into the only office door that was open at the time, and introduced myself."

The mathematician that Al-Anzi found that day was Christopher Ormerod, a Taussky–Todd Instructor in Mathematics at Caltech. Al-Anzi's data piqued Ormerod's curiosity. "I was especially struck by the fact that connections between the proteins in the network didn't appear to be random," says Ormerod, who is also a coauthor on the study. "I suspected there was something mathematically interesting happening in this network."

With the help of Ormerod, the team created a computer model that suggested the yeast fat network exhibits what is known as the small-world property. This is akin to a social network that contains many different local clusters of people who are linked to each other by mutual acquaintances, so that any person within the cluster can be reached via another person through a small number of steps.

This pattern is also seen in a well-known network model in graph theory, called the Watts-Strogatz model. The model was originally devised to explain the clustering phenomenon often observed in real networks, but had not previously been applied to cellular networks.

Ormerod suggested that graph theory might be used to make predictions that could be experimentally proven. For example, graph theory says that the most important nodes in the network are not necessarily the ones with the most connections, but rather those that have the most high-quality connections. In particular, nodes having many distant or circuitous connections are less important than those with more direct connections to other nodes, and, especially, direct connections to other important nodes. In mathematical jargon, these important nodes are said to have a high "centrality score."
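To make the idea concrete, the ranking can be sketched in a few lines of self-contained Python (an illustration only, not the study's code: the graph is a toy Watts-Strogatz-style network of 94 nodes with arbitrary parameters, scored by closeness centrality, one common centrality measure):

```python
import random
from collections import deque

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, each tied to its k nearest neighbors,
    with every edge rewired to a random target with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in list(adj[i]):
            if j > i and rng.random() < p:
                new = rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(j)
                    adj[j].discard(i)
                    adj[i].add(new)
                    adj[new].add(i)
    return adj

def closeness(adj, node):
    """Inverse of the mean shortest-path distance from node (via BFS);
    direct connections count for more than circuitous ones."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

graph = watts_strogatz(n=94, k=4, p=0.1, seed=1)
scores = {v: closeness(graph, v) for v in graph}
# The highest-scoring nodes are the predicted "most critical" proteins.
top_five = sorted(scores, key=scores.get, reverse=True)[:5]
```

Betweenness or eigenvector centrality would serve equally well here; the point is only that a node's score depends on the quality of its paths to the rest of the network, not on its raw connection count.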

"In network analysis, the centrality of a node serves as an indicator of its importance to the overall network," Ormerod says.

"Our work predicts that changing the proteins with the highest centrality scores will have a bigger effect on network output than average," he adds. And indeed, the researchers found that the removal of proteins with the highest predicted centrality scores produced yeast cells with a larger fat band than in yeast whose less-important proteins had been removed.

The use of centrality scores to gauge the relative importance of a protein in a cellular network is a marked departure from how proteins traditionally have been viewed and studied—that is, as lone players, whose characteristics are individually assessed. "It was a very local view of how cells functioned," Al-Anzi says. "Now we're realizing that the majority of proteins are parts of signaling networks that perform specific tasks within the cell."

Moving forward, the researchers think their technique could be applicable to protein networks that control other cellular functions—such as abnormal cell division, which can lead to cancer.

"These kinds of methods might allow researchers to determine which proteins are most important to study in order to understand diseases that arise when these functions are disrupted," says Kai Zinn, a professor of biology at Caltech and the study's senior author. "For example, defects in the control of cell growth and division can lead to cancer, and one might be able to use centrality scores to identify key proteins that regulate these processes. These might be proteins that had been overlooked in the past, and they could represent new targets for drug development."

Funding support for the paper, "Experimental and Computational Analysis of a Large Protein Network That Controls Fat Storage Reveals the Design Principles of a Signaling Network," was provided by the National Institutes of Health.


Using Radar Satellites to Study Icelandic Volcanoes and Glaciers

On August 16 of last year, Mark Simons, a professor of geophysics at Caltech, landed in Reykjavik with 15 students and two other faculty members to begin leading a tour of the volcanic, tectonic, and glaciological highlights of Iceland. That same day, a swarm of earthquakes began shaking the island nation—seismicity that was related to one of Iceland's many volcanoes, Bárðarbunga caldera, which lies beneath Vatnajökull ice cap.

As the trip proceeded, it became clear to scientists studying the event that magma beneath the caldera was feeding a dyke, a vertical sheet of magma slicing through the crust in a northeasterly direction. On August 29, as the Caltech group departed Iceland, the dyke triggered an eruption in a lava field called Holuhraun, about 40 kilometers (roughly 25 miles) from the caldera, just beyond the northern limit of the ice cap.

Although the timing of the volcanic activity necessitated some shuffling of the trip's activities, such as canceling planned overnight visits near what was soon to become the eruption zone, it was also scientifically fortuitous. Simons is one of the leaders of a Caltech/JPL project known as the Advanced Rapid Imaging and Analysis (ARIA) program, which aims to use a growing constellation of international imaging radar satellites to improve situational awareness, and thus response, following natural disasters. Under the ARIA umbrella, Caltech and JPL/NASA had already formed a collaboration with the Italian Space Agency (ASI) to use its COSMO-SkyMed (CSK) constellation (consisting of four orbiting X-band radar satellites) following such events.

Through the ASI/ARIA collaboration, the managers of CSK agreed to target the activity at Bárðarbunga for imaging using a technique called interferometric synthetic aperture radar (InSAR). As two CSK satellites flew over, separated by just one day, they bounced signals off the ground to create images of the surface of the glacier above the caldera. By comparing those two images in what is called an interferogram, the scientists could see how the glacier surface had moved during that intervening day. By the evening of August 28, Simons was able to pull up that first interferogram on his cell phone. It showed that the ice above the caldera was subsiding at a rate of 50 centimeters (more than a foot and a half) a day—a clear indication that the magma chamber below Bárðarbunga caldera was deflating.
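The geometry behind that measurement can be sketched briefly (an illustrative calculation; the ~3.1-centimeter X-band wavelength is an assumed value typical of satellites like COSMO-SkyMed, not a figure from the article):

```python
import math

# In InSAR, each 2*pi of phase difference between two radar passes
# corresponds to half a wavelength of line-of-sight ground motion
# (the signal travels to the ground and back).
wavelength_m = 0.031  # assumed X-band wavelength, ~3.1 cm

def los_displacement_m(phase_diff_rad):
    return phase_diff_rad * wavelength_m / (4 * math.pi)

# 50 cm of subsidence in a single day wraps through many fringes:
fringes_per_day = 0.50 / (wavelength_m / 2)
print(round(fringes_per_day, 1))  # 32.3
```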

The next morning, before his return flight to the United States, Simons took the data to researchers at the University of Iceland who were tracking Bárðarbunga's activity.

"At that point, there had been no recognition that the caldera was collapsing. Naturally, they were focused on the dyke and all the earthquakes to the north," says Simons. "Our goal was just to let them know about the activity at the caldera because we were really worried about the possibility of triggering a subglacial melt event that would generate a catastrophic flood."

Luckily, that flood never happened, but the researchers at the University of Iceland did ramp up observations of the caldera with radar altimetry flights and installed a continuous GPS station on the ice overlying the center of the caldera.

Last December, Icelandic researchers published a paper in Nature about the Bárðarbunga event, largely focusing on the dyke and eruption. Now, completing the picture, Simons and his colleagues have developed a model to describe the collapsing caldera and the earthquakes produced by that action. The new findings appear in the journal Geophysical Journal International.

"Over a span of two months, there were more than 50 magnitude-5 earthquakes in this area. But they didn't look like regular faulting—like shearing a crack," says Simons. "Instead, the earthquakes looked like they resulted from movement inward along a vertical axis and horizontally outward in a radial direction—like an aluminum can when it's being crushed."

To try to determine what was actually generating the unusual earthquakes, Bryan Riel, a graduate student in Simons's group and lead author on the paper, used the original one-day interferogram of the Bárðarbunga area along with four others collected by CSK in September and October. Most of those one-day pairs spanned at least one of the earthquakes, but in a couple of cases, they did not. That allowed Riel to isolate the effect of the earthquakes and determine that most of the subsidence of the ice was due to what is called aseismic activity—the kind that does not produce big earthquakes. Thus, Riel was able to show that the earthquakes were not the primary cause of the surface deformation inferred from the satellite radar data.

"What we know for sure is that the magma chamber was deflating as the magma was feeding the dyke going northward," says Riel. "We have come up with two different models to explain what was actually generating the earthquakes."

In the first scenario, because the magma chamber deflated, pressure from the overlying rock and ice caused the caldera to collapse, producing the unusual earthquakes. This mechanism has been observed in cases of collapsing mines (e.g., the Crandall Canyon Mine in Utah).

The second model hypothesizes that there is a ring fault arcing around a significant portion of the caldera. As the magma chamber deflated, the large block of rock above it dropped but periodically got stuck on portions of the ring fault. As the block became unstuck, it caused rapid slip on the curved fault, producing the unusual earthquakes.

"Because we had access to these satellite images as well as GPS data, we have been able to produce two potential interpretations for the collapse of a caldera—a rare event that occurs maybe once every 50 to 100 years," says Simons. "To be able to see this documented as it's happening is truly phenomenal."

Additional authors on the paper, "The collapse of Bárðarbunga caldera, Iceland," are Hiroo Kanamori, John E. and Hazel S. Smits Professor of Geophysics, Emeritus, at Caltech; Pietro Milillo of the University of Basilicata in Potenza, Italy; Paul Lundgren of JPL; and Sergey Samsonov of the Canada Centre for Mapping and Earth Observation. The work was supported by a NASA Earth and Space Science Fellowship and by the Caltech/JPL President's and Director's Fund.

Writer: 
Kimm Fesenmaier

Caltech Astronomers Observe a Supernova Colliding with Its Companion Star

Type Ia supernovae, among the most dazzling phenomena in the universe, are produced when small, dense stars called white dwarfs explode with ferocious intensity. At their peak, these supernovae can outshine an entire galaxy. Although thousands of supernovae of this kind have been found in recent decades, the process by which a white dwarf becomes one has been unclear.

That began to change on May 3, 2014, when a team of Caltech astronomers working on a robotic observing system known as the intermediate Palomar Transient Factory (iPTF)—a multi-institute collaboration led by Shrinivas Kulkarni, the John D. and Catherine T. MacArthur Professor of Astronomy and Planetary Science and director of the Caltech Optical Observatories—discovered a Type Ia supernova, designated iPTF14atg, in nearby galaxy IC831, located 300 million light-years away.

The data that were immediately collected by the iPTF team lend support to one of two competing theories about the origin of white dwarf supernovae, and also suggest the possibility that there are actually two distinct populations of this type of supernova.

The details are outlined in a paper, with Caltech graduate student Yi Cao as lead author, appearing May 21 in the journal Nature.

Type Ia supernovae are known as "standardizable candles" because they allow astronomers to gauge cosmic distances by how dim they appear relative to how bright they actually are. It is like knowing that, from one mile away, a light bulb looks 100 times dimmer than another located only one-tenth of a mile away. This consistency is what made these stellar objects instrumental in measuring the accelerating expansion of the universe in the 1990s, earning three scientists the Nobel Prize in Physics in 2011.
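The light-bulb comparison is just the inverse-square law, which a two-line sketch makes explicit:

```python
# Apparent brightness falls off as 1/distance^2, so a source ten times
# farther away appears a hundred times dimmer.
def dimming_factor(d_near, d_far):
    return (d_far / d_near) ** 2

print(dimming_factor(0.1, 1.0))  # 100.0 (one-tenth of a mile vs. one mile)
```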

There are two competing origin theories, both starting with the same general scenario: the white dwarf that eventually explodes is one of a pair of stars orbiting around a common center of mass. The interaction between these two stars, the theories say, is responsible for triggering supernova development. What is the nature of that interaction? At this point, the theories diverge.

According to one theory, the so-called double-degenerate model, the companion to the exploding white dwarf is also a white dwarf, and the supernova explosion initiates when the two similar objects merge.

However, in the second theory, called the single-degenerate model, the second star is instead a sunlike star—or even a red giant, a much larger type of star. In this model, the white dwarf's powerful gravity pulls, or accretes, material from the second star. This process, in turn, increases the temperature and pressure in the center of the white dwarf until a runaway nuclear reaction begins, ending in a dramatic explosion.

The difficulty in determining which model is correct stems from two facts: supernova events are very rare, occurring about once every few centuries in our galaxy, and the stars involved are very dim before the explosions.

That is where the iPTF comes in. From atop Palomar Mountain in Southern California, where it is mounted on the 48-inch Samuel Oschin Telescope, the project's fully automated camera optically surveys roughly 1000 square degrees of sky per night (approximately 1/20th of the visible sky above the horizon), looking for transients—objects, including Type Ia supernovae, whose brightness changes over timescales that range from hours to days.
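The parenthetical fraction is easy to verify (a quick arithmetic sketch using only the figures quoted above):

```python
import math

# A hemisphere of sky subtends 2*pi steradians; converting steradians
# to square degrees gives the area visible above the horizon.
hemisphere_sq_deg = 2 * math.pi * (180 / math.pi) ** 2  # ~20,626 deg^2
nightly_sq_deg = 1000

print(round(hemisphere_sq_deg))                        # 20626
print(round(hemisphere_sq_deg / nightly_sq_deg, 1))    # 20.6
```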

On May 3, the iPTF took images of IC831 and transmitted the data for analysis to computers at the National Energy Research Scientific Computing Center, where a machine-learning algorithm analyzed the images and prioritized real celestial objects over digital artifacts. Because this first-pass analysis occurred when it was nighttime in the United States but daytime in Europe, the iPTF's European and Israeli collaborators were the first to sift through the prioritized objects, looking for intriguing signals. After they spotted the possible supernova—a signal that had not been visible in the images taken just the night before—the European and Israeli team alerted their U.S. counterparts, including Caltech graduate student and iPTF team member Yi Cao.

Cao and his colleagues then mobilized both ground- and space-based telescopes, including NASA's Swift satellite, which observes ultraviolet (UV) light, to take a closer look at the young supernova.

"My colleagues and I spent many sleepless nights on designing our system to search for luminous ultraviolet emission from baby Type Ia supernovae," says Cao. "As you can imagine, I was fired up when I first saw a bright spot at the location of this supernova in the ultraviolet image. I knew this was likely what we had been hoping for."

UV radiation has higher energy than visible light, so it is particularly suited to observing very hot objects like supernovae (although such observations are possible only from space, because Earth's atmosphere and ozone layer absorb almost all of this incoming UV). Swift measured a pulse of UV radiation that declined initially but then rose as the supernova brightened. Because such a pulse is short-lived, it can be missed by surveys that scan the sky less frequently than the iPTF does.

This observed ultraviolet pulse is consistent with a formation scenario in which the material ejected from a supernova explosion slams into a companion star, generating a shock wave that ignites the surrounding material. In other words, the data are in agreement with the single-degenerate model.

Back in 2010, Daniel Kasen, an associate professor of astronomy and physics at UC Berkeley and Lawrence Berkeley National Laboratory, used theoretical calculations and supercomputer simulations to predict just such a pulse from supernova-companion collisions. "After I made that prediction, a lot of people tried to look for that signature," Kasen says. "This is the first time that anyone has seen it. It opens up an entirely new way to study the origins of exploding stars."

According to Kulkarni, the discovery "provides direct evidence for the existence of a companion star in a Type Ia supernova, and demonstrates that at least some Type Ia supernovae originate from the single-degenerate channel."

Although the data from supernova iPTF14atg support a single-degenerate origin for that event, other Type Ia supernovae may result from double-degenerate systems. In fact, observations in 2011 of SN2011fe, another Type Ia supernova discovered in the nearby galaxy Messier 101 by PTF (the precursor to the iPTF), appeared to rule out the single-degenerate model for that particular supernova. And that means both theories may actually be valid, says Caltech professor of theoretical astrophysics Sterl Phinney, who was not involved in the research. "The news is that it seems that both sets of theoretical models are right, and there are two very different kinds of Type Ia supernovae."

"Both rapid discovery of supernovae in their infancy by iPTF, and rapid follow-up by the Swift satellite, were essential to unveil the companion to this exploding white dwarf. Now we have to do this again and again to determine the fractions of Type Ia supernovae akin to different origin theories," says iPTF team member Mansi Kasliwal, who will join the Caltech astronomy faculty as an assistant professor in September 2015.

The iPTF project is a scientific collaboration between Caltech; Los Alamos National Laboratory; the University of Wisconsin–Milwaukee; the Oskar Klein Centre in Sweden; the Weizmann Institute of Science in Israel; the TANGO Program of the University System of Taiwan; and the Kavli Institute for the Physics and Mathematics of the Universe in Japan. The Caltech team is funded in part by the National Science Foundation.

Caltech Astronomers See Supernova Collide with Companion Star

Dedication of Advanced LIGO

The Advanced LIGO Project, a major upgrade that will increase the sensitivity of the Laser Interferometer Gravitational-Wave Observatory (LIGO) instruments by a factor of 10 and provide a 1,000-fold increase in the number of astrophysical candidates for gravitational-wave signals, was officially dedicated today in a ceremony held at the LIGO Hanford facility in Richland, Washington.

LIGO was designed and is operated by Caltech and MIT, with funding from the National Science Foundation (NSF). Advanced LIGO, also funded by the NSF, will begin its first searches for gravitational waves in the fall of this year.

The dedication ceremony featured remarks from Caltech president Thomas F. Rosenbaum, the Sonja and William Davidow Presidential Chair and professor of physics; Professor of Physics Tom Soifer (BS '68), the Kent and Joyce Kresa Leadership Chair of Caltech's Division of Physics, Mathematics and Astronomy; and NSF director France Córdova (PhD '79).

"We've spent the past seven years putting together the most sensitive gravitational-wave detector ever built. Commissioning the detectors has gone extremely well thus far, and we are looking forward to our first science run with Advanced LIGO beginning later in 2015.  This is a very exciting time for the field," says Caltech's David H. Reitze, executive director of the LIGO Project.

"Advanced LIGO represents a critically important step forward in our continuing effort to understand the extraordinary mysteries of our universe," says Córdova. "It gives scientists a highly sophisticated instrument for detecting gravitational waves, which we believe carry with them information about their dynamic origins and about the nature of gravity that cannot be obtained by conventional astronomical tools."

"This is a particularly thrilling event, marking the opening of a new window on the universe, one that will allow us to see the final cataclysmic moments in the lives of stars that would otherwise be invisible to us," says Soifer.

Predicted by Albert Einstein in 1916 as a consequence of his general theory of relativity, gravitational waves are ripples in the fabric of space and time produced by violent events in the distant universe—for example, by the collision of two black holes or by the cores of supernova explosions. Gravitational waves are emitted by accelerating masses much in the same way as radio waves are produced by accelerating charges, such as electrons in antennas. As they travel to Earth, these ripples in the space-time fabric bring with them information about their violent origins and about the nature of gravity that cannot be obtained by other astronomical tools.

Although they have not yet been detected directly, the influence of gravitational waves on a binary pulsar system (two neutron stars orbiting each other) has been measured accurately and is in excellent agreement with the predictions. Scientists therefore have great confidence that gravitational waves exist. But a direct detection will confirm Einstein's vision of the waves and allow a fascinating new window into cataclysms in the cosmos.

LIGO was originally proposed as a means of detecting these gravitational waves. Each of the 4-km-long L-shaped LIGO interferometers (one at LIGO Hanford and one at the LIGO observatory in Livingston, Louisiana) uses a laser split into two beams that travel back and forth down the long arms, which are beam tubes from which the air has been evacuated. The beams are used to monitor the distance between precisely configured mirrors. According to Einstein's theory, the relative distance between the mirrors will change very slightly when a gravitational wave passes by.

The original configuration of LIGO was sensitive enough to detect a change in the lengths of the 4-km arms by a distance one-thousandth the size of a proton; this is like accurately measuring the distance from Earth to the nearest star—about 4 light-years—to within the width of a human hair. Advanced LIGO, which will utilize the infrastructure of LIGO, will be 10 times more sensitive.
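The figures in this comparison can be sanity-checked with simple arithmetic; the proton size, nearest-star distance, and hair width used below are approximate reference values, not numbers from the announcement.

```python
# Back-of-the-envelope check of the sensitivity figures quoted above.
PROTON_SIZE_M = 1.7e-15          # approximate proton diameter
ARM_LENGTH_M = 4_000.0           # LIGO arm length, 4 km
delta_L = PROTON_SIZE_M / 1000   # one-thousandth the size of a proton
strain = delta_L / ARM_LENGTH_M  # fractional length change (dimensionless)
print(f"strain sensitivity ~ {strain:.1e}")   # ~4e-22

# The analogy: measuring ~4.2 light-years to within a human hair's width.
LIGHT_YEAR_M = 9.46e15
HAIR_WIDTH_M = 1e-4
analogy = HAIR_WIDTH_M / (4.2 * LIGHT_YEAR_M)
print(f"hair width / 4.2 light-years ~ {analogy:.1e}")  # comparable scale
```

Both ratios come out within about an order of magnitude of each other, which is all the analogy is meant to convey.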

Included in the upgrade were changes in the lasers (180-watt highly stabilized systems), optics (40-kg fused-silica "test mass" mirrors suspended by fused-silica fibers), seismic isolation systems (using inertial sensing and feedback), and in how the microscopic motion (less than one billionth of one billionth of a meter) of the test masses is detected.

The change of more than a factor of 10 in sensitivity also comes with a significant increase in the sensitive frequency range. This will allow Advanced LIGO to look at the last minutes of the life of pairs of massive black holes as they spiral closer, coalesce into one larger black hole, and then vibrate much like two soap bubbles becoming one. It will also allow the instrument to pinpoint periodic signals from the many known pulsars that radiate in the range from 500 to 1,000 Hertz (frequencies that correspond to high notes on an organ).

Advanced LIGO will also be used to search for the gravitational cosmic background—allowing tests of theories about the development of the universe only 10⁻³⁵ seconds after the Big Bang.

LIGO research is carried out by the LIGO Scientific Collaboration (LSC), a group of some 950 scientists at universities around the United States and in 15 other countries. The LSC network includes the LIGO interferometers and the GEO600 interferometer, located near Hannover, Germany. The LSC works jointly with the Virgo Collaboration—which designed and constructed the 3-km-long Virgo interferometer located in Cascina, Italy—to analyze data from the LIGO, GEO, and Virgo interferometers.

Several international partners provided significant contributions of equipment, labor, and expertise, including the Max Planck Institute for Gravitational Physics, the Albert Einstein Institute, the Laser Zentrum Hannover, and the Leibniz Universität Hannover in Germany; an Australian consortium of universities, led by the Australian National University and the University of Adelaide and supported by the Australian Research Council; partners in the United Kingdom funded by the Science and Technology Facilities Council; and the University of Florida and Columbia University.


Controlling a Robotic Arm with a Patient's Intentions

Neural prosthetic devices implanted in the brain's movement center, the motor cortex, can allow patients with amputations or paralysis to control the movement of a robotic limb—one that can be either connected to or separate from the patient's own limb. However, current neuroprosthetics produce motion that is delayed and jerky—not the smooth and seemingly automatic gestures associated with natural movement. Now, by implanting neuroprosthetics in a part of the brain that controls not the movement directly but rather our intent to move, Caltech researchers have developed a way to produce more natural and fluid motions.

In a clinical trial, the Caltech team and colleagues from Keck Medicine of USC have successfully implanted just such a device in a patient with quadriplegia, giving him the ability to perform a fluid hand-shaking gesture and even play "rock, paper, scissors" using a separate robotic arm.

The results of the trial, led by principal investigator Richard Andersen, the James G. Boswell Professor of Neuroscience, and including Caltech lab members Tyson Aflalo, Spencer Kellis, Christian Klaes, Brian Lee, Ying Shi, and Kelsie Pejsa, are published in the May 22 edition of the journal Science.

"When you move your arm, you really don't think about which muscles to activate and the details of the movement—such as lift the arm, extend the arm, grasp the cup, close the hand around the cup, and so on. Instead, you think about the goal of the movement. For example, 'I want to pick up that cup of water,'" Andersen says. "So in this trial, we were successfully able to decode these actual intents, by asking the subject to simply imagine the movement as a whole, rather than breaking it down into myriad components."

For example, the process of seeing a person and then shaking his hand begins with a visual signal (for example, recognizing someone you know) that is first processed in the lower visual areas of the cerebral cortex. The signal then moves up to a high-level cognitive area known as the posterior parietal cortex (PPC). Here, the initial intent to make a movement is formed. These intentions are then transmitted to the motor cortex, through the spinal cord, and on to the arms and legs where the movement is executed.

High spinal cord injuries can cause quadriplegia in some patients because movement signals cannot get from the brain to the arms and legs. As a solution, earlier neuroprosthetic implants used tiny electrodes to detect and record movement signals at their last stop before reaching the spinal cord: the motor cortex.

The recorded signal is then carried via wire bundles from the patient's brain to a computer, where it is translated into an instruction for a robotic limb. However, because the motor cortex normally controls many muscles, the signals tend to be detailed and specific. The Caltech group wanted to see if the simpler intent to shake the hand could be used to control the prosthetic limb, instead of asking the subject to concentrate on each component of the handshake—a more painstaking and less natural approach.

Andersen and his colleagues wanted to improve the versatility of movement that a neuroprosthetic can offer by recording signals from a different brain region—the PPC. "The PPC is earlier in the pathway, so signals there are more related to movement planning—what you actually intend to do—rather than the details of the movement execution," he says. "We hoped that the signals from the PPC would be easier for the patients to use, ultimately making the movement process more intuitive. Our future studies will investigate ways to combine the detailed motor cortex signals with more cognitive PPC signals to take advantage of each area's specializations."

In the clinical trial, designed to test the safety and effectiveness of this new approach, the Caltech team collaborated with surgeons at Keck Medicine of USC and the rehabilitation team at Rancho Los Amigos National Rehabilitation Center. The surgeons implanted a pair of small electrode arrays in two parts of the PPC of a quadriplegic patient. Each array contains 96 active electrodes that, in turn, each record the activity of a single neuron in the PPC. The arrays were connected by a cable to a system of computers that processed the signals, decoded the intent of the subject, and controlled output devices that included a computer cursor and a robotic arm developed by collaborators at Johns Hopkins University.
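In broad strokes, the decoding step works by mapping recorded population activity to the parameters of an intended movement. The sketch below uses a hypothetical linear readout with made-up firing rates and weights; the trial's actual decoding algorithms are trained on recorded data and are considerably more sophisticated.

```python
# A minimal sketch of the decoding step described above: firing rates from
# implanted electrodes are mapped to an intended movement goal. All numbers
# here are invented for illustration.

# firing rates (spikes/s) for a handful of recorded PPC units
rates = [12.0, 3.5, 20.0, 0.5]

# hypothetical linear readout: each row maps the population activity
# to one component of an intended 2-D reach goal (x, y)
readout = [
    [0.02, -0.01, 0.03, 0.00],   # x weights
    [-0.01, 0.04, 0.01, 0.02],   # y weights
]

# decoded goal = weighted sum of the population's firing rates
goal = [sum(w * r for w, r in zip(row, rates)) for row in readout]
print(f"decoded goal: x={goal[0]:.2f}, y={goal[1]:.2f}")
```

The key idea the trial exploits is that PPC activity encodes the goal of the movement, so a readout like this can target "where the hand should go" rather than a muscle-by-muscle trajectory.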

After recovering from the surgery, the patient was trained to control the computer cursor and the robotic arm with his mind. Once training was complete, the researchers saw just what they were hoping for: intuitive movement of the robotic arm.

"For me, the most exciting moment of the trial was when the participant first moved the robotic limb with his thoughts. He had been paralyzed for over 10 years, and this was the first time since his injury that he could move a limb and reach out to someone. It was a thrilling moment for all of us," Andersen says.

"It was a big surprise that the patient was able to control the limb on day one—the very first day he tried," he adds. "This attests to how intuitive the control is when using PPC activity."

The patient, Erik G. Sorto, was also thrilled with the quick results: "I was surprised at how easy it was," he says. "I remember just having this out-of-body experience, and I wanted to just run around and high-five everybody."

Over time, Sorto continued to refine his control of his robotic arm, thus providing the researchers with more information about how the PPC works. For example, "we learned that if he thought, 'I should move my hand over toward the object in a certain way'—trying to control the limb—that didn't work," Andersen says. "The thought actually needed to be more cognitive. But if he just thought, 'I want to grasp the object,' it was much easier. And that is exactly what we would expect from this area of the brain."

This better understanding of the PPC will help the researchers improve neuroprosthetic devices of the future, Andersen says. "What we have here is a unique window into the workings of a complex high-level brain area as we work collaboratively with our subject to perfect his skill in controlling external devices."

"The primary mission of the USC Neurorestoration Center is to take advantage of resources from our clinical programs to create unique opportunities to translate scientific discoveries, such as those of the Andersen Lab at Caltech, to human patients, ultimately turning transformative discoveries into effective therapies," says center director Charles Y. Liu, professor of neurological surgery, neurology, and biomedical engineering at USC, who led the surgical implant procedure and the USC/Rancho Los Amigos team in the collaboration.

"In taking care of patients with neurological injuries and diseases—and knowing the significant limitations of current treatment strategies—it is clear that completely new approaches are necessary to restore function to paralyzed patients. Direct brain control of robots and computers has the potential to dramatically change the lives of many people," Liu adds.

Dr. Mindy Aisen, the chief medical officer at Rancho Los Amigos who led the study's rehabilitation team, says that advancements in prosthetics like these hold promise for the future of patient rehabilitation. "We at Rancho are dedicated to advancing rehabilitation through new assistive technologies, such as robotics and brain-machine interfaces. We have created a unique environment that can seamlessly bring together rehabilitation, medicine, and science as exemplified in this study," she says.

Although tasks like shaking hands and playing "rock, paper, scissors" are important to demonstrate the capability of these devices, the hope is that neuroprosthetics will eventually enable patients to perform more practical tasks that will allow them to regain some of their independence.

"This study has been very meaningful to me. As much as the project needed me, I needed the project. The project has made a huge difference in my life. It gives me great pleasure to be part of the solution for improving paralyzed patients' lives," Sorto says. "I joke around with the guys that I want to be able to drink my own beer—to be able to take a drink at my own pace, when I want to take a sip out of my beer and to not have to ask somebody to give it to me. I really miss that independence. I think that if it was safe enough, I would really enjoy grooming myself—shaving, brushing my own teeth. That would be fantastic." 

To that end, Andersen and his colleagues are already working on a strategy that could enable patients to perform these finer motor skills. The key is to be able to provide particular types of sensory feedback from the robotic arm to the brain.

Although Sorto's implant allowed him to control larger movements with visual feedback, "to really do fine dexterous control, you also need feedback from touch," Andersen says. "Without it, it's like going to the dentist and having your mouth numbed. It's very hard to speak without somatosensory feedback." The newest devices under development by Andersen and his colleagues feature a mechanism to relay signals from the robotic arm back into the part of the brain that gives the perception of touch.

"The reason we are developing these devices is that normally a quadriplegic patient couldn't, say, pick up a glass of water to sip it, or feed themselves. They can't even do anything if their nose itches. Seemingly trivial things like this are very frustrating for the patients," Andersen says. "This trial is an important step toward improving their quality of life."

The results of the trial were published in a paper titled "Decoding Motor Imagery from the Posterior Parietal Cortex of a Tetraplegic Human." The implanted device and signal processors used in the Caltech-led clinical trial were the NeuroPort Array and NeuroPort Bio-potential Signal Processors developed by Blackrock Microsystems in Salt Lake City, Utah. The robotic arm used in the trial was the Modular Prosthetic Limb, developed at the Applied Physics Laboratory at Johns Hopkins. Sorto was recruited to the trial by collaborators at Rancho Los Amigos National Rehabilitation Center and at Keck Medicine of USC. This trial was funded by the National Institutes of Health, the Boswell Foundation, the Department of Defense, and the USC Neurorestoration Center.


Do Fruit Flies Have Emotions?

A fruit fly starts buzzing around food at a picnic, so you wave your hand over the insect and shoo it away. But when the insect flees the scene, is it doing so because it is actually afraid? Using fruit flies to study the basic components of emotion, a new Caltech study reports that a fly's response to a shadowy overhead stimulus might be analogous to a negative emotional state such as fear—a finding that could one day help us understand the neural circuitry involved in human emotion.

The study, which was done in the laboratory of David Anderson, Seymour Benzer Professor of Biology and an investigator with the Howard Hughes Medical Institute, was published online May 14 in the journal Current Biology.

Insects are an important model for the study of emotion; although mice are closer to humans on the evolutionary family tree, the fruit fly has a much simpler neurological system that is easier to study. However, studying emotions in insects or any other animal can be tricky. Because researchers know human emotions from their own experience, they might be tempted to anthropomorphize those of an insect—just as you might assume that the shooed-away fly left your plate because it was afraid of your hand. But there are several problems with that assumption, says postdoctoral scholar William T. Gibson, first author of the paper.

"There are two difficulties with taking your own experiences and then saying that maybe these are happening in a fly. First, a fly's brain is very different from yours, and second, a fly's evolutionary history is so different from yours that even if you could prove beyond any doubt that flies have emotions, those emotions probably wouldn't be the same ones that you have," he says. "For these reasons, in our study, we wanted to take an objective approach."

Anderson and Gibson and their colleagues did this by deconstructing the idea of an emotion into basic building blocks—so-called emotion primitives, a concept previously developed by Anderson and Ralph Adolphs, Bren Professor of Psychology and Neuroscience and professor of biology.

"There has been ongoing debate for decades about what 'emotion' means, and there is no generally accepted definition. In an article that Ralph Adolphs and I recently wrote, we put forth the view that emotions are a type of internal brain state with certain general properties that can exist independently of subjective, conscious feelings, which can only be studied in humans," Anderson says. "That means we can study such brain states in animal models like flies or mice without worrying about whether they have 'feelings' or not. We use the behaviors that express those states as a readout."

Gibson explains by analogy that emotions can be broken down into these emotion primitives much as a secondary color, such as orange, can be separated into two primary colors, yellow and red. "And if we can show that fruit flies display all of these separate but necessary primitives, we then may be able to make the argument that they also have an emotion, like fear."

The emotion primitives analyzed in the fly study can be understood in the context of a stimulus associated with human fear: the sound of a gunshot. If you hear a gun fire, the sound may trigger a negative feeling. This feeling, a primitive called valence, will probably cause you to behave differently for several minutes afterward. This is a primitive called persistence. Repeated exposure to the stimulus should also produce a greater emotional response—a primitive called scalability; for example, the sound of 10 gunshots would make you more afraid than the sound of one shot.

Gibson says that another primitive of fear is that it is generalized to different contexts, meaning that if you were eating lunch or were otherwise occupied when the gun fired, the fear would take over, distracting you from your lunch. Trans-situationality is another primitive that could cause you to produce the same fearful reaction in response to an unrelated stimulus—such as the sound of a car backfiring.

The researchers chose to study these five primitives by observing the insects in the presence of a fear-inducing stimulus. Because defensive behavioral responses to overhead visual threats are common in many animals, the researchers created an apparatus that would pass a dark paddle over the flies' habitat. The flies' movements were then tracked using a software program created in collaboration with Pietro Perona, the Allen E. Puckett Professor of Electrical Engineering.

The researchers analyzed the flies' responses to the stimulus and found that the insects displayed all of these emotion primitives. For example, responses were scalable: when the paddle passed overhead, the flies would either freeze, or jump away from the stimulus, or enter a state of elevated arousal, and each response increased with the number of times the stimulus was delivered. And when hungry flies were gathered around food, the stimulus would cause them to leave the food for several seconds and run around the arena until their state of elevated arousal decayed and they returned to the food—exhibiting the primitives of context generalization and persistence.

"These experiments provide objective evidence that visual stimuli designed to mimic an overhead predator can induce a persistent and scalable internal state of defensive arousal in flies, which can influence their subsequent behavior for minutes after the threat has passed," Anderson says. "For us, that's a big step beyond just casually intuiting that a fly fleeing a visual threat must be 'afraid,' based on our anthropomorphic assumptions. It suggests that the flies' response to the threat is richer and more complicated than a robotic-like avoidance reflex."

In the future, the researchers say that they plan to combine the new technique with genetically based techniques and imaging of brain activity to identify the neural circuitry that underlies these defensive behaviors. Their end goal is to identify specific populations of neurons in the fruit fly brain that are necessary for emotion primitives—and whether these functions are conserved in higher organisms, such as mice or even humans.

Although the presence of these primitives suggests that the flies might be reacting to the stimulus based on some kind of emotion, the researchers are quick to point out that this new information does not prove—nor did it set out to establish—that flies can experience fear, or happiness, or anger, or any other feelings.

"Our work can get at questions about mechanism and questions about the functional properties of emotion states, but we cannot get at the question of whether or not flies have feelings," Gibson says.

The study, titled "Behavioral Responses to a Repetitive Stimulus Express a Persistent State of Defensive Arousal in Drosophila," was published in the journal Current Biology. In addition to Gibson, Anderson, and Perona, Caltech coauthors include graduate student Carlos Gonzalez, undergraduate Rebecca Du, former research assistants Conchi Fernandez and Panna Felsen (BS '09, MS '10), and former postdoctoral scholar Michael Maire. Coauthors Lakshminarayanan Ramasamy and Tanya Tabachnik are from the Janelia Research Campus of the Howard Hughes Medical Institute (HHMI). The work was funded by the National Institutes of Health, HHMI, and the Gordon and Betty Moore Foundation.


Powerful New Radio Telescope Array Searches the Entire Sky 24/7

A new radio telescope array developed by a consortium led by Caltech and now operating at the Owens Valley Radio Observatory has the ability to image simultaneously the entire sky at radio wavelengths with unmatched speed, helping astronomers to search for objects and phenomena that pulse, flicker, flare, or explode.

The new tool, the Owens Valley Long Wavelength Array (OV-LWA), is already producing unprecedented videos of the radio sky. Astronomers hope that it will help them piece together a more complete picture of the early universe and learn about extrasolar space weather—the interaction between nearby stars and their orbiting planets.

The consortium includes astronomers from Caltech, JPL, Harvard University, the University of New Mexico, Virginia Tech, and the Naval Research Laboratory.

"Our new telescope lets us see the entire sky all at once, and we can image everything instantaneously," says Gregg Hallinan, an assistant professor of astronomy at Caltech and OV-LWA's principal investigator.

Combining the observing power of more than 250 antennas spread out over a desert area equivalent to about 450 football fields, the OV-LWA is uniquely sensitive to faint variable radio signals such as those produced by pulsars, solar flares, and auroras on distant planets. A single radio antenna would have to be a hundred meters wide to achieve the same sensitivity (the giant radio telescope at Arecibo Observatory in Puerto Rico is 305 meters in diameter). However, a telescope's field of view is governed by the size of its dish, and such an enormous instrument would still see only a tiny fraction of the entire sky.
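The trade-off can be made concrete with the diffraction limit, which sets a dish's beam width at roughly the observing wavelength divided by the dish diameter. The frequency below is chosen for illustration of the long-wavelength regime and is not an OV-LWA specification.

```python
import math

# Rough illustration of the dish-size trade-off described above: sensitivity
# grows with collecting area, but field of view shrinks as the dish grows.
freq_hz = 50e6                      # an illustrative long-wavelength frequency
wavelength_m = 3e8 / freq_hz        # c / f = 6 m

def beam_width_deg(dish_diameter_m):
    """Approximate field of view from the diffraction limit, theta ~ lambda/D."""
    return math.degrees(wavelength_m / dish_diameter_m)

# A 100 m dish would match the array's sensitivity but see only a narrow beam,
# while each small dipole-like element sees most of the sky at once.
print(f"100 m dish beam: ~{beam_width_deg(100):.1f} degrees across")
```

An array of many small elements keeps the huge field of view of each element while the combined signals recover the sensitivity of a large collecting area, which is the "best of both worlds" Hallinan describes.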

"Our technique delivers the best of both worlds, offering good sensitivity and an enormous field of view," says Hallinan.

Operating at full speed, the new array produces 25 terabytes of data every day, making it one of the most data-intensive telescopes in the world. For comparison, it would take more than 5,000 DVDs to store just one day's worth of the array's data. A supercomputer developed by a group led by Lincoln Greenhill of Harvard University for the NSF-funded Large-Aperture Experiment to Detect the Dark Ages (LEDA) processes these data, using graphics processing units similar to those used in modern computer games to combine signals from all of the antennas in real time. These combined signals are then sent to a second computer cluster, the All-Sky Transient Monitor (ASTM) at Caltech and JPL, which produces all-sky images in real time.
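The DVD comparison is straightforward to verify; the disc capacity below assumes a standard single-layer DVD, a figure not stated in the text.

```python
# Checking the data-volume comparison above: 25 TB/day from the text,
# divided by an assumed ~4.7 GB single-layer DVD capacity.
TB = 1e12
GB = 1e9

daily_bytes = 25 * TB
dvd_capacity = 4.7 * GB
dvds_per_day = daily_bytes / dvd_capacity
print(f"~{dvds_per_day:,.0f} DVDs per day")  # ~5,300, i.e. "more than 5,000"
```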

Hallinan says that the OV-LWA holds great promise for cosmological studies and may allow astronomers to watch the early universe as it evolved over time. Scientists might then be able to learn how and when the universe's first stars, galaxies, and black holes formed. But the formative period during which these events occurred is shrouded in a fog of hydrogen that is opaque to most radiation. Even the most powerful optical and infrared telescopes cannot peer through that fog. By observing the sky at radio frequencies, however, astronomers may be able to detect weak radio signals from the time of the births of those first stars and galaxies.

"The biggest challenge is that this weak radiation from the early universe is obscured by the radio emission from our own galaxy, which is about a million times brighter than the signal itself, so you have to have very carefully measured data to see it," says Hallinan. "That's one of the primary goals of our collaboration—to try to get the first statistical measure of that weak signal from our cosmic dawn."

If they are able to detect that signal, the researchers could be able to learn about the formation of the first stars and galaxies, their evolution, and how they eventually ionized the surrounding intergalactic medium, to give us the universe we observe today. "This new field offers the opportunity to see the universe evolve, in a cosmological movie of sorts," Hallinan says.

But Hallinan is most excited about using the array to study space weather in nearby stellar systems similar to our own. Our own sun occasionally releases bursts of magnetic energy from its atmosphere, shooting X-rays and other forms of radiation outward in large flares. Sometimes these flares are accompanied by shock waves called coronal mass ejections, which send particles and magnetic fields toward Earth and the other planets. Light displays, or auroras, are produced when those particles interact with atoms in a planet's atmosphere. These space weather events also occur on other stars, and Hallinan hopes to use the OV-LWA to study them.

"We want to detect coronal mass ejections on other stars with our array and then use other telescopes to image them," he says. "We're trying to learn about this kind of event on stars other than the sun and show that there are auroras caused by these events on planets outside our solar system."

The majority of stars in our local corner of the Milky Way are so-called M dwarfs, stars that are much smaller than our own sun and yet potentially more magnetically active. Thus far, surveys of exoplanets suggest that most M dwarfs harbor small rocky planets. "That means it is very likely that the nearest habitable planet is orbiting an M dwarf," Hallinan says. "However, the possibility of a higher degree of activity, with extreme flaring and intense coronal mass ejections, may have an impact on the atmosphere of such a planet and affect habitability."

A coronal mass ejection from an M dwarf would shower charged particles on the atmosphere and magnetic field of an orbiting planet, potentially leading to auroras and periodic radio bursts. Astronomers could determine the strength of the planet's magnetic field by measuring the intensity and duration of such an event. And since magnetic fields may protect planets from the activity of their host stars, many such measurements would shed light on the potential habitability of these planets.

For decades, astronomers have been trying to detect radio bursts associated with extrasolar space weather. This is challenging for two reasons. First, the radio emission pulses as the planet rotates, flashing like a lighthouse beacon, so astronomers have to be looking at just the right time to catch the flash. Second, the emission brightens significantly only when the velocity of a star's stellar wind increases during a coronal mass ejection, so a burst must also be caught during one of these brief events.

"You need to be observing at that exact moment when the beacon is pointed in our direction and the star's stellar wind has picked up. You might need to monitor that planet for a decade to get that one event where it is really bright," Hallinan says. "So you need to be able to not just observe at random intervals but to monitor all these planets continuously. Our new array allows us to do that."

The OV-LWA was initiated through the support of Deborah Castleman (MS '86) and Harold Rosen (MS '48, PhD '51).

Writer: Kimm Fesenmaier
Frontpage Title: Imaging the Entire Radio Sky 24/7
Short Title: Powerful New Radio Telescope Array
News Type: Research News

New Thin, Flat Lenses Focus Light as Sharply as Curved Lenses

Lenses appear in all sorts of everyday objects, from prescription eyeglasses to cell-phone cameras. Typically, lenses rely on a curved shape to bend and focus light. But in the tight spaces inside consumer electronics and fiber-optic systems, these rounded lenses can take up a lot of room. Over the last few years, scientists have started crafting tiny flat lenses that are ideal for such close quarters. To date, however, thin microlenses have failed to transmit and focus light as efficiently as their bigger, curved counterparts.

Caltech engineers have created flat microlenses with performance on a par with conventional, curved lenses. These lenses can be manufactured using industry-standard techniques for making computer chips, setting the stage for their incorporation into electronics such as cameras and microscopes, as well as in novel devices.

"The lenses we use today are bulky," says Amir Arbabi, a senior researcher in the Division of Engineering and Applied Science, and lead author of the paper. "The structure we have chosen for these flat lenses can open up new areas of application that were not available before."

The research, led by Andrei Faraon (BS '04), assistant professor of applied physics and materials science, appears in the May 7 issue of Nature Communications.

The new lens type is known as a high-contrast transmitarray. Made of silicon, the lens is just a millionth of a meter thick, or about a hundredth of the diameter of a human hair, and it is studded with silicon "posts" of varying sizes. When imaged under a scanning electron microscope, the lens resembles a forest cleared for timber, with only stumps (the posts) remaining. Depending on their heights and thicknesses, the posts focus different colors, or wavelengths, of light.

A lens focuses light or forms an image by delaying for varying amounts of time the passage of light through different parts of the lens. In curved glass lenses, light takes longer to travel through the thicker parts of the lens than through the thinner parts. On the flat lens, these delays are achieved by the silicon posts, which trap and delay the light for an amount of time that depends on the diameter of the posts. With careful placement of these differently sized posts on the lens, the researchers can guide incident light as it passes through the lens to form a curved wavefront, resulting in a tightly focused spot.
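The delay-based picture above can be made concrete with a short sketch. Assuming an idealized hyperbolic phase profile and purely hypothetical numbers (1550-nanometer infrared light, a 100-micrometer focal length; neither figure is taken from the paper), the phase delay required at each radius so that light from every part of the lens arrives in step at the focus is:

```python
import math

# Sketch of the target phase profile for a flat lens. The hyperbolic
# profile is the standard idealization; the wavelength and focal
# length below are hypothetical, not values from the paper.
wavelength = 1.55e-6   # meters (infrared, assumed)
focal_length = 100e-6  # meters (assumed)

def target_phase(r):
    """Phase delay (radians, mod 2*pi) needed at radius r so that
    light from every point on the lens arrives in step at the focus."""
    # Extra geometric path from radius r to the focal point,
    # relative to the path from the lens center.
    path_difference = math.sqrt(r**2 + focal_length**2) - focal_length
    # The lens must compensate this with an equal and opposite delay.
    return (-2 * math.pi / wavelength * path_difference) % (2 * math.pi)

# The required delay grows toward the lens edge (wrapping at 2*pi):
for r_um in (0, 10, 20, 30):
    print(f"r = {r_um:2d} um -> phase = {target_phase(r_um * 1e-6):.2f} rad")
```

In the actual device, the diameter of the silicon post placed at each position would be chosen to impart approximately this phase.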

The Caltech researchers found that their flat lenses focus as much as 82 percent of infrared light passing through them. By comparison, previous studies have found that metallic flat lenses have efficiencies of only a few percent, in part because their materials absorb some of the incident light.

Although curved glass lenses can focus nearly 100 percent of the light that reaches them, they usually require sophisticated designs with nonspherical surfaces that can be difficult to polish. On the other hand, the design of the flat lenses can be modified depending upon the exact application for which the lenses are needed, simply by changing the pattern of the silicon nanoposts. This flexibility makes them attractive for commercial and industrial use, the researchers say. "You get exceptional freedom to design lenses for different functionalities," says Arbabi.

A limitation of flat lenses is that each lens can only focus a narrow set of wavelengths, representing individual colors of the spectrum. These monochromatic lenses could find application in devices such as a night-vision camera, which sees in infrared over a narrow wavelength range. More broadly, they could be used in any optical device involving lasers, as lasers emit only a single color of light.

Multiple monochromatic lenses could be used to deliver multicolor images, much as television and computer displays employ combinations of the colors red, green, and blue to produce a rainbow of hues. Because the microlenses are so small, integrating them in optical systems would take up little space compared to the curved lenses now utilized in cameras or microscopes.

Although the lenses currently are expensive to manufacture, it should be possible to produce thousands at once using photolithography or nanoimprint lithography techniques, the researchers say. In nanoimprint lithography, for example, a stamp presses into a polymer, leaving behind a desired pattern that is then transferred into the silicon through dry etching in a plasma.

"For consumer applications, the current price point of flat lenses is not good, but the performance is," says Faraon. "Depending on how many lenses you are making, the price can drop rapidly."

The paper is entitled "Subwavelength-thick lenses with high numerical apertures and large efficiency based on high-contrast transmitarrays." In addition to Arbabi and Faraon, other Caltech coauthors include graduate student Yu Horie, senior Alexander Ball, and Mahmood Bagheri, a microdevices engineer at JPL. The work was supported by the Caltech/JPL President's and Director's Fund and the Defense Advanced Research Projects Agency. Alexander Ball was supported by a Summer Undergraduate Research Fellowship at Caltech. The device nanofabrication was performed in the Kavli Nanoscience Institute at Caltech.

Frontpage Title: New Thin, Flat Lenses
News Type: Research News

Lopsided Star Explosion Holds the Key to Other Supernova Mysteries

New observations of a recently exploded star are confirming supercomputer model predictions made at Caltech that the deaths of stellar giants are lopsided affairs in which debris and the stars' cores hurtle off in opposite directions.

While observing the remnant of supernova (SN) 1987A, NASA's Nuclear Spectroscopic Telescope Array, or NuSTAR, recently detected the unique energy signature of titanium-44, a radioactive version of titanium that is produced during the early stages of a particular type of star explosion, called a Type II, or core-collapse supernova.

"Titanium-44 is unstable. When it decays and turns into calcium, it emits gamma rays at a specific energy, which NuSTAR can detect," says Fiona Harrison, the Benjamin M. Rosen Professor of Physics at Caltech, and NuSTAR's principal investigator.

By analyzing direction-dependent frequency changes—or Doppler shifts—of energy from titanium-44, Harrison and her team discovered that most of the material is moving away from NuSTAR. The finding, detailed in the May 8 issue of the journal Science, is the best proof yet that the mechanism that triggers Type II supernovae is inherently lopsided.
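The velocity measurement works much like Doppler shifts for visible light: a line-of-sight velocity follows from the fractional shift of a line's energy, v = c ΔE/E (non-relativistic). In the sketch below, the 67.87 keV rest energy is a known titanium-44 decay line, but the 0.5 keV shift is purely illustrative, not the value measured for SN 1987A:

```python
# Turning a gamma-ray line's Doppler shift into a line-of-sight
# velocity via the non-relativistic formula v = c * dE / E.
# 67.87 keV is a real titanium-44 decay line; the 0.5 keV shift
# below is illustrative only, not the measured SN 1987A value.
C_KM_S = 299_792.458  # speed of light, km/s

def doppler_velocity(rest_energy_kev, observed_energy_kev):
    """Positive result = redshift = material receding from the observer."""
    return C_KM_S * (rest_energy_kev - observed_energy_kev) / rest_energy_kev

# A line observed 0.5 keV below its rest energy implies recession
# at roughly 2,200 km/s.
print(f"{doppler_velocity(67.87, 67.37):.0f} km/s")
```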

NuSTAR recently created detailed titanium-44 maps of another supernova remnant, called Cassiopeia A, and there too it found signs of an asymmetrical explosion, although the evidence in this case is not as definitive as with 1987A.

Supernova 1987A was first detected in 1987, when light from the explosion of a blue supergiant star located 168,000 light-years away reached Earth. SN 1987A was an important event for astronomers. Not only was it the closest supernova to be detected in hundreds of years, it marked the first time that neutrinos had been detected from an astronomical source other than our sun.

These nearly massless subatomic particles had been predicted to be produced in large quantities during Type II explosions, so their detection during 1987A supported some of the fundamental theories about the inner workings of supernovae.

With the latest NuSTAR observations, 1987A is once again proving to be a useful natural laboratory for studying the mysteries of stellar death. For many years, supercomputer simulations performed at Caltech and elsewhere have predicted that the core of a star about to explode as a Type II supernova changes shape just before exploding, transforming from a perfectly symmetric sphere into a wobbly mass made up of turbulent plumes of extremely hot gas. In fact, models that assumed a perfectly spherical core simply fizzled out.

"If you make everything just spherical, the core doesn't explode. It turns out you need asymmetries to make the star explode," Harrison says.

According to the simulations, the shape change is driven by turbulence generated by neutrinos that are absorbed within the core. "This turbulence helps push out a powerful shock wave and launch the explosion," says Christian Ott, a professor of theoretical physics at Caltech who was not involved in the NuSTAR observations.

Ott's team uses supercomputers to run three-dimensional simulations of core-collapse supernovae. Each simulation generates hundreds of terabytes of results—for comparison, the entire print collection of the U.S. Library of Congress is equal to about 10 terabytes—but represents only a few tenths of a second during a supernova explosion.

A better understanding of the asymmetrical nature of Type II supernovae, Ott says, could help solve one of the biggest mysteries surrounding stellar deaths: why some supernovae leave behind neutron stars while others collapse into black holes, forming space-time singularities. It could be that the high degree of asymmetry in some supernovae produces a dual effect: the star explodes in one direction, while the remainder of the star continues to collapse in all other directions.

"In this way, an explosion could happen, but eventually leave behind a black hole and not a neutron star," Ott says.

The NuSTAR findings also increase the chances that Advanced LIGO—the upgraded version of the Laser Interferometer Gravitational-wave Observatory, which will begin to take data later this year—will be successful in detecting gravitational waves from supernovae. Gravitational waves are ripples that propagate through the fabric of space-time. According to theory, Type II supernovae should emit gravitational waves, but only if the explosions are asymmetrical.

Harrison and Ott plan to combine the observational and theoretical studies of supernovae that until now have been occurring along parallel tracks at Caltech, using the NuSTAR observations to refine supercomputer simulations of supernova explosions.

"The two of us are going to work together to try to get the models to more accurately predict what we're seeing in 1987A and Cassiopeia A," Harrison says.

Additional Caltech coauthors of the paper, entitled "44Ti gamma-ray emission lines from SN1987A reveal an asymmetric explosion," are Hiromasa Miyasaka, Brian Grefenstette, Kristin Madsen, Peter Mao, and Vikram Rana. The research was supported by funding from NASA, the French National Center for Space Studies (CNES), the Japan Society for the Promotion of Science, and the Technical University of Denmark.

This article also references the paper "Magnetorotational Core-collapse Supernovae in Three Dimensions," which appeared in the April 20, 2014, issue of Astrophysical Journal Letters.

Frontpage Title: NuSTAR Observations Hold Key to Supernova Mysteries
News Type: Research News
