Three Caltech Fulbrights

Caltech seniors Jonathan Liu, Charles Tschirhart, and Caroline Werlang will be engaging in research abroad as Fulbright Scholars this fall. Sponsored by the Department of State's Bureau of Educational and Cultural Affairs, the Fulbright Program was established in 1946 to honor the late Senator J. William Fulbright of Arkansas for his contributions to fostering international understanding.

Jonathan Liu is an applied physics major from Pleasanton, California, who will be doing research at Ludwig Maximilian University of Munich in Germany. He plans to work with a biophysicist studying how DNA moves in a liquid with a thermal gradient, which could shed light on the molecular origins of life. Long strands of DNA should break apart well before they have time to organize themselves into the complicated arrangements needed to be self-reproducing, but previous work in the lab Liu is joining has hinted that deep-sea hydrothermal vents may have allowed long strands to form stable clusters. On his return, Liu plans to enroll at UC Berkeley for a PhD in physics; he was awarded one of UC Berkeley's Graduate Student Instructorships to support his work.

Charles Tschirhart of Naperville, Illinois, is a double major in applied physics and chemistry. He will be studying condensed matter physics at the University of Nottingham, England, where he plans to develop new ways to "photograph" nanometer-sized (billionth-of-a-meter-sized) objects using atomic force microscopy. He will then proceed to UC Santa Barbara to earn a PhD in experimental condensed matter physics. Charles has won both a Hertz Fellowship and a National Science Foundation Graduate Research Fellowship; both will support his PhD work at UC Santa Barbara.

Caroline Werlang, a chemical engineering student from Houston, Texas, will go to the Institute of Bioengineering at the École Polytechnique Fédérale de Lausanne in Switzerland to work on kinases, which are proteins that act as molecular "on/off" switches. She will join a lab that is trying to determine how kinases select and bind to their targets in order to initiate or block other biological processes—an important step toward designing a synthetic kinase that could activate a tumor-suppressor protein, for example. After her Fulbright, she will pursue a doctorate in biological engineering at MIT. Caroline's PhD studies will be supported by a National Science Foundation Graduate Fellowship.

The Fulbright Program is the flagship international exchange program sponsored by the U.S. government. Seniors and graduate students who compete in the U.S. Fulbright Student Program can apply to one of the more than 160 countries whose universities are willing to host Fulbright Scholars. For the academic program, which sponsors one academic year of study or research abroad after the bachelor's degree, each applicant must submit a plan of research or study, a personal essay, three academic references, and a transcript that demonstrates a record of outstanding academic work.

Writer: 
Exclude from News Hub: 
No
News Type: 
In Our Community

Yeast Protein Network Could Provide Insights into Human Obesity

A team of biologists and a mathematician has identified and characterized a network composed of 94 proteins that work together to regulate fat storage in yeast.

"Removal of any one of the proteins results in an increase in cellular fat content, which is analogous to obesity," says study coauthor Bader Al-Anzi, a research scientist at Caltech.

The findings, detailed in the May issue of the journal PLOS Computational Biology, suggest that yeast could serve as a valuable test organism for studying human obesity.

"Many of the proteins we identified have mammalian counterparts, but detailed examinations of their role in humans has been challenging," says Al-Anzi. "The obesity research field would benefit greatly if a single-cell model organism such as yeast could be used—one that can be analyzed using easy, fast, and affordable methods."

Using genetic tools, Al-Anzi and his research assistant Patrick Arpp screened a collection of about 5,000 different mutant yeast strains and identified 94 genes that, when removed, produced yeast with increases in fat content, as measured by quantitating fat bands on thin-layer chromatography plates. Other studies have shown that such "obese" yeast cells grow more slowly than normal, an indication that in yeast as in humans, too much fat accumulation is not a good thing. "A yeast cell that uses most of its energy to synthesize fat that is not needed does so at the expense of other critical functions, and that ultimately slows down its growth and reproduction," Al-Anzi says.

When the team looked at the protein products of the genes, they discovered that those proteins physically bind to one another to form an extensive, highly clustered network within the cell.

Such a configuration cannot be generated through a random process, say study coauthors Sherif Gerges, a bioinformatician at Princeton University, and Noah Olsman, a graduate student in Caltech's Division of Engineering and Applied Science, who independently evaluated the details of the network. Both concluded that the network must have formed as the result of evolutionary selection.

In human-scale networks, such as the Internet, power grids, and social networks, the most influential or critical nodes are often, but not always, those that are the most highly connected.

The team wondered whether the fat-storage network exhibits this feature, and, if not, whether some other characteristics of the nodes would determine which ones were most critical. Then, they could ask if removing the genes that encode the most critical nodes would have the largest effect on fat content.

To examine this hypothesis further, Al-Anzi sought out the help of a mathematician familiar with graph theory, the branch of mathematics that considers the structure of nodes connected by edges, or pathways. "When I realized I needed help, I closed my laptop and went across campus to the mathematics department at Caltech," Al-Anzi recalls. "I walked into the only office door that was open at the time, and introduced myself."

The mathematician that Al-Anzi found that day was Christopher Ormerod, a Taussky–Todd Instructor in Mathematics at Caltech. Al-Anzi's data piqued Ormerod's curiosity. "I was especially struck by the fact that connections between the proteins in the network didn't appear to be random," says Ormerod, who is also a coauthor on the study. "I suspected there was something mathematically interesting happening in this network."

With the help of Ormerod, the team created a computer model that suggested the yeast fat network exhibits what is known as the small-world property. This is akin to a social network that contains many different local clusters of people who are linked to each other by mutual acquaintances, so that any person within the cluster can be reached via another person through a small number of steps.

This pattern is also seen in a well-known network model in graph theory, called the Watts-Strogatz model. The model was originally devised to explain the clustering phenomenon often observed in real networks, but had not previously been applied to cellular networks.
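For readers who want to see the small-world signature concretely, here is a minimal sketch using Python's networkx library. The 94-node count comes from the study, but the neighbor count, rewiring probability, and seed are illustrative stand-ins, not parameters from the paper.

```python
# Compare a ring lattice, a Watts-Strogatz small-world graph, and a
# fully rewired (essentially random) graph on 94 nodes.
import networkx as nx

n, k = 94, 6  # 94 nodes (one per protein); 6 ring neighbors each (illustrative)
graphs = {
    "lattice (p=0)":       nx.watts_strogatz_graph(n, k, 0.0, seed=1),
    "small-world (p=0.1)": nx.connected_watts_strogatz_graph(n, k, 0.1, seed=1),
    "random (p=1)":        nx.connected_watts_strogatz_graph(n, k, 1.0, seed=1),
}
for name, g in graphs.items():
    print(f"{name:20s} clustering = {nx.average_clustering(g):.2f}, "
          f"mean shortest path = {nx.average_shortest_path_length(g):.2f}")
# Small-world signature: clustering stays near the lattice's high value
# while the mean path length drops to near the random graph's low value.
```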

Ormerod suggested that graph theory might be used to make predictions that could be experimentally proven. For example, graph theory says that the most important nodes in the network are not necessarily the ones with the most connections, but rather those that have the most high-quality connections. In particular, nodes having many distant or circuitous connections are less important than those with more direct connections to other nodes, and, especially, direct connections to other important nodes. In mathematical jargon, these important nodes are said to have a high "centrality score."

"In network analysis, the centrality of a node serves as an indicator of its importance to the overall network," Ormerod says.

"Our work predicts that changing the proteins with the highest centrality scores will have a bigger effect on network output than average," he adds. And indeed, the researchers found that the removal of proteins with the highest predicted centrality scores produced yeast cells with a larger fat band than in yeast whose less-important proteins had been removed.

The use of centrality scores to gauge the relative importance of a protein in a cellular network is a marked departure from how proteins traditionally have been viewed and studied—that is, as lone players, whose characteristics are individually assessed. "It was a very local view of how cells functioned," Al-Anzi says. "Now we're realizing that the majority of proteins are parts of signaling networks that perform specific tasks within the cell."

Moving forward, the researchers think their technique could be applicable to protein networks that control other cellular functions—such as abnormal cell division, which can lead to cancer.

"These kinds of methods might allow researchers to determine which proteins are most important to study in order to understand diseases that arise when these functions are disrupted," says Kai Zinn, a professor of biology at Caltech and the study's senior author. "For example, defects in the control of cell growth and division can lead to cancer, and one might be able to use centrality scores to identify key proteins that regulate these processes. These might be proteins that had been overlooked in the past, and they could represent new targets for drug development."

Funding support for the paper, "Experimental and Computational Analysis of a Large Protein Network That Controls Fat Storage Reveals the Design Principles of a Signaling Network," was provided by the National Institutes of Health.

Contact: 
Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

Gravitational Waves—Sooner Than Later?

Built to look for gravitational waves, the ripples in the fabric of space itself that were predicted by Einstein in 1916, the Laser Interferometer Gravitational-Wave Observatory (LIGO) is the most ambitious project ever funded by the National Science Foundation. LIGO consists of two L-shaped interferometers with four-kilometer-long arms; at their ends hang mirrors whose motions are measured to within one-thousandth the diameter of a proton. Managed jointly by Caltech and MIT, Initial LIGO became operational in 2001; the second-generation Advanced LIGO was dedicated on May 19.

Barry Barish is the Roland and Maxine Linde Professor of Physics, Emeritus. He was LIGO's principal investigator from 1994 to 1997, and director from 1997 to 2006. Stan Whitcomb (BS '73) was an assistant professor of physics at Caltech from 1980 to 1985. He returned to campus as a member of the professional staff in 1991 and has served the LIGO project in various capacities ever since. We talked with each of them about how LIGO came to be.


Q: How did LIGO get started?

BARISH: Einstein didn't think that gravitational waves could ever be detected, because gravity is such a weak force. But in the 1960s, Joseph Weber at the University of Maryland turned a metric ton of aluminum into a bar 153 centimeters long. The bar naturally rang at a frequency of about 1,000 Hertz. A collapsing supernova should produce gravitational waves in that frequency range, so if such a wave passed through the bar, the bar's resonance might amplify it enough to be measurable. It was a neat idea, and basically initiated the field experimentally. But you can only make a bar so big, and the signal you see depends on the size of the detector.

[Professor of Physics, Emeritus] Ron Drever, whom we recruited from the University of Glasgow, had started out working on bar detectors. But when we hired him, he and Rainer [Rai] Weiss at MIT were independently developing interferometer-type detectors—a concept previously suggested by others. Usually you fasten an interferometer's mirrors down tightly so they keep their alignment, but LIGO's mirrors have to be free to swing so that the gravitational waves can move them. It's very difficult to do incredibly precise things with big, heavy masses that want to move around.


WHITCOMB: Although bar detectors were by far the most sensitive technology at the time, it appeared that they would have a much harder path reaching the sensitivity they would ultimately need. Kip Thorne [BS '62, Richard P. Feynman Professor of Theoretical Physics, Emeritus] was really instrumental in getting Caltech to jump into interferometer technology and to try to bring that along.

Ron's group at Glasgow had built a 10-meter interferometer, which was all the space they had. We built a 40-meter version largely based on their designs, but trying to improve them where possible. In those days we were working with argon-ion lasers, which were the best available, but very cantankerous. Their cooling water introduced a lot of vibrational noise into the system, making it difficult to reach the sensitivity we needed. We were also developing the control systems, which in those days had to be done with analog electronics. And we had some of the first "supermirrors," which were actually military technology that we were able to get released for scientific use. The longer the interferometer's arms, the smaller the displacements it can measure, and the effective length is the cumulative distance the light travels. We bounce the light back and forth hundreds of times, essentially making the interferometer several thousand kilometers long.
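As a rough worked example of that folding (the bounce count here is a round illustrative number, not a quoted LIGO specification):

$$L_{\text{eff}} \approx N_{\text{bounces}} \times L_{\text{arm}} \approx 500 \times 4\ \text{km} = 2000\ \text{km}.$$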


Q: When did the formal collaboration with MIT begin?

BARISH: Rai [Weiss] and Ron [Drever] were running their own projects at MIT and Caltech, respectively, until [R. Stanton Avery Distinguished Service Professor and Professor of Physics, Emeritus] Robbie Vogt, Caltech's provost, brought them together. They had very different ways of approaching the world, but Robbie somehow pulled what was needed out of both of them.

Robbie spearheaded the proposal that was submitted to the National Science Foundation in 1989. That two-volume, nearly 300-page document contained the underpinnings—the key ideas, technologies, and concepts that we use in LIGO today. A lot of details are different, a lot of things have been invented, but basically even the dimensions are much the same.


WHITCOMB: When I returned in 1991, LIGO had become a joint Caltech/MIT project with a single director, Robbie Vogt. Robbie had brought in a set of engineers, many borrowed or recruited from JPL, to do the designs. The late Boude Moore [BS '48 EE, MS '49 EE], our vacuum engineer, was figuring out how to make LIGO's high-vacuum systems out of low-hydrogen-outgassing stainless steel. This had never been done before. Hydrogen atoms absorbed in the metal slowly leak out over the life of the system, but our measurements are so precise that stray atoms hitting the mirrors would ruin the data. Boude was doing some relatively large-scale tests, mostly in the synchrotron building, but we also built a test cylinder 80 meters long near Caltech's football field, behind the gym.

So all of these tests were going on piecemeal at different places, and at the 40-meter interferometer we brought it all together. We were still mostly using analog electronics, but we had a new vacuum system, we redid all the suspension systems, we added several new features to the detector, and we had attained the sensitivity we were going to need for the full-sized, four-kilometer LIGO detectors.

And at the same time, in 1991, we got word that the full-scale project had been approved.


Q: How were the sites in Hanford, Washington, and Livingston, Louisiana, selected?

WHITCOMB: I cochaired the site-evaluation committee with LIGO's chief engineer, [Member of the Professional Staff] Bill Althouse. We visited most of the potential sites, evaluated them, and recommended a set of best site pairs to NSF. We had several sets of criteria. The engineering criteria included how level the site was, how stable it was against things like frost heaves, how much road would need to be built, and the overall cost of construction. We had criteria about proximity to people, and to noise sources like airports and railroads. We also had scientific criteria. For example, we wanted the two sites to be as far apart in the U.S. as you could reasonably get. We also wanted LIGO to work well with either of the two proposed European detectors—GEO [in Hanover, Germany] and Virgo [in Tuscany, Italy]. We needed to be able to triangulate a source's position on the sky, so we did not want LIGO's sites to form a line with either of them.


Q: What makes Advanced LIGO more sensitive?

BARISH: Well, it's complicated. Most very sensitive physics experiments get limited by some source of background noise, so you concentrate on that thing and figure out how to beat it down. But LIGO has three limits. We are looking for gravitational waves over a range of frequencies from 10 Hertz to 10 kilohertz. Our planet is incredibly noisy seismically, so from 10 Hertz to about 100 Hertz we have to isolate ourselves from that shaking. And at very high frequencies, we have to sample fast enough to see the signal, so we're limited by the laser's power, which determines the number of photons we can sample in a short amount of time. And in the middle frequencies, we're limited by what we call "thermal noise"—the atoms in the mirrors moving around, and so forth.

Advanced LIGO has a much more powerful laser to take care of the high frequencies. It has much fancier isolation systems, including active feedback systems. And we have bigger test masses with better mirror coatings to minimize the thermal background. All of these improvements were in the 1989 proposal, which called for Initial LIGO to be built with proven techniques that had mostly been tested here on campus in the 40-meter prototype, followed by Advanced LIGO, to be built using techniques we would test in the 40-meter after Initial LIGO went operational. And now we're using the 40-meter lab to develop and test the next round of upgrades.


Q: How close do you think we are to a detection?

BARISH: I've always had the fond wish that we'd do it by 2016, which is the hundredth anniversary of Einstein's theory. Advanced LIGO may take three to five years to reach the designed sensitivity, but we'll be taking data along the way, so the probability of a detection will be continually increasing. Our sensitivity is designed to improve by a factor of 10 to 20, and a factor of 10 increases the detection probability by a factor of 1,000. The sensitivity tells you how far out you can see, and volume increases with the cube of the distance.
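The arithmetic behind that estimate, in rough form (here $h_{\min}$ is the smallest detectable strain and $R$ the distance out to which a source can be seen):

$$R \propto \frac{1}{h_{\min}}, \qquad N_{\text{sources}} \propto \tfrac{4}{3}\pi R^{3}, \qquad \text{so } 10\times \text{ the sensitivity} \;\Rightarrow\; 10^{3} = 1000\times \text{ the candidate sources.}$$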

When we started this back in 1989, some people were a bit skeptical, saying maybe it's a little bit like fusion. They always say fusion is "50 years away." With LIGO the common lore is we are 10 years away from detecting gravitational waves. I would say that it's not 10 years any longer. It's probably within five.

Writer: 
Douglas Smith
Writer: 
Exclude from News Hub: 
No
News Type: 
In Our Community
Tuesday, May 26, 2015 to Friday, May 29, 2015
Center for Student Services 360 (Workshop Space) – Center for Student Services

CTLO Presents Ed Talk Week 2015

Ditch Day? It’s Today, Frosh!

Today we celebrate Ditch Day, one of Caltech's oldest traditions. During this annual spring rite—the timing of which is kept secret until the last minute—seniors ditch their classes and vanish from campus. Before they go, however, they leave behind complex, carefully planned out puzzles and challenges—known as "stacks"—designed to occupy the underclassmen and prevent them from wreaking havoc on the seniors' unoccupied rooms.

Follow the action on Caltech's Facebook, Twitter, and Instagram pages as the undergraduates tackle the puzzles left for them to solve around campus. Join the conversation by sharing your favorite Ditch Day memories and using #CaltechDitchDay in your tweets and postings.

Frontpage Title: 
Ditch Day 2015
Writer: 
Exclude from News Hub: 
No
News Type: 
In Our Community

Caltech Astronomers Observe a Supernova Colliding with Its Companion Star

Type Ia supernovae, one of the most dazzling phenomena in the universe, are produced when small, dense stars called white dwarfs explode with ferocious intensity. At their peak, these supernovae can outshine an entire galaxy. Although thousands of supernovae of this kind have been found in recent decades, the process by which a white dwarf becomes one has remained unclear.

That began to change on May 3, 2014, when a team of Caltech astronomers working on a robotic observing system known as the intermediate Palomar Transient Factory (iPTF)—a multi-institute collaboration led by Shrinivas Kulkarni, the John D. and Catherine T. MacArthur Professor of Astronomy and Planetary Science and director of the Caltech Optical Observatories—discovered a Type Ia supernova, designated iPTF14atg, in nearby galaxy IC831, located 300 million light-years away.

The data that were immediately collected by the iPTF team lend support to one of two competing theories about the origin of white dwarf supernovae, and also suggest the possibility that there are actually two distinct populations of this type of supernova.

The details are outlined in a paper appearing May 21 in the journal Nature, with Caltech graduate student Yi Cao as lead author.

Type Ia supernovae are known as "standardizable candles" because they allow astronomers to gauge cosmic distances by how dim they appear relative to how bright they actually are. It is like knowing that, from one mile away, a light bulb looks 100 times dimmer than another located only one-tenth of a mile away. This consistency is what made these stellar objects instrumental in measuring the accelerating expansion of the universe in the 1990s, earning three scientists the Nobel Prize in Physics in 2011.
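The light-bulb comparison is the inverse-square law at work; for a standard candle of known luminosity $L$, a measured flux $F$ yields the distance $d$ directly:

$$F = \frac{L}{4\pi d^{2}} \;\Rightarrow\; \frac{F(0.1\ \text{mile})}{F(1\ \text{mile})} = \left(\frac{1}{0.1}\right)^{2} = 100, \qquad d = \sqrt{\frac{L}{4\pi F}}.$$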

There are two competing origin theories, both starting with the same general scenario: the white dwarf that eventually explodes is one of a pair of stars orbiting around a common center of mass. The interaction between these two stars, the theories say, is responsible for triggering supernova development. What is the nature of that interaction? At this point, the theories diverge.

According to one theory, the so-called double-degenerate model, the companion to the exploding white dwarf is also a white dwarf, and the supernova explosion initiates when the two similar objects merge.

However, in the second theory, called the single-degenerate model, the second star is instead a sunlike star—or even a red giant, a much larger type of star. In this model, the white dwarf's powerful gravity pulls, or accretes, material from the second star. This process, in turn, increases the temperature and pressure in the center of the white dwarf until a runaway nuclear reaction begins, ending in a dramatic explosion.

The difficulty in determining which model is correct stems from the facts that supernova events are very rare—occurring about once every few centuries in our galaxy—and that the stars involved are very dim before the explosions.

That is where the iPTF comes in. From atop Palomar Mountain in Southern California, where it is mounted on the 48-inch Samuel Oschin Telescope, the project's fully automated camera optically surveys roughly 1000 square degrees of sky per night (approximately 1/20th of the visible sky above the horizon), looking for transients—objects, including Type Ia supernovae, whose brightness changes over timescales that range from hours to days.

On May 3, the iPTF took images of IC831 and transmitted the data for analysis to computers at the National Energy Research Scientific Computing Center, where a machine-learning algorithm analyzed the images and prioritized real celestial objects over digital artifacts. Because this first-pass analysis occurred when it was nighttime in the United States but daytime in Europe, the iPTF's European and Israeli collaborators were the first to sift through the prioritized objects, looking for intriguing signals. After they spotted the possible supernova—a signal that had not been visible in the images taken just the night before—the European and Israeli team alerted their U.S. counterparts, including Caltech graduate student and iPTF team member Yi Cao.

Cao and his colleagues then mobilized both ground- and space-based telescopes, including NASA's Swift satellite, which observes ultraviolet (UV) light, to take a closer look at the young supernova.

"My colleagues and I spent many sleepless nights on designing our system to search for luminous ultraviolet emission from baby Type Ia supernovae," says Cao. "As you can imagine, I was fired up when I first saw a bright spot at the location of this supernova in the ultraviolet image. I knew this was likely what we had been hoping for."

UV radiation has higher energy than visible light, so it is particularly suited to observing very hot objects like supernovae (although such observations are possible only from space, because Earth's atmosphere and ozone layer absorb almost all of this incoming UV). Swift measured a pulse of UV radiation that declined initially but then rose as the supernova brightened. Because such a pulse is short-lived, it can be missed by surveys that scan the sky less frequently than does the iPTF.

This observed ultraviolet pulse is consistent with a formation scenario in which the material ejected from a supernova explosion slams into a companion star, generating a shock wave that ignites the surrounding material. In other words, the data are in agreement with the single-degenerate model.

Back in 2010, Daniel Kasen, an associate professor of astronomy and physics at UC Berkeley and Lawrence Berkeley National Laboratory, used theoretical calculations and supercomputer simulations to predict just such a pulse from supernova-companion collisions. "After I made that prediction, a lot of people tried to look for that signature," Kasen says. "This is the first time that anyone has seen it. It opens up an entirely new way to study the origins of exploding stars."

According to Kulkarni, the discovery "provides direct evidence for the existence of a companion star in a Type Ia supernova, and demonstrates that at least some Type Ia supernovae originate from the single-degenerate channel."

Although the data from supernova iPTF14atg support its origin in a single-degenerate system, other Type Ia supernovae may result from double-degenerate systems. In fact, observations in 2011 of SN2011fe, another Type Ia supernova discovered in the nearby galaxy Messier 101 by PTF (the precursor to the iPTF), appeared to rule out the single-degenerate model for that particular supernova. And that means that both theories actually may be valid, says Caltech professor of theoretical astrophysics Sterl Phinney, who was not involved in the research. "The news is that it seems that both sets of theoretical models are right, and there are two very different kinds of Type Ia supernovae."

"Both rapid discovery of supernovae in their infancy by iPTF, and rapid follow-up by the Swift satellite, were essential to unveil the companion to this exploding white dwarf. Now we have to do this again and again to determine the fractions of Type Ia supernovae akin to different origin theories," says iPTF team member Mansi Kasliwal, who will join the Caltech astronomy faculty as an assistant professor in September 2015.

The iPTF project is a scientific collaboration between Caltech; Los Alamos National Laboratory; the University of Wisconsin–Milwaukee; the Oskar Klein Centre in Sweden; the Weizmann Institute of Science in Israel; the TANGO Program of the University System of Taiwan; and the Kavli Institute for the Physics and Mathematics of the Universe in Japan. The Caltech team is funded in part by the National Science Foundation.

Frontpage Title: 
Caltech Astronomers See Supernova Collide with Companion Star
Listing Title: 
Caltech Astronomers See Supernova Collide with Companion Star
Contact: 
Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

Dedication of Advanced LIGO

The Advanced LIGO Project, a major upgrade that will increase the sensitivity of the Laser Interferometer Gravitational-Wave Observatory (LIGO) instruments by a factor of 10 and provide a 1,000-fold increase in the number of astrophysical candidates for gravitational-wave signals, was officially dedicated today in a ceremony held at the LIGO Hanford facility in Richland, Washington.

LIGO was designed and is operated by Caltech and MIT, with funding from the National Science Foundation (NSF). Advanced LIGO, also funded by the NSF, will begin its first searches for gravitational waves in the fall of this year.

The dedication ceremony featured remarks from Caltech president Thomas F. Rosenbaum, the Sonja and William Davidow Presidential Chair and professor of physics; Professor of Physics Tom Soifer (BS '68), the Kent and Joyce Kresa Leadership Chair of Caltech's Division of Physics, Mathematics and Astronomy; and NSF director France Córdova (PhD '79).

"We've spent the past seven years putting together the most sensitive gravitational-wave detector ever built. Commissioning the detectors has gone extremely well thus far, and we are looking forward to our first science run with Advanced LIGO beginning later in 2015.  This is a very exciting time for the field," says Caltech's David H. Reitze, executive director of the LIGO Project.

"Advanced LIGO represents a critically important step forward in our continuing effort to understand the extraordinary mysteries of our universe," says Córdova. "It gives scientists a highly sophisticated instrument for detecting gravitational waves, which we believe carry with them information about their dynamic origins and about the nature of gravity that cannot be obtained by conventional astronomical tools."

"This is a particularly thrilling event, marking the opening of a new window on the universe, one that will allow us to see the final cataclysmic moments in the lives of stars that would otherwise be invisible to us," says Soifer.

Predicted by Albert Einstein in 1916 as a consequence of his general theory of relativity, gravitational waves are ripples in the fabric of space and time produced by violent events in the distant universe—for example, by the collision of two black holes or by the cores of supernova explosions. Gravitational waves are emitted by accelerating masses much in the same way as radio waves are produced by accelerating charges, such as electrons in antennas. As they travel to Earth, these ripples in the space-time fabric bring with them information about their violent origins and about the nature of gravity that cannot be obtained by other astronomical tools.

Although they have not yet been detected directly, the influence of gravitational waves on a binary pulsar system (two neutron stars orbiting each other) has been measured accurately and is in excellent agreement with the predictions. Scientists therefore have great confidence that gravitational waves exist. But a direct detection will confirm Einstein's vision of the waves and allow a fascinating new window into cataclysms in the cosmos.

LIGO was originally proposed as a means of detecting these gravitational waves. Each of the 4-km-long L-shaped LIGO interferometers (one each at LIGO Hanford and at the LIGO observatory in Livingston, Louisiana) uses a laser split into two beams that travel back and forth down long arms (which are beam tubes from which the air has been evacuated). The beams are used to monitor the distance between precisely configured mirrors. According to Einstein's theory, the relative distance between the mirrors will change very slightly when a gravitational wave passes by.

The original configuration of LIGO was sensitive enough to detect a change in the lengths of the 4-km arms by a distance one-thousandth the size of a proton; this is like accurately measuring the distance from Earth to the nearest star (4.2 light-years away) to within the width of a human hair. Advanced LIGO, which will utilize the infrastructure of LIGO, will be 10 times more sensitive.
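With round numbers, the analogy can be checked as a statement about fractional precision, or strain; taking a proton to be about $1.7\times10^{-15}$ m across:

$$h = \frac{\Delta L}{L} \approx \frac{1.7\times10^{-15}\ \text{m}/1000}{4\times10^{3}\ \text{m}} \approx 4\times10^{-22}, \qquad 4\times10^{-22} \times 4\times10^{16}\ \text{m} \approx 2\times10^{-5}\ \text{m},$$

where $4\times10^{16}$ m is roughly 4.2 light-years, and $2\times10^{-5}$ m is indeed smaller than the roughly $10^{-4}$ m width of a human hair.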

Included in the upgrade were changes in the lasers (180-watt highly stabilized systems), optics (40-kg fused-silica "test mass" mirrors suspended by fused-silica fibers), seismic isolation systems (using inertial sensing and feedback), and in how the microscopic motion (less than one billionth of one billionth of a meter) of the test masses is detected.

The change of more than a factor of 10 in sensitivity also comes with a significant increase in the sensitive frequency range. This will allow Advanced LIGO to look at the last minutes of the life of pairs of massive black holes as they spiral closer, coalesce into one larger black hole, and then vibrate much like two soap bubbles becoming one. It will also allow the instrument to pinpoint periodic signals from the many known pulsars that radiate in the range from 500 to 1,000 Hertz (frequencies that correspond to high notes on an organ).

Advanced LIGO will also be used to search for the gravitational cosmic background—allowing tests of theories about the development of the universe only 10⁻³⁵ seconds after the Big Bang.

LIGO research is carried out by the LIGO Scientific Collaboration (LSC), a group of some 950 scientists at universities around the United States and in 15 other countries. The LSC network includes the LIGO interferometers and the GEO600 interferometer, located near Hannover, Germany, and the LSC works jointly with the Virgo Collaboration—which designed and constructed the 3-km-long Virgo interferometer located in Cascina, Italy—to analyze data from the LIGO, GEO, and Virgo interferometers.

Several international partners, including the Max Planck Institute for Gravitational Physics (Albert Einstein Institute), the Laser Zentrum Hannover, and the Leibniz Universität Hannover in Germany; an Australian consortium of universities, led by the Australian National University and the University of Adelaide and supported by the Australian Research Council; partners in the United Kingdom funded by the Science and Technology Facilities Council; and the University of Florida and Columbia University, provided significant contributions of equipment, labor, and expertise.

Contact: 
Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

Caltech Students Named Goldwater Scholars

Two Caltech students, Saaket Agrawal and Paul Dieterle, have been awarded Barry M. Goldwater scholarships for the 2015–16 academic year.

The Barry Goldwater Scholarship and Excellence in Education Program was established by Congress in 1986 to award scholarships to college students who intend to pursue research careers in science, mathematics, and engineering.

Saaket Agrawal is a sophomore from El Dorado Hills, California, majoring in chemistry. Under Greg Fu, the Altair Professor of Chemistry, Agrawal works on nickel-catalyzed cross-coupling, a powerful method for making carbon-carbon bonds. Specifically, Agrawal conducts mechanistic studies of these reactions, elucidating the pathways through which they occur. After Caltech, he plans to pursue a PhD in organometallic chemistry—the combination of organic (carbon-based) and inorganic chemistry—and ultimately hopes to teach at the university level.

"Caltech is one of the best places in the world to study chemistry. The faculty were so willing to take me on, even as an undergrad, and treat me like a capable scientist," Agrawal says. "That respect, and the ability to do meaningful work, has motivated me."

Paul Dieterle is a junior from Madison, Wisconsin, majoring in applied physics. He works with Oskar Painter, the John G. Braun Professor of Applied Physics, studying quantum information science.

"The quantum behavior of atoms has been studied for decades. We are researching the way macroscopic objects behave in a quantum mechanical way in order to manipulate them into specific quantum states," Dieterle says. Painter's group is studying how to use macroscopic mechanical objects to transform quantized electrical signals into quantized optical signals as part of the larger field of quantum computing, a potential next generation development in the field.

"The power of quantum computing would be immense," says Dieterle, who would like to attend graduate school to study quantum information science. "We could simulate incredibly complex things, like particles at the edge of a black hole. Participating in this physics revolution is so exciting."

Agrawal and Dieterle bring the number of Caltech Goldwater Scholars to 22 in the last decade.

Writer: 
Exclude from News Hub: 
No
News Type: 
In Our Community

The Planet Finder: A Conversation with Dimitri Mawet

Associate Professor of Astronomy Dimitri Mawet has joined Caltech from the Paranal Observatory in Chile, where he was a staff astronomer for the Very Large Telescope. After earning his PhD at the University of Liège, Belgium, in 2006, he was at JPL from 2007 to 2011—first as a NASA postdoctoral scholar and then as a research scientist.


Q: What do you do?

A: I study exoplanets, which are planets orbiting other stars. In particular, I'm developing technologies to view exoplanets directly and analyze their atmospheres. We're hunting for small, Earth-like planets where life might exist—in other words, planets that get just the right amount of heat to maintain water in its liquid state—but we're not there yet. For an exoplanet to be imaged right now, it has to be really big and really bright, which means it's very hot.

In order to be seen in the glare of its star, the planet has to be beyond a minimum angular separation called the inner working angle. Separations can also be expressed in astronomical units, or AUs, where one AU is the mean distance between the sun and Earth. Right now we can get down to about two AU—but only for giant planets. For example, we recently imaged Beta Pictoris and HR 8799. We didn't find anything at two AU in either star system, but we found that Beta Pictoris harbors a planet about eight times more massive than Jupiter orbiting at 9 AU. And we see a family of four planets in the five- to seven-Jupiters range that orbit from 14 to 68 AU around HR 8799. For comparison, Saturn is 9.5 AU from the sun, and Neptune is 30 AU.


Q: How can we narrow the working angle?

A: You either build an interferometer, which blends the light from two or more telescopes and "nulls out" the star, or you build a coronagraph, which blots out the star's light. Most coronagraphs block the star's image by putting a physical mask in the optical path. The laws of physics say their inner working angles can't be less than the so-called diffraction limit, and most coronagraphs work at three to five times that. However, when I was a grad student, I invented a coronagraph that works at the diffraction limit.
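To put numbers on that (illustrative values, not the specification of any particular instrument): diffraction sets an angular scale $\lambda/D$ for a telescope of diameter $D$ observing at wavelength $\lambda$, and a coronagraph's inner working angle is some multiple $k$ of it:

$$\theta_{\text{IWA}} = k\,\frac{\lambda}{D}; \qquad \lambda = 2.2\ \mu\text{m},\; D = 5\ \text{m} \;\Rightarrow\; \frac{\lambda}{D} \approx 4.4\times10^{-7}\ \text{rad} \approx 0.09''.$$

A classical mask working at $k \approx 3$ to $5$ is thus blind within roughly 0.3 arcseconds of the star, while a vortex mask at $k \approx 1$ probes about three times closer.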

The key is that we don't use a physical mask. Instead, we create an "optical vortex" that expels the star's light from the instrument. Some of our vortex masks are made from liquid-crystal polymers, similar to your smartphone's display, except that the molecules are "frozen" into orientations that force light waves passing through the center of the mask to emerge in different phase states simultaneously. This is not something nature allows, so the light's energy is nulled out, creating a "dark hole."

If we point the telescope so the star's image lands exactly on the vortex, its light will be filtered out, but any light that's not perfectly centered on the vortex—such as light from the planets, or from a dust disk around the star—will be slightly off-axis and will go on through to the detector.
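Here is a toy Fourier-optics sketch of that principle in Python with numpy. It assumes monochromatic light and an ideal charge-2 vortex; the grid size, pupil diameter, Lyot-stop ratio, and source offset are made-up illustrative values, and this is a sketch of the concept, not Mawet's instrument code.

```python
# Toy vortex coronagraph: pupil -> focal plane (vortex mask) ->
# relayed pupil (Lyot stop) -> throughput. Illustrative values only.
import numpy as np

N, D = 512, 100                            # grid size, pupil diameter (pixels)
y, x = np.indices((N, N)) - N / 2
r, theta = np.hypot(x, y), np.arctan2(y, x)

pupil = (r <= D / 2).astype(float)         # circular telescope aperture
vortex = np.exp(2j * theta)                # charge-2 vortex: phase winds by 4*pi
lyot = (r <= 0.9 * D / 2).astype(float)    # slightly undersized Lyot stop

def throughput(offset_pix):
    """Fraction of a source's light reaching the final image.
    offset_pix is the source's focal-plane offset (0 = the on-axis star)."""
    field = pupil * np.exp(2j * np.pi * offset_pix * x / N)  # tilt = offset
    focal = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    focal *= vortex                        # star's image sits on the vortex
    relay = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(focal)))
    final = relay * lyot                   # stop blocks the expelled starlight
    return (np.abs(final) ** 2).sum() / (np.abs(field) ** 2).sum()

print(f"on-axis star:    {throughput(0):.1e}")   # strongly suppressed
print(f"off-axis source: {throughput(25):.2f}")  # mostly transmitted
# The star's leakage is small but nonzero here because a finite grid
# cannot sample the vortex singularity exactly; an ideal mask nulls it.
```

In this idealized picture, the on-axis starlight returns to the relayed pupil plane entirely outside the geometric pupil, so the undersized Lyot stop removes it, while an off-axis source is barely affected.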

We're also pushing to overcome the enormous contrast ratio between the very bright star and the much dimmer planet. Getting down to the Earth-like regime requires a contrast ratio of 10 billion to 1, which is really huge. The best contrast ratios achieved on ground-based telescopes today are more like 1,000,000 to 1. So we need to pump it up by another factor of 10,000.

Even so, we can do a lot of comparative exoplanetology, studying any and all kinds of planets in as many star systems as we can. The variety of objects around other stars—and within our own solar system—is mind-boggling. We are discovering totally unexpected things.


Q: Such as?

A: Twenty years ago, people were surprised to discover hot Jupiters, which are huge, gaseous planets that orbit extremely close to their stars—as close as 0.04 AU, or one-tenth the distance between the sun and Mercury. We have nothing like them in our solar system. They were discovered indirectly, by the wobble they imparted to their star or the dimming of their star's light as the planet passed across the line of sight. But now, with high-contrast imaging, we can actually see—directly—systems of equally massive planets that orbit tens or even hundreds of AU away from their stars, which is baffling.

Planets form within circumstellar disks of dust and gas, but these disks get very tenuous as you go farther from the star. So how did these planets form? One hypothesis is that they formed where we see them, and thus represent failed attempts to become multiple star systems. Another hypothesis is that they formed close to the star, where the disk is more massive, and eventually expelled one another by gravitational interactions.

We're trying to answer that question by starting at the outskirts of these planetary systems, looking for massive, hot planets in the early stages of formation, and then grinding our way into the inner reaches of older planetary systems as we learn to reduce the working angle and deal with ever more daunting contrast ratios. Eventually, we will be able to trace the complete history of planetary formation.


Q: How can you figure out the history?

A: Once we see the planet, once we have its signal in our hands, so to speak, we can do all kinds of very cool measurements. We can measure its position, that's called astrometry; we can measure its brightness, which is photometry; and, if we have enough signal, we can sort the light into its wavelengths and do spectroscopy.

As you repeat the astrometry measurements over time, you resolve the planet's orbit by following its motion around its star. You can work out masses, calculate the system's stability. If you add the time axis to spectrophotometry, you can begin to track atmospheric features and measure the planet's rotation, which is even more amazing.

Soon we'll be able to do what we call Doppler imaging, which will allow us to actually map the surface of the planet. We'll be able to resolve planetary weather phenomena. That's already been done for brown dwarfs, which are easier to observe than exoplanets. The next generation of adaptive optics on really big telescopes like the Thirty Meter Telescope should get us down to planetary-mass objects.

That's why I'm so excited about high-contrast imaging, even though it's so very, very hard to do. Most of what we know about exoplanets has been inferred. Direct imaging will tell us so much more about exoplanets—what they are made out of and how they form, evolve, and interact with their surroundings.


Q: Growing up, did you always want to be an astronomer?

A: No. I wanted to get into space sciences—rockets, satellite testing, things like that. I grew up in Belgium and studied engineering at the University of Liège, which runs the European Space Agency's biggest testing facility, the Space Center of Liège. I had planned to do my master's thesis there, but there were no openings the year I got my diploma.

I was not considering a thesis in astronomy, but I nevertheless went back to campus, to the astrophysics department. I knew some of the professors because I had taken courses with them. One of them, Jean Surdej, suggested that I work on a concept called the Four-Quadrant Phase-Mask (FQPM) coronagraph, which had been invented by French astronomer Daniel Rouan. I had been a bit hopeless, thinking I would not find a project I would like, but Surdej changed my life that day.

The FQPM was one of the first coronagraphs designed for very-small-working-angle imaging of extrasolar planets. These devices performed well in the lab, but had not yet been adapted for use on telescopes. Jean, and later on Daniel, asked me to help build two FQPMs—one for the "planet finder" on the European Southern Observatory's Very Large Telescope, or VLT, in Chile; and one for the Mid-Infrared Instrument that will fly on the James Webb Space Telescope, which is being built to replace the Hubble Space Telescope.

I spent many hours in Liège's Hololab, their holographic laboratory, playing with photoresists and lasers. It really forged my sense of what the technology could do. And along the way, I came up with the idea for the optical vortex.

Then I went to JPL as a NASA postdoc with Eugene Serabyn. I still spent my time in the lab, but now I was testing things in the High Contrast Imaging Testbed, which is the ultimate facility anywhere in the world for testing coronagraphs. It has a vacuum tank, six feet in diameter and eight feet long, and inside the tank is an optical table with a state-of-the-art deformable mirror. I got a few bruises crawling around in the tank setting up the vortex masks and installing and aligning the optics.

The first vortex coronagraph actually used on the night sky was the one we installed on the 200-inch Hale Telescope down at Palomar Observatory. The Hale's adaptive optics enabled us to image the planets around HR 8799, as well as brown dwarfs, circumstellar disks, and binary star systems. That was a fantastic and fun learning experience.

So I developed my physics and manufacturing intuition in Liège, my experimental and observational skills at JPL, and then I went to Paranal where I actually applied my research. I spent about 400 nights observing at the VLT; I installed two new vortex coronagraphs with my Liège collaborators; and I became the instrument scientist for SPHERE, to which I had contributed 10 years before when it was called the planet finder. And I learned how a major observatory operates—the ins and outs of scheduling, and all the vital jobs that are performed by huge teams of engineers. They far outnumber the astronomers, and nothing would function without them.

And now I am super excited to be here. Caltech and JPL have so many divisions and departments and satellites—like Caltech's Division of Physics, Mathematics and Astronomy and JPL's Science Division, both my new professional homes, but also Caltech's Division of Geology and Planetary Sciences, the NASA Exoplanet Science Institute, the Infrared Processing and Analysis Center, etc. We are well-connected to the University of California. There are so many bridges to build between all these places, and synergies to benefit from. This is really a central place for innovation. I think, for me, that this is definitely the center of the world.

Writer: 
Douglas Smith
Writer: 
Exclude from News Hub: 
No
Short Title: 
The Planet Finder
News Type: 
Research News

Scoville Awarded Radio Astronomy Lectureship

Nick Scoville, the Francis L. Moseley Professor of Astronomy, has been awarded the 2015 Karl G. Jansky Lectureship from the National Radio Astronomy Observatory (NRAO) and the Associated Universities, Inc. The lectureship is named for Karl Jansky, a pioneer in the field of radio astronomy and the first to detect radio waves from a cosmic source.

Scoville's research currently focuses on the formation and evolution of galaxies and their central black holes, as studied using the Cosmic Evolution Survey (COSMOS). The survey maps galaxies as a function of cosmic time by observing the redshift in their light spectra. Redshift is the physical phenomenon in which the light spectrum emitted by an object is shifted toward longer, redder wavelengths because the object is moving away from the observer. Scoville is interested in mapping large-scale structures of the universe at high redshift—such structures include superclusters of galaxies that form the "cosmic web." He is currently using the new Atacama Large Millimeter Array (ALMA) to investigate the evolution of star formation in the early universe and in nearby colliding starburst galaxies.

Scoville arrived at Caltech as a professor in 1984. He previously served as director of Caltech's Owens Valley Radio Observatory, and his awards include a Guggenheim Fellowship and the University of Arizona's Aaronson Lectureship, awarded for excellence in astronomical research. As Jansky Lecturer, Scoville will give public lectures at NRAO facilities in Charlottesville, Virginia; Green Bank, West Virginia; and Socorro, New Mexico.

Contact: 
Writer: 
Exclude from News Hub: 
No
News Type: 
In Our Community
