The "Supernova of a Generation" Shows Its Stuff

Astronomers determine how the brightest and closest stellar explosion in 25 years blew up

PASADENA, Calif.—It was the brightest and closest stellar explosion seen from Earth in 25 years, dazzling professional and backyard astronomers alike. Now, thanks to this rare discovery—which some have called the "supernova of a generation"—astronomers have the most detailed picture yet of how this kind of explosion happens. Known as a Type Ia supernova, this type of blast is an essential tool that allows scientists to measure the expansion of the universe and understand the very nature of the cosmos.

"What caused these explosions has divided the astronomical community deeply," says Shri Kulkarni, the John D. and Catherine T. MacArthur Professor of Astronomy and Planetary Sciences. But this new supernova—dubbed SN2011fe—can help astronomers solve this longstanding mystery. "SN2011fe is like the Rosetta Stone of Type Ia supernovae," says Kulkarni, who is also the principal investigator on the Palomar Transient Factory (PTF). Led by the California Institute of Technology (Caltech), the PTF is designed to survey the skies for transient flashes of light that last for a few days or months, such as those emitted by exploding stars.

On August 24, the PTF team discovered the supernova in one of the arms of the Pinwheel Galaxy (also called M101), 21 million light years away. They caught the supernova just 11 hours after it exploded.

"Never before have we seen a stellar thermonuclear explosion so soon after it happened," says Lars Bildsten, professor of theoretical astrophysics at the Kavli Institute for Theoretical Physics at UC Santa Barbara, and member of the PTF team, which described its supernova findings in the December 15 issue of the journal Nature.

The PTF team uses an automated system to search for supernovae, and because they pointed their telescopes at SN2011fe so quickly after its detonation, the astronomers were able to put together a blow-by-blow analysis of the explosion, determining that the supernova involves a dense, Earth-sized object called a white dwarf and, most likely, a main-sequence star (a star in the main stage of its life).

Scientists have long suspected that Type Ia supernovae involve a binary system of two stars in orbit around each other, with one of those stars being a white dwarf. The white dwarf, which is made out of carbon and oxygen, explodes when matter from its companion star spills over onto its surface. But no one is sure what kind of star the companion is. Scientists have suggested that it's another white dwarf, a main-sequence star, a helium star, or a star in a late life stage that's puffed up into a red giant.

Still, because the explosion always involves a white dwarf, its overall brightness and behavior is relatively predictable, making it a useful tool for measuring distances. Since all Type Ia supernovae produce about the same amount of light, those that appear dimmer must be farther away. In this way, by measuring the brightness of supernovae, astronomers can use them as cosmic meter sticks to determine the size of the universe—and how fast it's expanding. In fact, the work that earned the 2011 Nobel Prize in physics—the discovery that the expansion of the universe is speeding up—was based on observations using Type Ia supernovae.
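
The "cosmic meter stick" arithmetic is the astronomer's distance modulus. Here is a minimal sketch in Python, assuming the commonly quoted peak absolute magnitude of about -19.3 for Type Ia supernovae (an illustrative textbook value, not a figure from this article):

```python
def distance_parsecs(apparent_mag, absolute_mag=-19.3):
    """Distance from the distance modulus: m - M = 5 * log10(d / 10 pc).

    The default absolute magnitude is a typical peak value for
    Type Ia supernovae, assumed here for illustration.
    """
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# SN2011fe peaked near apparent magnitude 10; the modulus then puts it
# at roughly 7 megaparsecs, about 24 million light-years, in line with
# the quoted distance to the Pinwheel Galaxy.
d_parsecs = distance_parsecs(10.0)
d_mly = d_parsecs * 3.26156 / 1e6  # parsecs -> millions of light-years
```

The dimmer a supernova appears (the larger its apparent magnitude), the farther away it is; a difference of five magnitudes corresponds to a factor of ten in distance.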

"This discovery is exciting because the supernova's infancy and proximity allows us to directly see what the progenitor system is," explains Mansi Kasliwal, an astronomer at the Carnegie Institution for Science who is a recent Caltech doctoral graduate and a coauthor on the paper. "We have expected for a while that a Type Ia supernova involves a carbon-oxygen white dwarf, but now we have direct evidence."

In the case of SN2011fe, the researchers were also able to deduce, by process of elimination, that the companion star is most likely a main-sequence star. How do they know?

If the companion were a red giant, the explosion of the white dwarf would have sent a shock wave through the red giant, heating it. This scenario would have generated several tens of times more light than the astronomers observed. Additionally, it happens that the Hubble Space Telescope took images of the location where SN2011fe lived before it blew up. When the researchers looked at the data, they found no evidence of red giants or helium stars.

If the companion were another white dwarf, the interactions between the companion and the explosion would have produced light in the optical and ultraviolet wavelengths. Since no such radiation was seen coming from SN2011fe, it is less likely that the companion was a white dwarf.

These results—described in a companion paper in the same issue of Nature—along with X-ray and radio observations that also show no evidence for red giants or helium stars, rule those stars out as the companion. Caltech postdoc Assaf Horesh is the lead author on the paper describing the X-ray and radio data, which will be published in The Astrophysical Journal.

The astronomers have also observed, in unprecedented detail, the material that's blown off during the explosion. In particular, the team detected oxygen hurtling out from the supernova at speeds of over 20,000 kilometers per second—the first time anyone has seen high-speed oxygen coming from a Type Ia supernova, according to the researchers. "These observations probe the thin, outermost layers of the explosion," Bildsten says. "These are the parts that are moving the fastest, for which we have never been able to see this mix of atomic elements."

Not only was the supernova detected quickly, but the data processing—performed by researchers led by Peter Nugent, staff scientist at Lawrence Berkeley National Laboratory—was also done within hours. The machine-learning algorithms developed by Joshua Bloom, an associate professor at UC Berkeley, also helped make the fast find possible. And because the astronomers caught the blast so soon after it ignited, and because it's so close, the researchers say SN2011fe will become one of the best-studied supernovae ever.

"The rapid discovery and classification of SN2011fe—all on the same night—is a testament to the great teamwork among researchers from over half a dozen institutions," Kulkarni says. "The future looks very bright. Soon we should be finding supernovae at an even younger age and thereby better understand how these explosions happen."

Nugent is the lead author on the Nature paper, which is titled, "Supernova 2011fe from an exploding carbon-oxygen white dwarf star." The lead author on the companion paper, "Exclusion of a luminous red giant as a companion star to the progenitor of supernova SN 2011fe," is Weidong Li of UC Berkeley. The Astrophysical Journal paper is titled, "Early radio and X-ray observations of the youngest nearby type Ia supernova PTF11kly (SN 2011fe)."

The Palomar Transient Factory (PTF) uses the 48-inch Oschin Schmidt telescope and the 60-inch telescope of the Palomar Observatory of Caltech for its observations and is a collaboration between Caltech, Columbia University, Las Cumbres Observatory Global Telescope, Lawrence Berkeley National Laboratory, Oxford University, UC Berkeley, and the Weizmann Institute of Science.

NOTE: Weidong Li died on December 12, just before the publication of these papers.

Marcus Woo

More Clues in the Hunt for the Higgs

Physicists unveil the largest amount of data ever presented for the Higgs search

PASADENA, Calif.—Physicists have announced that the Large Hadron Collider (LHC) has produced yet more tantalizing hints of the existence of the Higgs boson. At the European Center for Nuclear Research (CERN) in Geneva, the international team of thousands of scientists—including many from the California Institute of Technology (Caltech)—unveiled for the first time all the data taken over the last year from the two main detectors at the LHC: the Compact Muon Solenoid (CMS) and ATLAS (A Toroidal LHC ApparatuS). The results represent the largest amount of data ever presented for the Higgs search.

The Higgs boson is a hypothesized particle that endows every other particle with mass, and is the presumed last piece of the so-called Standard Model, the theory that describes how every particle interacts. According to physicists, the discovery of the Higgs boson, in whatever form it may take, is crucial for understanding the fundamental laws of physics.  

The team says they have seen what they call "excess events"—a slight surplus of particle-collision events over what would be expected if the Higgs didn't exist. This suggests that the particle might have a mass between 115 and 127 gigaelectron volts (GeV, a unit of mass; in comparison, the mass of a proton is about 1 GeV). While the physicists can't yet claim discovery of the elusive particle, they are closer than ever, having ruled out a very large range of the Higgs's possible masses with great certainty.

But if physicists do not find the Higgs in the remaining mass range between 115 and 127 GeV, then it means the particle—if it exists at all—is of a more exotic form, requiring new theories of physics. Although remarkably successful, the Standard Model is incomplete, and an exotic form of the Higgs could help point the way toward a more complete theory—which physicists say is an exciting challenge.

"This is the beginning of a game-changing time for particle physics," says Harvey Newman, professor of physics. Along with professor of physics Maria Spiropulu, Newman leads Caltech's group that works on the CMS detector.

The LHC searches for the Higgs boson by slamming together protons at near-light speeds, producing new particles like the Higgs in the process. The problem is that the particle is exceptionally short-lived, decaying into other smaller particles within a tiny fraction of a second of its birth. To find the Higgs, physicists have to pick through the remains of each proton collision and reconstruct what happened.

If the collisions successfully produce a Higgs, then the particle can decay in several ways, depending on its mass. The Caltech team helped analyze three of these decays, called channels, in which the Higgs either decays into two photons, a pair of particles called W bosons, or another pair called Z bosons.

Graduate students Yong Yang, Yousi Ma, Jan Veverka, and Vladlen Timciuc are all studying the photon-photon channel, searching for the Higgs as well as possible signs of new physics phenomena. Caltech Tolman Postdoctoral Scholar Emanuele Di Marco helped lead the analysis of the W-boson channel for the CMS team. Spiropulu, graduate student Chris Rogan, and other colleagues have also done preparatory studies for the Z-boson channel, which they published in the Physical Review in 2010.

"The impressive data produced by the CMS is the result of the experiment's ability to cleanly identify and precisely measure the energies of photons, electrons, and positrons," says Adi Bornheim, a Caltech staff scientist who heads the electromagnetic calorimeter (ECAL) detector group at CMS. Composed of 76,000 crystal detectors and weighing in at more than 90 tons, the calorimeter measures the energy of the electrons and photons produced by LHC collisions with high resolution.

"For the last 17 years, our group has led the way in constructing and calibrating the calorimeter, which has to be extremely precise for these demanding studies," adds Marat Gataullin, an assistant scientist at Caltech.

The collision experiments in 2010 and 2011 at the LHC operated only at half their designed energy levels, the researchers say. Still, the experiments exceeded design specifications for the proton beam's focus and intensity, producing about one quadrillion (a million billion) proton collisions and resulting in millions of gigabytes of data. Even though the latest experiments were more complicated than ever, the scientists in the ATLAS and CMS teams say they were able to analyze the data in record time, using new technology pioneered and developed by Newman's group at Caltech.

In order to confirm once and for all whether the Higgs exists as physicists understand it—or if they'll need to come up with new theories—the LHC will need to be cranked up to collide protons with more energy. "We now need more data—three or four times the data that we expect the LHC to deliver in 2012," Newman says. The LHC is currently operating at around seven teraelectron volts (a TeV is a thousand times larger than a GeV), and scientists are considering boosting the energy in the next year, which will help in the search. The LHC is designed to smash protons using energies as high as 14 TeV. "We foresee reaching 14 TeV by 2015, boosting the intensity by ten times starting in around 2022, and cranking up the energy to 33 TeV starting about 20 years from now," he adds. "This will open a vast new realm for exploration, and will surely revolutionize our understanding of the nature of matter and forces at the most basic level."

The Caltech CMS group, which includes eight graduate students and several postdocs, engineers, and technical staff, is working on many other projects in addition to the Higgs search, such as exploring supersymmetry (a theory that says every particle has a "supersymmetric" partner), searching for other exotic, theoretical particles, and developing new kinds of particle detectors.

"We're grateful for the achievements of the LHC team and our colleagues with CMS," Spiropulu says. "We are working hard on the final stage of improving the experiments and on publishing the results—both about the Higgs and possible new, exciting theories of physics—in the coming weeks and months."

For more information, go to the Caltech CMS website, the CMS public site, and the American CMS site.

Marcus Woo

High-Energy Physicists Set Record for Network Data Transfer

With a sustained data rate of 186 gigabits per second, high-energy physicists demonstrate the efficient use of long-range networks to support cutting-edge science

PASADENA, Calif.—Researchers have set a new world record for data transfer, helping to usher in the next generation of high-speed network technology. At the SuperComputing 2011 (SC11) conference in Seattle during mid-November, the international team transferred data in opposite directions at a combined rate of 186 gigabits per second (Gbps) in a wide-area network circuit. The rate is equivalent to moving two million gigabytes per day, fast enough to transfer nearly 100,000 full Blu-ray disks—each with a complete movie and all the extras—in a day.
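
The conversions behind those figures can be checked in a few lines; a quick sketch, assuming a 25 GB single-layer Blu-ray disk (the disk capacity is an assumption, since the article doesn't specify one):

```python
SECONDS_PER_DAY = 86_400

rate_gbps = 186                                        # combined rate, gigabits/s
bytes_per_day = rate_gbps * 1e9 / 8 * SECONDS_PER_DAY  # bits -> bytes, then per day
gigabytes_per_day = bytes_per_day / 1e9                # about 2 million GB per day

blu_rays_per_day = gigabytes_per_day / 25              # 25 GB per single-layer disk
# Roughly 80,000 single-layer disks per day, the same order as the
# article's "nearly 100,000" once real disk usage and rounding are
# allowed for.
```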

The team of high-energy physicists, computer scientists, and network engineers was led by the California Institute of Technology (Caltech), the University of Victoria, the University of Michigan, the European Center for Nuclear Research (CERN), Florida International University, and other partners.

According to the researchers, the achievement will help establish new ways to transport the increasingly large quantities of data that traverse continents and oceans via global networks of optical fibers. These new methods are needed for the next generation of network technology—which allows transfer rates of 40 and 100 Gbps—that will be built in the next couple of years.

"Our group and its partners are showing how massive amounts of data will be handled and transported in the future," says Harvey Newman, professor of physics and head of the high-energy physics (HEP) team. "Having these tools in our hands allows us to engage in realizable visions others do not have. We can see a clear path to a future others cannot yet imagine with any confidence."

Using a 100-Gbps circuit set up by Canada's Advanced Research and Innovation Network (CANARIE) and BCNET, a non-profit, shared IT services organization, the team was able to reach transfer rates of 98 Gbps between the University of Victoria Computing Centre located in Victoria, British Columbia, and the Washington State Convention Centre in Seattle. With a simultaneous data rate of 88 Gbps in the opposite direction, the team reached a sustained two-way data rate of 186 Gbps between two data centers, breaking the team's previous peak-rate record of 119 Gbps set in 2009.

In addition, partners from the University of Florida, the University of California at San Diego, Vanderbilt University, Brazil (Rio de Janeiro State University and the São Paulo State University), and Korea (Kyungpook National University and the Korean Institute for Science and Technology Information) helped with a larger demonstration, transferring massive amounts of data between the Caltech booth at the SC11 conference and other locations within the United States, as well as in Brazil and Korea.

The fast transfer rate is also crucial for dealing with the tremendous amounts of data coming from the Large Hadron Collider (LHC) at CERN, the particle accelerator that physicists hope will help them discover new particles and better understand the nature of matter, and space and time, solving some of the biggest mysteries of the universe. More than 100 petabytes (more than four million Blu-ray disks) of data have been processed, distributed, and analyzed using a global grid of 300 computing and storage facilities located at laboratories and universities around the world, and the data volume is expected to rise a thousand-fold as physicists crank up the collision rates and energies at the LHC.

"Enabling scientists anywhere in the world to work on the LHC data is a key objective, bringing the best minds together to work on the mysteries of the universe," says David Foster, the deputy IT department head at CERN.

"The 100-Gbps demonstration at SC11 is pushing the limits of network technology by showing that it is possible to transfer petascale particle physics data in a matter of hours to anywhere around the world," adds Randall Sobie, a research scientist at the Institute of Particle Physics in Canada and team member.

The key to discovery, the researchers say, is in picking out the rare signals that may indicate new physics discoveries from a sea of potentially overwhelming background noise caused by already understood particle interactions. To do this, individual physicists and small groups located around the world must repeatedly access—and sometimes extract and transport—multiterabyte data sets on demand from petabyte data stores. That's equivalent to grabbing hundreds of Blu-ray movies all at once from a pool of hundreds of thousands. The HEP team hopes that the demonstrations at SC11 will pave the way toward more effective distribution and use of the masses of LHC data for discoveries.

"By sharing our methods and tools with scientists in many fields, we hope that the research community will be well positioned to further enable their discoveries, taking full advantage of 100 Gbps networks as they become available," Newman says. "In particular, we hope that these developments will afford physicists and young students the opportunity to participate directly in the LHC's next round of discoveries as they emerge."

More information about the demonstration, including a video, is available online.

This work was supported by the U.S. Department of Energy Office of Science and the National Science Foundation, in cooperation with the funding agencies of the international partners. Equipment and support was also provided by the team's industry partners: CIENA, Brocade, Mellanox, Dell and Force10 (now Dell/Force10), and Supermicro.

Harvey B. Newman, Professor of Physics
(626) 395-6656

Marcus Woo

Snowflake Science

Caltech Physicist Explains Why Snowflakes Are So Thin and Flat

We've all heard that no two snowflakes are alike. Caltech professor of physics Kenneth Libbrecht will tell you that this has to do with the ever-changing conditions in the clouds where snow crystals form. Now Libbrecht, widely known as the snowflake guru, has shed some light on a grand puzzle in snowflake science: why the canonical, six-armed "stellar" snowflakes wind up so thin and flat.

Few people pay close attention to the form that snow crystals—a.k.a. snowflakes—take as they fall from the sky. But in the late 1990s, Libbrecht's interest in the tiny white doilies was piqued. The physicist, who until then had worked to better understand the sun and to detect cosmic gravitational waves, happened across an article describing one of many common snowflake structures—a capped column, which looks something like an icy thread bobbin under the microscope. Such a snowflake starts out, as all do, as a hexagonal crystal of ice. As it grows, accumulating water molecules from the air, it forms a tiny column. Then it encounters conditions elsewhere in the cloud that promote the growth of platelike structures, so it ends up with platelike caps at both ends of the column.

"I read about capped columns, and I just thought, 'I grew up in snow country. How come I've never seen one of these?'" Libbrecht says. The next time he went home to North Dakota, he grabbed a magnifying glass and headed outside. "I saw capped columns. I saw all these different snowflakes," he says. "It's very easy. It's just that I had never looked."

Since then, he has published seven books of snowflake photographs, including a field guide for other eager snowflake watchers. And his library of snowflake images boasts more than 10,000 photographs. But Libbrecht is a physicist, so beyond capturing stunning pictures, he wanted to understand the molecular dynamics that dictate how ice crystals grow. For that, he's developed methods for growing and analyzing snowflakes in the lab.

Now Libbrecht believes he's on his way to explaining one of the major outstanding questions of snowflake science—a question at the heart of his original interest in capped columns all those years ago. Scientists have known for more than 75 years that at conditions typically found in snowflake-producing clouds, ice crystals follow a standard pattern of growth: near -2°C, they grow into thin, platelike forms; near -5°C, they create slender columns and needles; near -15°C, they become really thin plates; and at temperatures below -30°C, they're back to columns. But no one has been able to explain why such relatively small changes in temperature yield such dramatic changes in snowflake structure.

Libbrecht started his observations with the thinnest, largest platelike snowflakes, which form around -15°C in high humidity. Some of these snowflakes are about as sharp as the edge of a razor blade. "What I found in my experiments," Libbrecht says, "is a growth instability, or sharpening effect." He noticed that as a snow crystal develops at -15°C, the top edge starts to develop a little bump of a ledge, which gets sharp at the tip. Basically, the corners stick out a bit farther toward the moist air, so they grow faster. And a cycle begins: "As soon as the ledge gets a little bit sharper, then it grows faster, and if it grows faster, then it gets sharper still, creating a positive feedback effect," Libbrecht says. "In the atmosphere, it would just get bigger and bigger and thinner and thinner, and eventually you'd get a really nice, beautiful snowflake."

If this sharpening effect occurs at other temperatures, which is likely, then it explains how small changes in temperature can yield such wildly varying snowflake structures. "The sharpening effect can yield thin plates or slender columns, just by changing directions," Libbrecht says. "That's a big piece of the puzzle, because now you don't have to make these enormous changes to get different structures. You just have to explain why the instability tips to produce plates at some temperatures, and tips to make columns at other temperatures. The flip-flopping of the sharpening effect nicely explains how the ice growth rates can change by a factor of 1,000 when the temperature changes by just a few degrees."
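
The runaway Libbrecht describes can be caricatured in a few lines. This is a toy numerical sketch of positive-feedback amplification, not his actual growth model; the 10 percent gain per step is an arbitrary assumption:

```python
def tip_growth(steps, gain=1.10, rate=1.0):
    """Toy positive-feedback loop: a sharper tip grows faster, and
    faster growth sharpens the tip further, so the growth rate
    compounds multiplicatively at every step."""
    history = [rate]
    for _ in range(steps):
        rate *= gain  # each step's growth sharpens the tip a bit more
        history.append(rate)
    return history

rates = tip_growth(50)
# Fifty 10-percent steps compound to a growth rate more than 100 times
# the starting rate, the kind of runaway that turns a small ledge into
# a razor-thin plate.
```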

Libbrecht can't yet fully explain the underlying molecular mechanisms that produce the sharpening effect or exactly why different temperatures lead to sharpening on different faces of growing snow crystals. "But," he says, "this is a real advance in snowflake science. Now you can explain why the plates are so thin and the columns are so tall."

Kimm Fesenmaier

Caltech-Led Team of Astronomers Finds 18 New Planets

Discovery is the largest collection of confirmed planets around stars more massive than the sun

PASADENA, Calif.—Discoveries of new planets just keep coming and coming. Take, for instance, the 18 recently found by a team of astronomers led by scientists at the California Institute of Technology (Caltech).

"It's the largest single announcement of planets in orbit around stars more massive than the sun, aside from the discoveries made by the Kepler mission," says John Johnson, assistant professor of astronomy at Caltech and the first author on the team's paper, which was published in the December issue of The Astrophysical Journal Supplement Series. The Kepler mission is a space telescope that has so far identified more than 1,200 possible planets, though the majority of those have not yet been confirmed.

Using the Keck Observatory in Hawaii—with follow-up observations at the McDonald and Fairborn Observatories in Texas and Arizona, respectively—the researchers surveyed about 300 stars. They focused on "retired" A-type stars, which are more than one and a half times as massive as the sun. These stars are just past the main stage of their lives—hence, "retired"—and are now puffing up into what are called subgiant stars.

To look for planets, the astronomers searched for stars of this type that wobble, which could be caused by the gravitational tug of an orbiting planet. By searching the wobbly stars' spectra for Doppler shifts—the lengthening and contracting of wavelengths due to motion away from and toward the observer—the team found 18 planets with masses similar to Jupiter's.
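
The wobble measurement boils down to the non-relativistic Doppler relation, shift/wavelength = velocity/c. A minimal sketch follows; the 200 m/s reflex velocity and the H-alpha line are illustrative choices, not numbers from the survey:

```python
C = 299_792_458.0  # speed of light, m/s

def radial_velocity(rest_nm, observed_nm):
    """Line-of-sight velocity implied by a Doppler-shifted spectral line."""
    return C * (observed_nm - rest_nm) / rest_nm

def shifted_wavelength(rest_nm, velocity_ms):
    """Where a spectral line lands for a given radial velocity."""
    return rest_nm * (1 + velocity_ms / C)

# A star wobbling toward and away from us at 200 m/s shifts the
# 656.3 nm H-alpha line by only ~0.0004 nm, which is why the method
# demands very precise spectrographs.
shift_nm = shifted_wavelength(656.3, 200.0) - 656.3
```

Tracking how the shift swings between positive and negative over an orbit reveals the planet's period and a lower bound on its mass.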

This new bounty marks a 50 percent increase in the number of known planets orbiting massive stars and, according to Johnson, provides an invaluable population of planetary systems for understanding how planets—and our own solar system—might form. The researchers say that the findings also lend further support to the theory that planets grow from seed particles that accumulate gas and dust in a disk surrounding a newborn star.

According to this theory, tiny particles start to clump together, eventually snowballing into a planet. If this is the true sequence of events, the characteristics of the resulting planetary system—such as the number and size of the planets, or their orbital shapes—will depend on the mass of the star. For instance, a more massive star would mean a bigger disk, which in turn would mean more material to produce a greater number of giant planets.

In another theory, planets form when large amounts of gas and dust in the disk spontaneously collapse into big, dense clumps that then become planets. But in this picture, it turns out that the mass of the star doesn't affect the kinds of planets that are produced.

So far, as the number of discovered planets has grown, astronomers are finding that stellar mass does seem to be important in determining the prevalence of giant planets. The newly discovered planets further support this pattern—and are therefore consistent with the first theory, the one stating that planets are born from seed particles.

"It's nice to see all these converging lines of evidence pointing toward one class of formation mechanisms," Johnson says.

There's another interesting twist, he adds: "Not only do we find Jupiter-like planets more frequently around massive stars, but we find them in wider orbits." If you took a sample of 18 planets around sunlike stars, he explains, half of them would orbit close to their stars. But in the cases of the new planets, all are farther away, at least 0.7 astronomical units from their stars. (One astronomical unit, or AU, is the distance from Earth to the sun.)

In systems with sunlike stars, gas giants like Jupiter acquire close orbits when they migrate toward their stars. According to theories of planet formation, gas giants could only have formed far from their stars, where it's cold enough for their constituent gases and ices to exist. So for gas giants to orbit nearer to their stars, certain gravitational interactions have to take place to pull these planets in. Then, some other mechanism—perhaps the star's magnetic field—has to kick in to stop them from spiraling into a fiery death.

The question, Johnson says, is why this doesn't seem to happen with so-called hot Jupiters orbiting massive stars, and whether that dearth is due to nature or nurture. In the nature explanation, Jupiter-like planets that orbit massive stars just wouldn't ever migrate inward. In the nurture interpretation, the planets would move in, but there would be nothing to prevent them from plunging into their stars. Or perhaps the stars evolve and swell up, consuming their planets. Which is the case? According to Johnson, subgiants like the A stars they were looking at in this paper simply don't expand enough to gobble up hot Jupiters. So unless A stars have some unique characteristic that would prevent them from stopping migrating planets—such as a lack of a magnetic field early in their lives—it looks like the nature explanation is the more plausible one.

The new batch of planets has yet another interesting pattern: their orbits are mainly circular, while planets around sunlike stars span a wide range of circular to elliptical paths. Johnson says he's now trying to find an explanation.

For Johnson, these discoveries have been a long time coming. This latest find, for instance, comes from an astronomical survey that he started while a graduate student; because these planets have wide orbits, they can take a couple of years to make a single revolution, meaning that it can also take quite a few years before their stars' periodic wobbles become apparent to an observer. Now, the discoveries are finally coming in. "I liken it to a garden—you plant the seeds and put a lot of work into it," he says. "Then, a decade in, your garden is big and flourishing. That's where I am right now. My garden is full of these big, bright, juicy tomatoes—these Jupiter-sized planets."

The other authors on The Astrophysical Journal Supplement Series paper, "Retired A stars and their companions VII. Eighteen new Jovian planets," include former Caltech undergraduate Christian Clanton, who graduated in 2010; Caltech postdoctoral scholar Justin Crepp; and nine others from the Institute for Astronomy at the University of Hawaii; the University of California, Berkeley; the Center of Excellence in Information Systems at Tennessee State University; the McDonald Observatory at the University of Texas, Austin; and the Pennsylvania State University. The research was supported by the National Science Foundation and NASA.

Marcus Woo

Caltech Ranked First in Physical Sciences

Caltech's physical-sciences program is number one among world universities in this year's Times Higher Education rankings, sharing the top spot with Princeton.

"We're pleased that Caltech is recognized as one of the world's best universities in the physical sciences," says Tom Soifer, chair of the Division of Physics, Mathematics and Astronomy. "We take great pride in our research and in educating the world's leading scientists of the future."

Last year, Caltech's physical-sciences program was second to Harvard. This year, Harvard drops to sixth while UC Berkeley, MIT, and Stanford fill out the top five. Times Higher Education has also listed Caltech as the top university in the world and has placed Caltech's engineering and technology program first.

In addition to physical sciences and engineering and technology, Times Higher Education World University Rankings 2011–2012 ranks four other subjects: arts and humanities; clinical, preclinical, and health; life sciences; and social sciences. Out of the top 50 physical-sciences programs, 27 are in the United States.

The rankings are based on data compiled by Thomson Reuters. For the complete list of the world's top 50 physical-sciences programs—as well as the rest of the rankings and all the performance indicators—go to the Times Higher Education website.

Marcus Woo
News Type: In Our Community

Particles and Pants

New Faculty Join PMA

Ryan Patterson is no stranger to Caltech. As an undergraduate, the Mississippi native studied physics, devoting a good part of his last two years to doing research with Professor of Physics Brad Filippone, building an electron gun to help calibrate an experiment that analyzed the decay of ultracold neutrons. "It was a lot of fun," Patterson recalls.

After graduating in 2000, Patterson got his PhD in physics at Princeton, where he studied the elusive neutrino, a nearly massless particle that zips around at almost the speed of light. He returned to Caltech as a postdoctoral scholar in 2007, and, in 2010, joined the faculty of the Division of Physics, Mathematics and Astronomy as an assistant professor. His work and day-to-day life as a professor at Caltech are an entirely new experience, he says: "It's completely different. It doesn't even really feel like the same place."

Patterson's research focuses on the mysterious nature of the neutrino. Produced in nuclear reactions—such as in stars or in nuclear power plants—neutrinos hardly interact with anything. In fact, billions of them are harmlessly surging through your body this very second.

Most of Patterson's attention is now centered on NOvA, a new neutrino experiment that's scheduled to start running in 2013. NOvA will measure so-called muon neutrinos produced at Fermilab near Chicago. One of its main goals is to learn whether the muon neutrinos are turning into another type called electron neutrinos. Measuring how many of these transformations take place—if any occur at all—will help physicists determine a parameter called the mixing angle. Knowing this number is key to some of the big fundamental questions in physics, such as why the universe is full of matter instead of antimatter.
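The measurement rests on the standard two-flavor oscillation formula, which can be sketched in a few lines. This is an illustrative back-of-envelope calculation, not NOvA code: the ~810 km baseline and ~2 GeV energy are round numbers for the Fermilab-to-Minnesota setup, and the mixing and mass-splitting inputs are placeholder magnitudes, not measured values.

```python
import math

def osc_prob(sin2_2theta, dm2_ev2, L_km, E_gev):
    """Two-flavor appearance probability P(nu_mu -> nu_e).

    Standard approximation: P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    with dm2 in eV^2, L in km, and E in GeV.
    """
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2

# Placeholder inputs of roughly the right order (assumptions, not results):
p = osc_prob(sin2_2theta=0.09, dm2_ev2=2.4e-3, L_km=810, E_gev=2.0)
print(f"appearance probability ~ {p:.3f}")
```

With these inputs only a few percent of muon neutrinos would show up as electron neutrinos, which is why experiments like NOvA need intense beams and massive detectors.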

According to current theory, the big bang should have created equal amounts of matter and antimatter. But when the two interact, they annihilate each other and produce energy, meaning that stars, planets, and us—all made out of ordinary matter—shouldn't exist. And since we do exist, there somehow must have been a bit more matter than antimatter at the beginning. "We should've all been annihilated at the beginning of the universe," Patterson says. "Neutrinos may hold the key as to why that asymmetry is there, and we're trying to understand that."

Vladimir Markovic comes to Caltech from the University of Warwick, where he was on the mathematics faculty for 10 years. He also spent time as an associate professor at SUNY Stony Brook. Having just arrived in Pasadena in August, the professor of mathematics is the newest member of the PMA faculty.

Markovic studies the shapes and structures of mathematical spaces called manifolds. (For example, a line is a one-dimensional manifold while a plane is a two-dimensional one.) For the mathematically minded, he's an expert in low-dimensional geometry and Teichmüller theory. In particular, he has worked with something called the "good pants homology," which involves a mathematical object with three holes that has the same topological properties as a pair of pants, which has a hole for the waist and two for the legs. He combines these structures, mathematically stitching the pants together at their holes to create new structures. For example, attaching two pairs of pants at their waists would result in a new structure with four holes (the four legs). "You can say that what I do for a living is gluing pants," Markovic quips. Gluing more and more pants together produces increasingly complicated surfaces, and his goal is to understand the properties of those surfaces.
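The bookkeeping behind this gluing can be illustrated with the Euler characteristic, a standard topological invariant. The sketch below is a generic textbook counting exercise, not Markovic's actual machinery: a pair of pants has Euler characteristic -1 and three boundary circles, and gluing two boundary circles together removes both from the boundary without changing the total Euler characteristic (a circle's Euler characteristic is 0).

```python
def glue_pants(num_pants, num_gluings):
    """Track Euler characteristic and boundary-circle count while gluing
    pairs of pants. Each pair of pants contributes chi = -1 and 3 boundary
    circles; each gluing identifies two boundary circles."""
    chi = -1 * num_pants
    boundaries = 3 * num_pants - 2 * num_gluings
    return chi, boundaries

def genus(chi, boundaries):
    # For a compact orientable surface: chi = 2 - 2g - b, so g = (2 - b - chi) / 2.
    return (2 - boundaries - chi) // 2

# Two pairs of pants glued at their waists: a genus-0 surface with
# 4 boundary circles -- the "four legs" picture from the article.
chi, b = glue_pants(num_pants=2, num_gluings=1)
print(chi, b, genus(chi, b))  # -2 4 0
```

Gluing more pants (and more of their boundary circles) drives the Euler characteristic down and the genus up, which is one way to see how increasingly complicated surfaces arise.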

Born in Germany, Markovic studied mathematics at the University of Belgrade, receiving his PhD in 1998. Math was one of his two passions in school; soccer was the other. Mathematics, he says, is also about competition—it's about seeking new challenges in much the same way that mountain climbers want to scale higher and higher peaks. "I like solving problems. It's really the big draw for me, and it's still what excites me."

Marcus Woo
News Type: In Our Community

Oceans of Water in a Planet-Forming Disk

Astronomers have detected massive quantities of water in a planet-forming gas disk around a young star. The water—which is frozen in the icy outer regions of the disk—could fill Earth's oceans several thousand times over. The discovery, published in the October 21 issue of the journal Science, could help explain how Earth got its oceans and suggests that our planet may not be the only watery world in the cosmos.

"This new result shows that the reservoir of water ice in such a disk is huge," says Darek Lis, a senior research associate in physics at Caltech and a coauthor on the paper. If other planet-forming disks also have such copious amounts of water, then there's a greater chance that other planets are also wet. "Water-covered planets like Earth may be quite common," he says.

To make the discovery, the team of researchers, which includes Caltech professor of planetary science Geoff Blake and JPL's John Pearson, pointed the Herschel Space Observatory at a star called TW Hydrae, located 175 light years away. TW Hydrae, which is only about 10 million years old, is surrounded by a disk of gas—just as the young sun was about 4.6 billion years ago.

The team found the water vapor—which previously had never been detected in the outer regions of such a disk—using Herschel's Heterodyne Instrument for the Far Infrared (HIFI). The vapor, the researchers say, likely is produced when ultraviolet light from the central and other nearby stars bombards large reservoirs of ice in the disk.

Lis, Blake, and colleagues estimate that the disk holds several thousand oceans' worth of water ice. The fact that there's so much water in this embryonic planetary system means that the outer part of the solar nebula—the gas disk that formed our solar system—could have been chock-full of ice as well. Such a large source of water was crucial for the creation of Earth's seas. According to the current theory of solar-system formation, water was scarce in the inner part of the solar nebula, where Earth formed about 4.5 billion years ago. "Water is essential to life as we know it," Blake says. "But the early Earth is predicted to have been hot and dry." Earth's water, then, must have come from somewhere else. One likely source? Comets.

Comets, often called dirty snowballs, are chunks of ice and rock that orbit the sun in long, swooping trajectories. Because they spend most of their time in the frigid outer edges of the solar system, comets can contain prodigious quantities of water ice, and collisions of a few million comets with Earth could have brought enough water to create the oceans. A few million comets may sound like a lot, but there was a tremendous amount of debris flying around back then, and with possibly as many as trillions of icy objects in the outer solar system, the researchers explain, only a tiny fraction would have needed to hit Earth.
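The arithmetic is easy to sketch. The following is a rough order-of-magnitude estimate in which every input is an assumed round number, not a figure from the paper:

```python
import math

# How many comets would it take to deliver Earth's oceans?
OCEAN_MASS_KG = 1.4e21    # total mass of Earth's oceans (standard estimate)
COMET_RADIUS_M = 5e3      # a fairly large, ~10-km-diameter comet (assumption)
COMET_DENSITY = 500.0     # kg/m^3, loosely packed ice and rock (assumption)
ICE_FRACTION = 0.5        # fraction of comet mass that is water ice (assumption)

comet_mass = (4 / 3) * math.pi * COMET_RADIUS_M ** 3 * COMET_DENSITY
water_per_comet = comet_mass * ICE_FRACTION
n_comets = OCEAN_MASS_KG / water_per_comet
print(f"~{n_comets:.0e} comets needed")
```

With these assumptions the answer lands in the millions to tens of millions, and with trillions of icy bodies available, that is indeed a tiny fraction.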

If this story is true, ample water should exist in the outer disk where comets form—which is exactly what the astronomers just discovered in TW Hydrae. "These results beautifully confirm the notion that the critical reservoir of ice in forming planetary systems lies well outside the formation zone of Earthlike planets," Blake says.

The TW Hydrae measurements come on the heels of the discovery that the chemical signature of water ice in comet Hartley 2 is similar to the signature of Earth's oceans, published online on October 5 in the journal Nature. In the Nature paper, Lis and Blake, along with Caltech postdoctoral scholar Martin Emprechtinger, measured the ratio of deuterium (an isotope of hydrogen with an extra neutron in its nucleus) to regular hydrogen in water ice evaporated from Hartley 2. The ratio was very similar to the ratios in Earth's ocean water, supporting the idea that the seas did come from the skies.
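For a sense of scale, the ratios involved can be compared directly. The numbers below are approximate illustrative values (the ocean figure is the standard VSMOW reference; the comet figures are rough published estimates), not values quoted from the papers:

```python
# Approximate deuterium-to-hydrogen ratios (illustrative values):
D_TO_H = {
    "Earth's oceans (VSMOW)":   1.56e-4,
    "Comet Hartley 2":          1.61e-4,
    "Typical Oort-cloud comet": 3.0e-4,
}

ocean = D_TO_H["Earth's oceans (VSMOW)"]
for source, ratio in D_TO_H.items():
    print(f"{source}: {ratio:.2e}  ({ratio / ocean:.2f}x the ocean value)")
```

Hartley 2's ratio is within a few percent of the ocean value, while typical Oort-cloud comets run nearly twice as high, which is why the Kuiper Belt origin of Hartley 2 matters.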

Previous measurements of the composition of ice in other comets revealed chemical signatures different from those of our oceans, suggesting that Earth got most of its water from asteroids. These other comets, however, were from the Oort cloud, a distant collection of up to trillions of icy bodies enveloping the entire solar system. Hartley 2 comes from the Kuiper Belt, a belt of objects at the edge of the solar system. Therefore, Lis says, "icy bodies in the outer solar system—the Kuiper Belt—could have been the source of Earth's water. These findings are yet another important step in our quest to understand the origin of life on Earth and assess possibilities of life in other planetary systems."

To read more, see the JPL press releases on TW Hydrae and comet Hartley 2.

Marcus Woo
News Type: Research News

Thorne Selected to Receive Graduate Education Award

Kip Thorne, Caltech's Feynman Professor of Theoretical Physics, Emeritus, has been selected to receive the 2012 John David Jackson Excellence in Graduate Physics Education Award from the American Association of Physics Teachers (AAPT).

Thorne has won many awards over the years—including the American Physical Society's 1996 Lilienfeld Prize, the 2004 California Scientist of the Year award, and a 2010 UNESCO Niels Bohr Gold Medal—in recognition of his contributions to the current understanding of black holes and gravitational waves. The Jackson Award recognizes another aspect of his career: his contributions as a teacher and mentor.

Thorne has been recognized previously for the role he has played in the education of young scientists—in 2000, he won an ASCIT (Associated Students of Caltech) award for his teaching of undergraduates, and in 2004, he was honored with the Caltech Graduate Student Council Mentoring Award.

According to a statement prepared by the AAPT, "Thorne has been mentor and thesis advisor for more than 50 Ph.D. physicists who have gone on to become world leaders in their chosen fields of research and teaching. A list of current leaders in relativity, gravitational waves, relativistic astrophysics, and even quantum information theory, would be heavily populated by former graduate students of Kip Thorne, together with other students who took his courses and were inspired and enabled by them."

For his part, Thorne says, "More than anything else, the thing that kept me at Caltech throughout my career was our superb graduate students and postdocs.  I have learned more from them over the years than they have learned from me.  I am grateful to them for the role they played in nominating me for this award."

In their letter of nomination, a group of Thorne's former students wrote, "Graduate physics education is under-appreciated and often neglected. We hope that the new AAPT Jackson award, and its acknowledgment of great teachers and textbook writers, like Kip, will inspire others to follow in his footsteps. We know of no one who has worked harder on his teaching and his textbooks, and with greater resulting effect, than our beloved teacher, Kip Thorne."

Thorne earned his BS from Caltech in 1962 and his PhD from Princeton University in 1965. He joined the Caltech faculty in 1966 and became a professor emeritus in 2009.

Thorne will receive the award in February at the 2012 AAPT Winter Meeting in Ontario, California. 

To read the full release from the AAPT, click here.

Kimm Fesenmaier

Caltech Awarded $12.6 Million for New Institute for Quantum Information and Matter

PASADENA, Calif.—The California Institute of Technology (Caltech) has been awarded $12.6 million in funding over the next five years by the National Science Foundation (NSF) to create a new Physics Frontiers Center. Dubbed the Institute for Quantum Information and Matter (IQIM), the center will bring physicists and computer scientists together to push theoretical and experimental boundaries in the study of exotic quantum states.

Every three years, the NSF selects new Physics Frontier Centers for funding based on their potential for transformational advances in the most promising research areas at the intellectual frontiers of physics. Caltech's IQIM was chosen for funding from more than 50 proposals this year. 

The NSF's decision to fund the IQIM leverages the groundwork done by the Center for Exotic Quantum Systems (CEQS), a program funded by the Gordon and Betty Moore Foundation, as well as an earlier NSF-sponsored Institute for Quantum Information (IQI). With the support of the NSF and the Moore Foundation, the new Physics Frontiers Center, CEQS, and IQI will be merged into a single entity—the Institute for Quantum Information and Matter.

"The unrestricted funds provided by the Moore Foundation had a dramatic effect on the decision to fund this Physics Frontiers Center," says Caltech president Jean-Lou Chameau. "That discretionary funding allowed the provost to provide seed money to what might otherwise have been considered a somewhat risky, unconventional field of study. Now, it is one of our most exciting and rapidly growing research initiatives."

Fundamental particles at the atomic level behave according to the laws of quantum physics, which in many respects defy common sense. At this level, individual particles of a composite system can become strongly correlated, or entangled, in such a way that they maintain their relation to one another no matter where they exist in the universe. Such quantum entanglement can endow a system with astonishing properties.
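The correlation described here can be illustrated with the simplest entangled state, a two-qubit Bell state. This is a generic textbook sketch, not IQIM research code:

```python
import numpy as np

# A two-qubit Bell state: (|00> + |11>) / sqrt(2). Measuring the two qubits
# always gives matching results, even though each individual result is random.
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)

probs = bell ** 2  # Born rule: outcome probabilities are squared amplitudes
for outcome, p in zip(["00", "01", "10", "11"], probs):
    print(outcome, round(float(p), 2))
# Only "00" and "11" ever occur, each with probability 0.5 -- the qubits'
# results agree no matter how far apart they are.
```

The mismatched outcomes ("01" and "10") have zero probability, which is the "astonishing property" in miniature: neither qubit has a definite value on its own, yet the pair is perfectly correlated.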

The IQIM will bring together Caltech's established theoretical programs and analytic tools for studying the quantum realm with emerging laboratory capabilities that will allow scientists to delve deeper into quantum entanglement and the unimagined behaviors it may yield. The research is aimed at making advances in basic physics, as well as helping to provide scientific foundations for designing materials with remarkable properties; additionally, this work may eventually help point the way to a quantum computer capable of solving problems that today's digital computers could never handle.

"My colleagues and I believe that an exciting frontier of 21st-century science is the exploration of the surprising phenomena that can arise in highly entangled quantum systems," says H. Jeff Kimble, the William L. Valentine Professor and professor of physics at Caltech, who will direct the IQIM. "The IQIM will provide a sustaining base for our efforts to discover new principles and phenomena at this entanglement frontier."

In addition to Kimble, the Institute for Quantum Information and Matter will be led by three codirectors: Jim Eisenstein, the current director of CEQS and the Frank J. Roshek Professor of Physics and Applied Physics; Oskar Painter, professor of applied physics and executive officer for applied physics and materials science; and John Preskill, the current director of the IQI and the Richard P. Feynman Professor of Theoretical Physics.

Studies of quantum entanglement and its applications are necessarily multidisciplinary in nature. Therefore, the 16 Caltech faculty members who will make up the core of the new center are drawn from such disciplines as physics, applied physics, and computer science. The newly renovated historic Norman Bridge Laboratory of Physics and the IQI's home base in the Annenberg Center for Information Science and Technology will serve as two central hubs for IQIM faculty on campus.

"When you bring innovative scientists and engineers together and provide them with the facilities and collaborative spaces they need, magic happens. The magic involves transforming the way we think about and impact our world," says Ares Rosakis, chair of the Division of Engineering and Applied Science (EAS) at Caltech. "I am delighted that an initial collaboration beginning in 2000 between the Division of Engineering and Applied Science (EAS) and the Division of Physics, Mathematics and Astronomy (PMA)—the Institute of Quantum Information (IQI)—planted the seeds for this new NSF institute at Caltech."

Kimm Fesenmaier
News Type: In Our Community