AAS Honors John Johnson for Observational Astronomical Research

John Johnson, assistant professor of astronomy at Caltech, has been named the recipient of the American Astronomical Society's 2012 Newton Lacy Pierce Prize, which is awarded for outstanding achievement in observational astronomical research based on measurements of radiation from an astronomical object.

According to the award citation, Johnson was selected "for major contributions to understanding fundamental relationships between extrasolar planets and their parent stars."

Johnson's primary research focus is on the detection and characterization of exoplanets. Recently, he led a team of astronomers that discovered the three smallest confirmed planets ever detected outside our solar system.

"I'm proud because this reflects well on my entire team here at Caltech, as well as my collaborators at other institutions," said Johnson. "We put in a lot of hard work and we strive to produce the best, highest impact, and most trustworthy exoplanet science out there. This award is a nice validation that we're on the right track."

As the recipient of the Pierce Prize, Johnson will receive a cash award and has been invited to give the plenary talk at the AAS meeting in Long Beach next year.

"I can remember sitting in a plenary talk as a grad student thinking about how nice it would be to one day do research worthy of such an audience," says Johnson. "It looks like I've arrived."


Allison Benter

Astronomers Release Unprecedented Data Set on Celestial Objects that Brighten and Dim

PASADENA, Calif.—Astronomers from the California Institute of Technology (Caltech) and the University of Arizona have released the largest data set ever collected that documents the brightening and dimming of stars and other celestial objects—two hundred million in total.

The night sky is filled with objects like asteroids that dash across the sky and others, like exploding stars and variable stars, that flash, dim, and brighten. Studying such phenomena can help astronomers better understand the evolution of stars, massive black holes in the centers of galaxies, and the structure of the Milky Way. These types of objects were also essential for the recent discovery of dark energy, the mysterious energy that dominates the expansion of the universe, which earned last year's Nobel Prize.

Using the Catalina Real-Time Transient Survey (CRTS), a project led by Caltech, the astronomers systematically scanned the heavens for these dynamic objects, producing an unprecedented data set that will allow scientists worldwide to pursue new research.

"Exploring variable objects and transient phenomena like stellar explosions is one of the most vibrant and growing research areas in astrophysics," says S. George Djorgovski, professor of astronomy at Caltech and principal investigator on the CRTS. "In many cases, this yields unique information needed to understand these objects."

The new data set is based on observations taken with the 0.7-meter telescope on Mt. Bigelow in Arizona. The observations were part of the Catalina Sky Survey (CSS), a search for Near-Earth Objects (NEOs), asteroids that may pose a threat to Earth, conducted by astronomers at the University of Arizona. By repeatedly taking pictures of large swaths of the sky and comparing these images to previous ones, the CRTS is able to monitor the brightness of about half a billion objects, allowing it to search for those that dramatically brighten or dim. In this way, the CRTS team identified tens of thousands of variables, maximizing the science that can be gleaned from the original data.
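The comparison step described above can be sketched in a few lines of code. The catalog layout, object names, magnitudes, and threshold below are hypothetical, chosen only to illustrate the idea of flagging large brightness changes between observations; the actual CRTS pipeline is far more sophisticated:

```python
def flag_transients(reference, new, threshold_mag=2.0):
    """Flag objects whose brightness changed by more than `threshold_mag`
    magnitudes between a reference catalog and a new observation.
    A toy sketch of the comparison step, not the CRTS pipeline itself."""
    flagged = []
    for obj_id, ref_mag in reference.items():
        if obj_id in new and abs(new[obj_id] - ref_mag) >= threshold_mag:
            flagged.append(obj_id)
    return flagged

# Hypothetical catalogs: magnitudes from two epochs of the same sky patch.
ref = {"star_a": 15.0, "star_b": 17.2}
new = {"star_a": 15.1, "star_b": 12.9}  # star_b brightened dramatically
hits = flag_transients(ref, new)
```

Repeating this comparison over every image pair, for half a billion objects, is what turns an asteroid survey's raw frames into a catalog of variables and transients.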

The new data set contains the so-called brightness histories of a total of two hundred million stars and other objects, incorporating over 20 billion independent measurements. "This set of objects is an order of magnitude larger than the largest previously available data sets of their kind," says Andrew Drake, a staff scientist at Caltech and lead author on a poster to be presented at the meeting of the American Astronomical Society in Austin on January 12. "It will enable many interesting studies by the entire astronomical community."

One of the unique features of the survey, Drake says, is that it emphasizes an open-data philosophy. "We discover transient events and publish them electronically in real time, so that anyone can follow them and make additional discoveries," he explains.

"It is a good example of scientific-data sharing and reuse," Djorgovski says. "We hope to set an example of how data-intensive science should be done in the 21st century."

The data set includes over a thousand exploding stars called supernovae, including many unusual and novel types; hundreds of so-called cataclysmic variables, pairs of stars in which one spills matter onto a companion white dwarf; tens of thousands of other variable stars; and dwarf novae, binary stars that dramatically change in brightness.

"We take hundreds of images every night from each of our telescopes as we search for hazardous asteroids," adds Edward Beshore, principal investigator of the University of Arizona's asteroid-hunting CSS. "As far back as 2005, we were asking if this data could be useful to the community of astronomers. We are delighted that we could forge this partnership. In my estimation, it has been a great success and is a superb example of finding ways to get greater value from taxpayers' investments in basic science."

The team says they soon plan to release additional data taken with a 1.5-meter telescope on Mt. Lemmon in Arizona and a 0.5-meter telescope at Siding Spring in Australia.

In addition to Djorgovski, Drake, and Beshore, the team includes staff scientist Ashish Mahabal, computational scientist Matthew Graham, postdoctoral scholar Ciro Donalek, and research scientist Roy Williams from Caltech. Researchers from other institutions include Steve Larson, Andrea Boattini, Alex Gibbs, Al Grauer, Rik Hill, and Richard Kowalski from the University of Arizona; Mauricio Catelan from Universidad Católica in Chile; Eric Christensen from the Gemini Observatory in Hawaii; and Jose Prieto from Princeton University. The Caltech research is supported by the National Science Foundation. The work done at the University of Arizona is supported by NASA.

Marcus Woo

Aschbacher Receives Steele Prize

Michael Aschbacher, the Shaler Arthur Hanisch Professor of Mathematics, has been awarded the 2012 Leroy P. Steele Prize for Mathematical Exposition by the American Mathematical Society (AMS). Aschbacher, along with coauthors Richard Lyons of Rutgers University, Steve Smith of the University of Illinois at Chicago, and Ronald Solomon of Ohio State University, was recognized for a paper on the classification of certain types of groups, which are fundamental mathematical objects.

"AMS prizes are big honors, and we are proud that Michael has gotten this prize," says Barry Simon, the IBM Professor of Mathematics and Theoretical Physics. Over the last few decades, Aschbacher has played a leading role in the classification of so-called finite simple groups, an achievement that, Simon says, is one of the major mathematical accomplishments of the last 50 years, earning Aschbacher the Rolf Schock Prize from the Royal Swedish Academy of Sciences last year. "The mathematical proof involved in that work is so complicated that even experts in the specialty haven't absorbed it all," Simon says. "What Aschbacher and his coauthors got the Steele Prize for is an exposition for professional mathematicians that's one part of a wider program. Making this material accessible to mathematicians in different fields of mathematics is an important accomplishment."

In addition to the Schock Prize, Aschbacher has also received the Cole Prize in Algebra from AMS, and he is a member of the National Academy of Sciences and the American Academy of Arts and Sciences. The Steele Prize was awarded for the paper titled "The classification of finite simple groups: groups of characteristic 2 type," published in Mathematical Surveys and Monographs, Vol. 172.

Marcus Woo

The "Supernova of a Generation" Shows Its Stuff

Astronomers determine how the brightest and closest stellar explosion in 25 years blew up

PASADENA, Calif.—It was the brightest and closest stellar explosion seen from Earth in 25 years, dazzling professional and backyard astronomers alike. Now, thanks to this rare discovery—which some have called the "supernova of a generation"—astronomers have the most detailed picture yet of how this kind of explosion happens. Known as a Type Ia supernova, this type of blast is an essential tool that allows scientists to measure the expansion of the universe and understand the very nature of the cosmos.

"What caused these explosions has divided the astronomical community deeply," says Shri Kulkarni, the John D. and Catherine T. MacArthur Professor of Astronomy and Planetary Sciences. But this new supernova—dubbed SN2011fe—can help astronomers solve this longstanding mystery. "SN2011fe is like the Rosetta Stone of Type Ia supernovae," says Kulkarni, who is also the principal investigator on the Palomar Transient Factory (PTF). Led by the California Institute of Technology (Caltech), the PTF is designed to survey the skies for transient flashes of light that last for a few days or months, such as those emitted by exploding stars.

On August 24, the PTF team discovered the supernova in one of the arms of the Pinwheel Galaxy (also called M101), 21 million light years away. They caught the supernova just 11 hours after it exploded.

"Never before have we seen a stellar thermonuclear explosion so soon after it happened," says Lars Bildsten, professor of theoretical astrophysics at the Kavli Institute for Theoretical Physics at UC Santa Barbara, and member of the PTF team, which described its supernova findings in the December 15 issue of the journal Nature.

The PTF team uses an automated system to search for supernovae, and because they were able to point their telescopes at SN2011fe so quickly after its detonation, the astronomers were able to put together a blow-by-blow analysis of the explosion, determining that the supernova involves a dense, Earth-sized object called a white dwarf and, most likely, a main-sequence star (a star in the main stage of its life).

Scientists have long suspected that Type Ia supernovae involve a binary system of two stars in orbit around each other, with one of those stars being a white dwarf. The white dwarf, which is made out of carbon and oxygen, explodes when matter from its companion star spills over onto its surface. But no one is sure what kind of star the companion is. Scientists have suggested that it's another white dwarf, a main-sequence star, a helium star, or a star in a late life stage that's puffed up into a red giant.

Still, because the explosion always involves a white dwarf, its overall brightness and behavior is relatively predictable, making it a useful tool for measuring distances. Since all Type Ia supernovae produce about the same amount of light, those that appear dimmer must be farther away. In this way, by measuring the brightness of supernovae, astronomers can use them as cosmic meter sticks to determine the size of the universe—and how fast it's expanding. In fact, the work that earned the 2011 Nobel Prize in physics—the discovery that the expansion of the universe is speeding up—was based on observations using Type Ia supernovae.
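The standard-candle logic above reduces to the distance modulus: an object's apparent magnitude m, its absolute magnitude M, and its distance d in parsecs are related by m − M = 5·log₁₀(d/10 pc). The fiducial absolute magnitude of about −19.3 used below is a commonly quoted value for Type Ia supernovae, assumed here for illustration rather than taken from the article:

```python
import math

def luminosity_distance_pc(apparent_mag, absolute_mag=-19.3):
    """Distance in parsecs from the distance modulus m - M = 5*log10(d / 10 pc).
    The default absolute magnitude -19.3 is a commonly quoted fiducial
    value for Type Ia supernovae (an assumption, not from the article)."""
    return 10 ** ((apparent_mag - absolute_mag) / 5 + 1)

# Larger apparent magnitude means dimmer, which means farther away:
d_dim = luminosity_distance_pc(20.0)
d_bright = luminosity_distance_pc(19.0)
```

Because every Type Ia blast has roughly the same M, measuring m alone pins down d, which is exactly why these supernovae work as cosmic meter sticks.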

"This discovery is exciting because the supernova's infancy and proximity allows us to directly see what the progenitor system is," explains Mansi Kasliwal, an astronomer at the Carnegie Institution for Science who is a recent Caltech doctoral graduate and a coauthor on the paper. "We have expected for a while that a Type Ia supernova involves a carbon-oxygen white dwarf, but now we have direct evidence."

In the case of SN2011fe, the researchers were also able to deduce, by process of elimination, that the companion star is most likely a main-sequence star. How do they know?

If the companion were a red giant, the explosion of the white dwarf would send a shock wave through the red giant, heating it. This scenario would have generated several tens of times more light than the astronomers observed. Additionally, it happens that the Hubble Space Telescope took images of the location where SN2011fe lived before it blew up. When the researchers looked at the data, they found no evidence of red giants or helium stars.

If the companion were another white dwarf, the interactions between the companion and the explosion would produce light in the optical and ultraviolet wavelengths. Since none of this sort of radiation was seen coming from SN2011fe, it is less likely that the companion was a white dwarf.

These results—which they describe in a companion paper in the same issue of Nature—along with X-ray and radio observations that also fail to see any evidence for red giants or helium stars, rule those out as the companion. Caltech postdoc Assaf Horesh is the lead author on the paper describing the X-ray and radio data, which will be published in The Astrophysical Journal.

The astronomers have also observed, in unprecedented detail, the material that's blown off during the explosion. In particular, the team detected oxygen hurtling out from the supernova at speeds of over 20,000 kilometers per second—the first time anyone has seen high-speed oxygen coming from a Type Ia supernova, according to the researchers. "These observations probe the thin, outermost layers of the explosion," Bildsten says. "These are the parts that are moving the fastest, for which we have never been able to see this mix of atomic elements."

Not only was the supernova detected quickly, but the data processing—performed by researchers led by Peter Nugent, staff scientist at Lawrence Berkeley National Laboratory—was also done within hours. The machine-learning algorithms developed by Joshua Bloom, an associate professor at UC Berkeley, also helped make the fast find possible. And because the astronomers caught the blast so soon after it ignited, and because it's so close, the researchers say SN2011fe will become one of the best-studied supernovae ever.

"The rapid discovery and classification of SN2011fe—all on the same night—is a testament to the great teamwork between all the researchers from over a half a dozen institutions," Kulkarni says. "The future looks very bright. Soon we should be finding supernovae at an even younger age and thereby better understand how these explosions happen."

Nugent is the lead author on the Nature paper, which is titled, "Supernova 2011fe from an exploding carbon-oxygen white dwarf star." The lead author on the companion paper, "Exclusion of a luminous red giant as a companion star to the progenitor of supernova SN 2011fe," is Weidong Li of UC Berkeley. The Astrophysical Journal paper is titled, "Early radio and X-ray observations of the youngest nearby type Ia supernova PTF11kly (SN 2011fe)."

The Palomar Transient Factory (PTF) uses the 48-inch Oschin Schmidt telescope and the 60-inch telescope of the Palomar Observatory of Caltech for its observations and is a collaboration between Caltech, Columbia University, Las Cumbres Observatory Global Telescope, Lawrence Berkeley National Laboratory, Oxford University, UC Berkeley, and the Weizmann Institute of Science.

NOTE: Weidong Li died on December 12, just before the publication of these papers.

Marcus Woo

More Clues in the Hunt for the Higgs

Physicists unveil the largest amount of data ever presented for the Higgs search

PASADENA, Calif.—Physicists have announced that the Large Hadron Collider (LHC) has produced yet more tantalizing hints of the existence of the Higgs boson. At the European Center for Nuclear Research (CERN) in Geneva, the international team of thousands of scientists—including many from the California Institute of Technology (Caltech)—unveiled for the first time all the data taken over the last year from the two main detectors at the LHC: the Compact Muon Solenoid (CMS) and ATLAS (A Toroidal LHC ApparatuS). The results represent the largest amount of data ever presented for the Higgs search.

The Higgs boson is a hypothesized particle that endows every other particle with mass, and is the presumed last piece of the so-called Standard Model, the theory that describes how every particle interacts. According to physicists, the discovery of the Higgs boson, in whatever form it may take, is crucial for understanding the fundamental laws of physics.  

The team says they have seen what they call "excess events"—a slight surplus of particle-collision events over what would be expected if the Higgs didn't exist. This suggests that the particle might have a mass between 115 and 127 gigaelectron volts (GeV, a unit of mass; in comparison, the mass of a proton is about 1 GeV). While the physicists can't yet claim discovery of the elusive particle, they are closer than ever, having ruled out a very large range of the Higgs's possible masses with great certainty.

But if physicists do not find the Higgs in the remaining mass range between 115 and 127 GeV, then it means the particle—if it exists at all—is of a more exotic form, requiring new theories of physics. Although remarkably successful, the Standard Model is incomplete, and an exotic form of the Higgs could help point the way toward a more complete theory—which physicists say is an exciting challenge.

"This is the beginning of a game-changing time for particle physics," says Harvey Newman, professor of physics. Along with professor of physics Maria Spiropulu, Newman leads Caltech's group that works on the CMS detector.

The LHC searches for the Higgs boson by slamming together protons at near-light speeds, producing new particles like the Higgs in the process. The problem is that the particle is exceptionally short-lived, decaying into other smaller particles within a tiny fraction of a second of its birth. To find the Higgs, physicists have to pick through the remains of each proton collision and reconstruct what happened.

If the collisions successfully produce a Higgs, then the particle can decay in several ways, depending on its mass. The Caltech team helped analyze three of these decays, called channels, in which the Higgs decays into two photons, into a pair of particles called W bosons, or into another pair called Z bosons.

Graduate students Yong Yang, Yousi Ma, Jan Veverka, and Vladlen Timciuc are all studying the photon-photon channel, searching for the Higgs as well as possible signs of new physics phenomena. Caltech Tolman Postdoctoral Scholar Emanuele Di Marco helped lead the analysis of the W-boson channel for the CMS team. Spiropulu, graduate student Chris Rogan, and other colleagues have also done preparatory studies for the Z-boson channel, which they published in the Physical Review in 2010.

"The impressive data produced by the CMS is the result of the experiment's ability to cleanly identify and precisely measure the energies of photons, electrons, and positrons," says Adi Bornheim, a Caltech staff scientist who heads the electromagnetic calorimeter (ECAL) detector group at CMS. Composed of 76,000 crystal detectors and weighing in at more than 90 tons, the calorimeter measures the energy of the electrons and photons produced by LHC collisions with high resolution.

"For the last 17 years, our group has led the way in constructing and calibrating the calorimeter, which has to be extremely precise for these demanding studies," adds Marat Gataullin, an assistant scientist at Caltech.

The collision experiments in 2010 and 2011 at the LHC operated only at half their designed energy levels, the researchers say. Still, the experiments exceeded design specifications for the proton beam's focus and intensity, producing about one quadrillion (a million billion) proton collisions and resulting in millions of gigabytes of data. Even though the latest experiments were more complicated than ever, the scientists in the ATLAS and CMS teams say they were able to analyze the data in record time, using new technology pioneered and developed by Newman's group at Caltech.

In order to confirm once and for all whether the Higgs exists as physicists understand it—or if they'll need to come up with new theories—the LHC will need to be cranked up to collide protons with more energy. "We now need more data, three or four times the data that we expect the LHC to deliver in 2012," Newman says. The LHC is currently operating at around seven teraelectron volts (a TeV is a thousand times larger than a GeV), and scientists are considering boosting the energy in the next year, which will help in the search. The LHC is designed to smash protons using energies as high as 14 TeV. "We foresee reaching 14 TeV by 2015, boosting the intensity by ten times starting in around 2022, and cranking up the energy to 33 TeV starting about 20 years from now," he adds. "This will open a vast new realm for exploration, and will surely revolutionize our understanding of the nature of matter and forces at the most basic level."

The Caltech CMS group, which includes eight graduate students and several postdocs, engineers, and technical staff, is working on many other projects in addition to the Higgs search, such as exploring supersymmetry (a theory that says every particle has a "supersymmetric" partner), searching for other exotic, theoretical particles, and developing new kinds of particle detectors.

"We're grateful for the achievements of the LHC team and our colleagues with CMS," Spiropulu says. "We are working hard on the final stage of improving the experiments and on publishing the results, both about the Higgs and possible new, exciting theories of physics, in the coming weeks and months."

For more information, go to the Caltech CMS website (http://hep.caltech.edu/cms), the CMS public site (http://cms.web.cern.ch/), and the American CMS site (http://uscms.org).

Marcus Woo

High-Energy Physicists Set Record for Network Data Transfer

With a sustained data rate of 186 gigabits per second, high-energy physicists demonstrate the efficient use of long-range networks to support cutting-edge science

PASADENA, Calif.—Researchers have set a new world record for data transfer, helping to usher in the next generation of high-speed network technology. At the SuperComputing 2011 (SC11) conference in Seattle during mid-November, the international team transferred data in opposite directions at a combined rate of 186 gigabits per second (Gbps) in a wide-area network circuit. The rate is equivalent to moving two million gigabytes per day, fast enough to transfer nearly 100,000 full Blu-ray disks—each with a complete movie and all the extras—in a day.
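The article's figures are straightforward to check with unit arithmetic. The 25 GB per disk used below assumes a single-layer Blu-ray, an assumption that roughly reproduces the "nearly 100,000 disks" estimate:

```python
# Convert a sustained network rate to a daily data volume.
gbps = 186                         # combined two-way rate, gigabits per second
gb_per_sec = gbps / 8              # bits -> bytes: 23.25 GB/s
gb_per_day = gb_per_sec * 86_400   # seconds per day -> ~2.0 million GB/day
disks_per_day = gb_per_day / 25    # 25 GB single-layer Blu-ray (assumption)
```

At 23.25 GB/s, a day of transfer moves just over two million gigabytes, or on the order of 80,000 single-layer Blu-ray disks, consistent with the figures quoted above.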

The team of high-energy physicists, computer scientists, and network engineers was led by the California Institute of Technology (Caltech), the University of Victoria, the University of Michigan, the European Center for Nuclear Research (CERN), Florida International University, and other partners.

According to the researchers, the achievement will help establish new ways to transport the increasingly large quantities of data that traverse continents and oceans via global networks of optical fibers. These new methods are needed for the next generation of network technology—which allows transfer rates of 40 and 100 Gbps—that will be built in the next couple of years.

"Our group and its partners are showing how massive amounts of data will be handled and transported in the future," says Harvey Newman, professor of physics and head of the high-energy physics (HEP) team. "Having these tools in our hands allows us to engage in realizable visions others do not have. We can see a clear path to a future others cannot yet imagine with any confidence."

Using a 100-Gbps circuit set up by Canada's Advanced Research and Innovation Network (CANARIE) and BCNET, a non-profit, shared IT services organization, the team was able to reach transfer rates of 98 Gbps between the University of Victoria Computing Centre located in Victoria, British Columbia, and the Washington State Convention Centre in Seattle. With a simultaneous data rate of 88 Gbps in the opposite direction, the team reached a sustained two-way data rate of 186 Gbps between two data centers, breaking the team's previous peak-rate record of 119 Gbps set in 2009.

In addition, partners from the University of Florida, the University of California at San Diego, Vanderbilt University, Brazil (Rio de Janeiro State University and the São Paulo State University), and Korea (Kyungpook National University and the Korean Institute for Science and Technology Information) helped with a larger demonstration, transferring massive amounts of data between the Caltech booth at the SC11 conference and other locations within the United States, as well as in Brazil and Korea.

The fast transfer rate is also crucial for dealing with the tremendous amounts of data coming from the Large Hadron Collider (LHC) at CERN, the particle accelerator that physicists hope will help them discover new particles and better understand the nature of matter, space, and time, solving some of the biggest mysteries of the universe. More than 100 petabytes (more than four million Blu-ray disks) of data have been processed, distributed, and analyzed using a global grid of 300 computing and storage facilities located at laboratories and universities around the world, and the data volume is expected to rise a thousand-fold as physicists crank up the collision rates and energies at the LHC.

"Enabling scientists anywhere in the world to work on the LHC data is a key objective, bringing the best minds together to work on the mysteries of the universe," says David Foster, the deputy IT department head at CERN.

"The 100-Gbps demonstration at SC11 is pushing the limits of network technology by showing that it is possible to transfer petascale particle physics data in a matter of hours to anywhere around the world," adds Randall Sobie, a research scientist at the Institute of Particle Physics in Canada and team member.

The key to discovery, the researchers say, is in picking out the rare signals that may indicate new physics discoveries from a sea of potentially overwhelming background noise caused by already understood particle interactions. To do this, individual physicists and small groups located around the world must repeatedly access—and sometimes extract and transport—multiterabyte data sets on demand from petabyte data stores. That's equivalent to grabbing hundreds of Blu-ray movies all at once from a pool of hundreds of thousands. The HEP team hopes that the demonstrations at SC11 will pave the way toward more effective distribution and use of the masses of LHC data for discoveries.

"By sharing our methods and tools with scientists in many fields, we hope that the research community will be well positioned to further enable their discoveries, taking full advantage of 100 Gbps networks as they become available," Newman says. "In particular, we hope that these developments will afford physicists and young students the opportunity to participate directly in the LHC's next round of discoveries as they emerge."

More information about the demonstration can be found at http://supercomputing.caltech.edu.

This work was supported by the U.S. Department of Energy Office of Science and the National Science Foundation, in cooperation with the funding agencies of the international partners. Equipment and support was also provided by the team's industry partners: CIENA, Brocade, Mellanox, Dell and Force10 (now Dell/Force10), and Supermicro.

Harvey B. Newman, Professor of Physics
(626) 395-6656

Marcus Woo

Snowflake Science

Caltech Physicist Explains Why Snowflakes Are So Thin and Flat

We've all heard that no two snowflakes are alike. Caltech professor of physics Kenneth Libbrecht will tell you that this has to do with the ever-changing conditions in the clouds where snow crystals form. Now Libbrecht, widely known as the snowflake guru, has shed some light on a grand puzzle in snowflake science: why the canonical, six-armed "stellar" snowflakes wind up so thin and flat.

Few people pay close attention to the form that snow crystals—a.k.a. snowflakes—take as they fall from the sky. But in the late 1990s, Libbrecht's interest in the tiny white doilies was piqued. The physicist, who until then had worked to better understand the sun and to detect cosmic gravitational waves, happened across an article describing one of many common snowflake structures—a capped column, which looks something like an icy thread bobbin under the microscope. Such a snowflake starts out, as all do, as a hexagonal crystal of ice. As it grows, accumulating water molecules from the air, it forms a tiny column. Then it encounters conditions elsewhere in the cloud that promote the growth of platelike structures, so it ends up with platelike caps at both ends of the column.

"I read about capped columns, and I just thought, 'I grew up in snow country. How come I've never seen one of these?'" Libbrecht says. The next time he went home to North Dakota, he grabbed a magnifying glass and headed outside. "I saw capped columns. I saw all these different snowflakes," he says. "It's very easy. It's just that I had never looked."

Since then, he has published seven books of snowflake photographs, including a field guide for other eager snowflake watchers. And his library of snowflake images boasts more than 10,000 photographs. But Libbrecht is a physicist, so beyond capturing stunning pictures, he wanted to understand the molecular dynamics that dictate how ice crystals grow. For that, he's developed methods for growing and analyzing snowflakes in the lab.

Now Libbrecht believes he's on his way to explaining one of the major outstanding questions of snowflake science—a question at the heart of his original interest in capped columns all those years ago. Scientists have known for more than 75 years that under conditions typically found in snowflake-producing clouds, ice crystals follow a standard pattern of growth: near -2°C, they grow into thin, platelike forms; near -5°C, they create slender columns and needles; near -15°C, they become really thin plates; and at temperatures below -30°C, they're back to columns. But no one has been able to explain why such relatively small changes in temperature yield such dramatic changes in snowflake structure.

Libbrecht started his observations with the thinnest, largest platelike snowflakes, which form around -15°C in high humidity. Some of these snowflakes are about as sharp as the edge of a razor blade. "What I found in my experiments," Libbrecht says, "is a growth instability, or sharpening effect." He noticed that as a snow crystal develops at -15°C, the top edge starts to develop a little bump of a ledge, which gets sharp at the tip. Basically, the corners stick out a bit farther toward the moist air, so they grow faster. And a cycle begins: "As soon as the ledge gets a little bit sharper, then it grows faster, and if it grows faster, then it gets sharper still, creating a positive feedback effect," Libbrecht says. "In the atmosphere, it would just get bigger and bigger and thinner and thinner, and eventually you'd get a really nice, beautiful snowflake."

If this sharpening effect occurs at other temperatures, which is likely, then it explains how small changes in temperature can yield such wildly varying snowflake structures. "The sharpening effect can yield thin plates or slender columns, just by changing directions," Libbrecht says. "That's a big piece of the puzzle, because now you don't have to make these enormous changes to get different structures. You just have to explain why the instability tips to produce plates at some temperatures, and tips to make columns at other temperatures. The flip-flopping of the sharpening effect nicely explains how the ice growth rates can change by a factor of 1000 when the temperature changes by just a few degrees."
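The feedback loop Libbrecht describes can be caricatured as a two-line recurrence: growth speed scales with tip sharpness, and that growth in turn sharpens the tip, so sharpness compounds geometrically. The feedback constant below is arbitrary, and this is only a toy illustration of runaway positive feedback, not Libbrecht's actual model:

```python
def tip_growth(steps, feedback=0.2, s0=1.0):
    """Toy positive-feedback loop: each step, growth speed is proportional
    to tip sharpness, and growing faster makes the tip sharper still.
    Illustrative only; not a physical model of ice crystal growth."""
    sharpness = s0
    history = [s0]
    for _ in range(steps):
        growth = feedback * sharpness   # sharper tip -> faster growth
        sharpness += growth             # faster growth -> sharper tip
        history.append(sharpness)
    return history

h = tip_growth(10)  # sharpness grows geometrically, by (1 + feedback) per step
```

Even a small per-step feedback compounds into order-of-magnitude changes after a few dozen steps, which is the flavor of how a modest instability can produce the factor-of-1000 growth-rate differences described above.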

Libbrecht can't yet fully explain the underlying molecular mechanisms that produce the sharpening effect or exactly why different temperatures lead to sharpening on different faces of growing snow crystals. "But," he says, "this is a real advance in snowflake science. Now you can explain why the plates are so thin and the columns are so tall."

Kimm Fesenmaier

Caltech-Led Team of Astronomers Finds 18 New Planets

Discovery is the largest collection of confirmed planets around stars more massive than the sun

PASADENA, Calif.—Discoveries of new planets just keep coming and coming. Take, for instance, the 18 recently found by a team of astronomers led by scientists at the California Institute of Technology (Caltech).

"It's the largest single announcement of planets in orbit around stars more massive than the sun, aside from the discoveries made by the Kepler mission," says John Johnson, assistant professor of astronomy at Caltech and the first author on the team's paper, which was published in the December issue of The Astrophysical Journal Supplement Series. The Kepler mission is a space telescope that has so far identified more than 1,200 possible planets, though the majority of those have not yet been confirmed.

Using the Keck Observatory in Hawaii—with follow-up observations using the McDonald and Fairborn Observatories in Texas and Arizona, respectively—the researchers surveyed about 300 stars. They focused on so-called "retired" A-type stars, which are more than one and a half times as massive as the sun. These stars are just past the main stage of their lives—hence, "retired"—and are now puffing up into what are called subgiants.

To look for planets, the astronomers searched for stars of this type that wobble, which could be caused by the gravitational tug of an orbiting planet. By searching the wobbly stars' spectra for Doppler shifts—the lengthening and contracting of wavelengths due to motion away from and toward the observer—the team found 18 planets with masses similar to Jupiter's.
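The Doppler relation underlying the technique is, for velocities far below the speed of light, Δλ/λ ≈ v/c. This minimal sketch uses illustrative numbers rather than the survey's actual measurements:

```python
# Non-relativistic Doppler shift: delta_lambda / lambda_rest ≈ v / c.
# A Jupiter-mass planet typically induces a stellar wobble of tens of m/s.

C = 299_792_458.0  # speed of light, m/s

def radial_velocity(lambda_observed: float, lambda_rest: float) -> float:
    """Line-of-sight velocity (m/s) from an observed spectral-line shift.
    Positive means the star is moving away from the observer."""
    return C * (lambda_observed - lambda_rest) / lambda_rest

# Illustrative: a 656.3 nm hydrogen line shifted by a 30 m/s wobble
# moves by less than a ten-thousandth of a nanometer, which is why
# such precise spectrographs are needed.
shift = 656.3e-9 * 30.0 / C          # expected wavelength shift, meters
v = radial_velocity(656.3e-9 + shift, 656.3e-9)
print(f"recovered velocity: {v:.1f} m/s")
```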

This new bounty marks a 50 percent increase in the number of known planets orbiting massive stars and, according to Johnson, provides an invaluable population of planetary systems for understanding how planets—and our own solar system—might form. The researchers say that the findings also lend further support to the theory that planets grow from seed particles that accumulate gas and dust in a disk surrounding a newborn star.

According to this theory, tiny particles start to clump together, eventually snowballing into a planet. If this is the true sequence of events, the characteristics of the resulting planetary system—such as the number and size of the planets, or their orbital shapes—will depend on the mass of the star. For instance, a more massive star would mean a bigger disk, which in turn would mean more material to produce a greater number of giant planets.

In another theory, planets form when large amounts of gas and dust in the disk spontaneously collapse into big, dense clumps that then become planets. But in this picture, it turns out that the mass of the star doesn't affect the kinds of planets that are produced.

As the number of discovered planets has grown, astronomers have found that stellar mass does seem to be important in determining the prevalence of giant planets. The newly discovered planets further support this pattern—and are therefore consistent with the first theory, the one stating that planets are born from seed particles.

"It's nice to see all these converging lines of evidence pointing toward one class of formation mechanisms," Johnson says.

There's another interesting twist, he adds: "Not only do we find Jupiter-like planets more frequently around massive stars, but we find them in wider orbits." If you took a sample of 18 planets around sunlike stars, he explains, half of them would orbit close to their stars. But in the cases of the new planets, all are farther away, at least 0.7 astronomical units from their stars. (One astronomical unit, or AU, is the distance from Earth to the sun.)

In systems with sunlike stars, gas giants like Jupiter acquire close orbits when they migrate toward their stars. According to theories of planet formation, gas giants could only have formed far from their stars, where it's cold enough for their constituent gases and ices to exist. So for gas giants to orbit nearer to their stars, certain gravitational interactions have to take place to pull these planets in. Then, some other mechanism—perhaps the star's magnetic field—has to kick in to stop them from spiraling into a fiery death.

The question, Johnson says, is why this doesn't seem to happen with so-called hot Jupiters orbiting massive stars, and whether that dearth is due to nature or nurture. In the nature explanation, Jupiter-like planets that orbit massive stars just wouldn't ever migrate inward. In the nurture interpretation, the planets would move in, but there would be nothing to prevent them from plunging into their stars. Or perhaps the stars evolve and swell up, consuming their planets. Which is the case? According to Johnson, subgiants like the A stars they were looking at in this paper simply don't expand enough to gobble up hot Jupiters. So unless A stars have some unique characteristic that would prevent them from stopping migrating planets—such as a lack of a magnetic field early in their lives—it looks like the nature explanation is the more plausible one.

The new batch of planets shows yet another interesting pattern: their orbits are mainly circular, while planets around sunlike stars span a wide range of circular to elliptical paths. Johnson says he's now trying to find an explanation.

For Johnson, these discoveries have been a long time coming. This latest find, for instance, comes from an astronomical survey that he started while a graduate student; because these planets have wide orbits, they can take a couple of years to make a single revolution, meaning that it can also take quite a few years before their stars' periodic wobbles become apparent to an observer. Now, the discoveries are finally coming in. "I liken it to a garden—you plant the seeds and put a lot of work into it," he says. "Then, a decade in, your garden is big and flourishing. That's where I am right now. My garden is full of these big, bright, juicy tomatoes—these Jupiter-sized planets."
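The long wait Johnson describes follows from Kepler's third law: in solar-system units, the orbital period in years is the square root of a³/M★, with the semimajor axis a in astronomical units and the stellar mass M★ in solar masses. A quick check with illustrative values, not figures from the survey:

```python
# Kepler's third law in solar-system units:
#   P [years] = sqrt(a^3 / M_star), a in AU, M_star in solar masses.
import math

def orbital_period_years(a_au: float, stellar_mass_suns: float) -> float:
    """Orbital period in years for a planet of negligible mass."""
    return math.sqrt(a_au**3 / stellar_mass_suns)

# Sanity check: Earth, 1 AU around 1 solar mass, takes 1 year.
# Illustrative wide orbit: 2 AU around a 1.5-solar-mass subgiant.
print(f"{orbital_period_years(2.0, 1.5):.1f} years per revolution")
```

A planet a couple of astronomical units out thus takes a couple of years per orbit, and several full orbits are needed before the star's periodic wobble stands out from the noise.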

The other authors on The Astrophysical Journal Supplement Series paper, "Retired A stars and their companions VII. Eighteen new Jovian planets," include former Caltech undergraduate Christian Clanton, who graduated in 2010; Caltech postdoctoral scholar Justin Crepp; and nine others from the Institute for Astronomy at the University of Hawaii; the University of California, Berkeley; the Center of Excellence in Information Systems at Tennessee State University; the McDonald Observatory at the University of Texas, Austin; and the Pennsylvania State University. The research was supported by the National Science Foundation and NASA.

Marcus Woo
Exclude from News Hub: 
News Type: 
Research News

Caltech Ranked First in Physical Sciences

Caltech's physical-sciences program is number one among world universities in this year's Times Higher Education rankings, sharing the top spot with Princeton.

"We're pleased that Caltech is recognized as one of the world's best universities in the physical sciences," says Tom Soifer, chair of the Division of Physics, Math and Astronomy. "We take great pride in our research and in educating the world's leading scientists of the future."

Last year, Caltech's physical-sciences program was second to Harvard. This year, Harvard drops to sixth while UC Berkeley, MIT, and Stanford fill out the top five. Times Higher Education has also listed Caltech as the top university in the world and has placed Caltech's engineering and technology program first.

In addition to physical sciences and engineering and technology, Times Higher Education World University Rankings 2011–2012 ranks four other subjects: arts and humanities; clinical, preclinical, and health; life sciences; and social sciences. Out of the top 50 physical-sciences programs, 27 are in the United States.

The rankings are based on data compiled by Thomson Reuters. For the complete list of the world's top 50 physical-sciences programs—as well as the rest of the rankings and all the performance indicators—go to the Times Higher Education website.

Marcus Woo
Exclude from News Hub: 
News Type: 
In Our Community

Particles and Pants

New Faculty Join PMA

Ryan Patterson is no stranger to Caltech. As an undergraduate, the Mississippi native studied physics, devoting a good part of his last two years to doing research with Professor of Physics Brad Filippone, building an electron gun to help calibrate an experiment that analyzed the decay of ultracold neutrons. "It was a lot of fun," Patterson recalls.

After graduating in 2000, Patterson got his PhD in physics at Princeton, where he studied the elusive neutrino, a nearly massless particle that zips around at almost the speed of light. He returned to Caltech as a postdoctoral scholar in 2007, and, in 2010, joined the faculty of the Division of Physics, Mathematics and Astronomy as an assistant professor. His work and day-to-day life as a professor at Caltech are an entirely new experience, he says: "It's completely different. It doesn't even really feel like the same place."

Patterson's research focuses on the mysterious nature of the neutrino. Produced in nuclear reactions—such as in stars or in nuclear power plants—neutrinos hardly interact with anything. In fact, billions of them are harmlessly surging through your body this very second.

Most of Patterson's attention is now centered on NOvA, a new neutrino experiment that's scheduled to start running in 2013. NOvA will measure so-called muon neutrinos that are being produced at Fermilab near Chicago. One of its main goals is to learn if the muon neutrinos are turning into another type called electron neutrinos. Measuring how many of these transformations take place—if at all—will help physicists determine a parameter called the mixing angle. Knowing this number is key to some of the big fundamental questions in physics, such as why the universe is full of matter instead of antimatter.
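In the standard two-flavor approximation (a simplification; NOvA's actual analysis involves all three neutrino flavors and matter effects), the transformation probability depends on the mixing angle as sketched below. All numerical values are illustrative, not experimental results:

```python
# Two-flavor approximation to neutrino oscillation:
#   P(nu_mu -> nu_e) = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
# with dm2 in eV^2, baseline L in km, and neutrino energy E in GeV.
import math

def oscillation_probability(theta: float, dm2_ev2: float,
                            baseline_km: float, energy_gev: float) -> float:
    """Probability that a muon neutrino is detected as an electron neutrino."""
    phase = 1.27 * dm2_ev2 * baseline_km / energy_gev
    return math.sin(2 * theta) ** 2 * math.sin(phase) ** 2

# Illustrative numbers in a NOvA-like regime: ~810 km baseline, ~2 GeV beam.
p = oscillation_probability(theta=0.15, dm2_ev2=2.4e-3,
                            baseline_km=810.0, energy_gev=2.0)
print(f"P(nu_mu -> nu_e) = {p:.3f}")
```

Because the probability scales with the square of the sine of twice the mixing angle, counting how many muon neutrinos arrive as electron neutrinos constrains the angle directly.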

According to current theory, the big bang should have created equal amounts of matter and antimatter. But when the two interact, they annihilate each other and produce energy, meaning that stars, planets, and us—all made out of ordinary matter—shouldn't exist. And since we do exist, there somehow must have been a bit more matter than antimatter at the beginning. "We should've all been annihilated at the beginning of the universe," Patterson says. "Neutrinos may hold the key as to why that asymmetry is there, and we're trying to understand that."

Vladimir Markovic comes to Caltech from the University of Warwick, where he was on the mathematics faculty for 10 years. He also spent time as an associate professor at SUNY Stony Brook. Having just arrived in Pasadena in August, the professor of mathematics is the newest member of the PMA faculty.

Markovic studies the shapes and structures of mathematical spaces called manifolds. (For example, a line is a one-dimensional manifold while a plane is a two-dimensional one.) For the mathematically minded, he's an expert in low-dimensional geometry and Teichmüller theory. In particular, he has worked with something called the "good pants homology," which involves a mathematical object with three holes that has the same topological properties as a pair of pants, which has a hole for the waist and two for the legs. He combines these structures, mathematically stitching the pants together at their holes to create new structures. For example, attaching two pairs of pants at their waists would result in a new structure with four holes (the four legs). "You can say that what I do for a living is gluing pants," Markovic quips. Gluing more and more pants together produces increasingly complicated surfaces, and his goal is to understand the properties of those surfaces.
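The "four holes" count can be checked with a standard Euler-characteristic computation; this bookkeeping is a textbook identity, not a description of Markovic's research method. A pair of pants (a three-holed sphere) has Euler characteristic -1, and gluing surfaces along circles adds their characteristics:

```python
# Euler-characteristic bookkeeping for gluing pants along boundary circles.
# A pair of pants (three-holed sphere) has Euler characteristic -1, and
# chi is additive under gluing along circles (a circle has chi = 0).

def glue(chi1: int, b1: int, chi2: int, b2: int, glued_circles: int = 1):
    """Return (chi, boundary_count) after gluing two surfaces along
    `glued_circles` pairs of boundary circles."""
    chi = chi1 + chi2                      # additive over circle gluings
    boundaries = b1 + b2 - 2 * glued_circles
    return chi, boundaries

PANTS = (-1, 3)   # (Euler characteristic, number of boundary circles)

# Two pairs of pants glued at their waists:
chi, b = glue(*PANTS, *PANTS)
genus = (2 - b - chi) // 2                 # from chi = 2 - 2g - b
print(chi, b, genus)
```

The result is a genus-0 surface with 4 boundary circles, matching the "four holes (the four legs)" described above; repeating the gluing drives the Euler characteristic down and the complexity up.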

Born in Germany, Markovic studied mathematics at the University of Belgrade, receiving his PhD in 1998. Math was one of his two passions in school; soccer was the other. Mathematics, he says, is also about competition—it's about seeking new challenges in much the same way that mountain climbers want to scale higher and higher peaks. "I like solving problems. It's really the big draw for me, and it's still what excites me."

Marcus Woo