Heather Knutson Wins Astronomy Award

Heather A. Knutson, an assistant professor of planetary science at Caltech, is the 2012 recipient of the Annie Jump Cannon Award in Astronomy. Knutson received the award at the 221st meeting of the American Astronomical Society (AAS), in Long Beach, California.

The Annie Jump Cannon Award is given to a North American female astronomer within five years of receiving her PhD in the year designated for the award, for outstanding research and the promise of future research. Knutson received a cash prize of $1,500 and an invitation to speak at the recent AAS meeting.

According to the award citation, Knutson is being recognized for her "pioneering work on the characterization of exoplanetary atmospheres. Her groundbreaking observations of wavelength-dependent thermal emission of exoplanets over large fractions of their orbit enable a longitudinal mapping of brightness to reveal details of atmospheric dynamics, energy transport, inversion layers, and chemical composition. This work has expanded the rich field of planetary characterization by providing new windows into the atmospheres of planets beyond the confines of our own solar system. It has inspired numerous other theoretical and observational investigations and will serve as an important technique used with current and future space observatories to gain fundamental insight into the properties of exoplanetary atmospheres."

"It was a pleasure to accept this award from the American Astronomical Society," says Knutson. "It is good to see that studies of exoplanetary atmospheres are gaining some positive attention in the astronomy community."

Knutson is one of the founding faculty members of Caltech's new Center for Planetary Astronomy.

Brian Bell
Course Ombudspeople Lunch
Friday, January 25, 2013
Annenberg 121

Research Update: Atomic Motions Help Determine Temperatures Inside Earth

In December 2011, Caltech mineral-physics expert Jennifer Jackson reported that she and a team of researchers had used diamond-anvil cells to compress tiny samples of iron—the main element of the earth's core. By squeezing the samples to reproduce the extreme pressures felt at the core, the team was able to get a closer estimate of the melting point of iron. At the time, the measurements that the researchers made were unprecedented in detail. Now, they have taken that research one step further by adding infrared laser beams to the mix.

The lasers serve as a heat source: sent through the compressed iron samples, they warm the metal to the point of melting. And because the earth's core consists of a solid inner region surrounded by a liquid outer shell, the melting temperature of iron at high pressure provides an important reference point for the temperature distribution within the earth's core.

"This is the first time that anyone has combined Mössbauer spectroscopy and heating lasers to detect melting in compressed samples," says Jackson, a professor of mineral physics at Caltech and lead author of a recent paper in the journal Earth and Planetary Science Letters that outlined the team's new method. "What we found is that iron, compared to previous studies, melts at higher temperatures than what has been reported in the past."

Earlier research by other teams done at similar compressions—around 80 gigapascals—reported a range of possible melting points that topped out around 2600 Kelvin (K). Jackson's latest study indicates an iron melting point at this pressure of approximately 3025 K, suggesting that the earth's core is likely warmer than previously thought.

Knowing more about the temperature, composition, and behavior of the earth's core is essential to understanding the dynamics of the earth's interior, including the processes responsible for maintaining the earth's magnetic field. While iron makes up roughly 90 percent of the core, the rest is thought to be nickel and light elements—like silicon, sulfur, or oxygen—that are alloyed, or mixed, with the iron.

To develop and perform these experiments, Jackson worked closely with the Inelastic X-ray and Nuclear Resonant Scattering Group at the Advanced Photon Source at Argonne National Laboratory in Illinois. By laser heating the iron sample in a diamond-anvil cell and monitoring the dynamics of the iron atoms via a technique called synchrotron Mössbauer spectroscopy (SMS), the researchers were able to pinpoint a melting temperature for iron at a given pressure. The SMS signal is sensitively related to the dynamical behavior of the atoms, and can therefore detect when a group of atoms is in a molten state.

She and her team have begun experiments on iron alloys at even higher pressures, using their new approach.

"What we're working toward is a very tight constraint on the temperature of the earth's core," says Jackson. "A number of important geophysical quantities, such as the movement and expansion of materials at the base of the mantle, are dictated by the temperature of the earth's core."

"Our approach is a very elegant way to look at melting because it takes advantage of the physical principle of recoilless absorption of X-rays by nuclear resonances—the basis of the Mössbauer effect—for which Rudolf Mössbauer was awarded the Nobel Prize in Physics," says Jackson. "This particular approach to study melting has not been done at high pressures until now."

Jackson's findings not only tell us more about our own planet but also suggest that other planets with iron-rich cores, like Mercury and Mars, may have warmer internal temperatures than previously thought.

Her paper, "Melting of compressed iron by monitoring atomic dynamics," was published in Earth and Planetary Science Letters on January 8, 2013.

Katie Neith

Faulty Behavior

New earthquake fault models show that "stable" zones may contribute to the generation of massive earthquakes

PASADENA, Calif.—In an earthquake, ground motion is the result of waves emitted when the two sides of a fault move—or slip—rapidly past each other, with an average relative speed of about three feet per second. Not all fault segments move so quickly, however—some slip slowly, through a process called creep, and are considered to be "stable," or not capable of hosting rapid earthquake-producing slip.  One common hypothesis suggests that such creeping fault behavior is persistent over time, with currently stable segments acting as barriers to fast-slipping, shake-producing earthquake ruptures. But a new study by researchers at the California Institute of Technology (Caltech) and the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) shows that this might not be true.

"What we have found, based on laboratory data about rock behavior, is that such supposedly stable segments can behave differently when an earthquake rupture penetrates into them. Instead of arresting the rupture as expected, they can actually join in and hence make earthquakes much larger than anticipated," says Nadia Lapusta, professor of mechanical engineering and geophysics at Caltech and coauthor of the study, published January 9 in the journal Nature.

She and her coauthor, Hiroyuki Noda, a scientist at JAMSTEC and previously a postdoctoral scholar at Caltech, hypothesize that this is what occurred in the 2011 magnitude 9.0 Tohoku-Oki earthquake, which was unexpectedly large.

Fault slip, whether fast or slow, results from the interaction between the stresses acting on the fault and friction, or the fault's resistance to slip. Both the local stress and the resistance to slip depend on a number of factors such as the behavior of fluids permeating the rocks in the earth's crust. So, the research team formulated fault models that incorporate laboratory-based knowledge of complex friction laws and fluid behavior, and developed computational procedures that allow the scientists to numerically simulate how those model faults will behave under stress.

"The uniqueness of our approach is that we aim to reproduce the entire range of observed fault behaviors—earthquake nucleation, dynamic rupture, postseismic slip, interseismic deformation, patterns of large earthquakes—within the same physical model; other approaches typically focus only on some of these phenomena," says Lapusta.

In addition to reproducing a range of behaviors in one model, the team also assigned realistic fault properties to the model faults, based on previous laboratory experiments on rock materials from an actual fault zone—the site of the well-studied 1999 magnitude 7.6 Chi-Chi earthquake in Taiwan.

"In that experimental work, rock materials from boreholes cutting through two different parts of the fault were studied, and their properties were found to be conceptually different," says Lapusta. "One of them had so-called velocity-weakening friction properties, characteristic of earthquake-producing fault segments, and the other one had velocity-strengthening friction, the kind that tends to produce stable creeping behavior under tectonic loading. However, these 'stable' samples were found to be much more susceptible to dynamic weakening during rapid earthquake-type motions, due to shear heating."
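The velocity-weakening versus velocity-strengthening distinction can be sketched with the standard steady-state rate-and-state friction law, in which the sign of the combined parameter a − b separates the two behaviors. This is only an illustrative simplification, not the study's full model (which also couples friction to fluid effects and dynamic weakening), and the parameter values below are assumed, not taken from the paper.

```python
import math

def steady_state_friction(v, mu0=0.6, a_minus_b=-0.004, v0=1e-6):
    """Steady-state friction coefficient at sliding speed v (m/s).

    a_minus_b < 0  -> velocity-weakening (earthquake-prone behavior)
    a_minus_b > 0  -> velocity-strengthening (stable creeping behavior)
    Parameter values here are illustrative assumptions.
    """
    return mu0 + a_minus_b * math.log(v / v0)

# With a - b < 0, friction drops as slip accelerates from creeping
# speeds (~nm/s) to coseismic speeds (~m/s), favoring runaway rupture.
slow = steady_state_friction(1e-9)  # friction during slow creep
fast = steady_state_friction(1.0)   # friction during rapid slip
```

Flipping the sign of `a_minus_b` reverses the inequality, which is why velocity-strengthening segments normally resist rupture; the study's point is that additional dynamic weakening at high slip speeds can override this.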

Lapusta and Noda used their modeling techniques to explore the consequences of having two fault segments with such lab-determined fault-property combinations. They found that the ostensibly stable area would indeed occasionally creep, and often stop seismic events, but not always. From time to time, dynamic rupture would penetrate that area in just the right way to activate dynamic weakening, resulting in massive slip. They believe that this is what happened in the Chi-Chi earthquake; indeed, the quake's largest slip occurred in what was believed to be the "stable" zone.

"We find that the model qualitatively reproduces the behavior of the 2011 magnitude 9.0 Tohoku-Oki earthquake as well, with the largest slip occurring in a place that may have been creeping before the event," says Lapusta. "All of this suggests that the underlying physical model, although based on lab measurements from a different fault, may be qualitatively valid for the area of the great Tohoku-Oki earthquake, giving us a glimpse into the mechanics and physics of that extraordinary event."

If creeping segments can participate in large earthquakes, it would mean that much larger events than seismologists currently anticipate in many areas of the world are possible. That means, Lapusta says, that the seismic hazard in those areas may need to be reevaluated.

For example, a creeping segment separates the southern and northern parts of California's San Andreas Fault. Seismic hazard assessments assume that this segment would stop an earthquake from propagating from one region to the other, limiting the scope of a San Andreas quake. However, the team's findings imply that a much larger event may be possible than is now anticipated—one that might involve both the Los Angeles and San Francisco metropolitan areas.

"Lapusta and Noda's realistic earthquake fault models are critical to our understanding of earthquakes—knowledge that is essential to reducing the potential catastrophic consequences of seismic hazards," says Ares Rosakis, chair of Caltech's division of engineering and applied science. "This work beautifully illustrates the way that fundamental, interdisciplinary research in the mechanics of seismology at Caltech is having a positive impact on society."

Now that they've been shown to qualitatively reproduce the behavior of the Tohoku-Oki quake, the models may be useful for exploring future earthquake scenarios in a given region, "including extreme events," says Lapusta. Such realistic fault models, she adds, may also be used to study how earthquakes may be affected by additional factors such as man-made disturbances resulting from geothermal energy harvesting and CO2 sequestration. "We plan to further develop the modeling to incorporate realistic fault geometries of specific well-instrumented regions, like Southern California and Japan, to better understand their seismic hazard."

"Creeping fault segments can turn from stable to destructive due to dynamic weakening" appears in the January 9 issue of the journal Nature. Funding for this research was provided by the National Science Foundation; the Southern California Earthquake Center; the Gordon and Betty Moore Foundation; and the Ministry of Education, Culture, Sports, Science and Technology in Japan.

Katie Neith

Planets Abound

Caltech-led astronomers estimate that at least 100 billion planets populate the galaxy

PASADENA, Calif.—Look up at the night sky and you'll see stars, sure. But you're also seeing planets—billions and billions of them. At least.

That's the conclusion of a new study by astronomers at the California Institute of Technology (Caltech) that provides yet more evidence that planetary systems are the cosmic norm. The team made their estimate while analyzing planets orbiting a star called Kepler-32—planets that are representative, they say, of the vast majority in the galaxy and thus serve as a perfect case study for understanding how most planets form.

"There's at least 100 billion planets in the galaxy—just our galaxy," says John Johnson, assistant professor of planetary astronomy at Caltech and coauthor of the study, which was recently accepted for publication in the Astrophysical Journal. "That's mind-boggling."

"It's a staggering number, if you think about it," adds Jonathan Swift, a postdoc at Caltech and lead author of the paper. "Basically there's one of these planets per star."

The planetary system in question, which was detected by NASA's Kepler space telescope, contains five planets. Two of those planets had already been confirmed by other astronomers. The Caltech team confirmed the remaining three, then analyzed the five-planet system and compared it to other systems found by the Kepler mission.

The planets orbit a star that is an M dwarf—a type that accounts for about three-quarters of all stars in the Milky Way. The five planets, which are similar in size to Earth and orbit close to their star, are also typical of the class of planets that the telescope has discovered orbiting other M dwarfs, Swift says. Therefore, the majority of planets in the galaxy probably have characteristics comparable to those of the five planets.

While this particular system may not be unique, what does set it apart is its coincidental orientation: the orbits of the planets lie in a plane that's positioned such that Kepler views the system edge-on. Due to this rare orientation, each planet blocks Kepler-32's starlight as it passes between the star and the Kepler telescope.

By analyzing changes in the star's brightness, the astronomers were able to determine the planets' characteristics, such as their sizes and orbital periods. This orientation therefore provides an opportunity to study the system in great detail—and because the planets represent the vast majority of planets that are thought to populate the galaxy, the team says, the system also can help astronomers better understand planet formation in general.
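As a minimal sketch of how the brightness dip encodes a planet's size: assuming a central transit and ignoring limb darkening, the fractional drop in starlight is roughly the ratio of the planet's and star's projected areas. The radii below are illustrative assumptions, not values from the study.

```python
# Minimal transit-photometry sketch: flux drop ~ (R_planet / R_star)^2.
# Radii are illustrative assumptions, not the paper's measured values.

R_EARTH_KM = 6371.0
R_SUN_KM = 695700.0

def transit_depth(r_planet_km, r_star_km):
    """Fractional dip in stellar brightness during a central transit."""
    return (r_planet_km / r_star_km) ** 2

# An Earth-size planet crossing a star half the sun's radius (like Kepler-32)
depth = transit_depth(R_EARTH_KM, 0.5 * R_SUN_KM)  # roughly a 0.03% dip
```

The orbital period, in turn, is simply read off from how often the dip repeats, which is why an edge-on system yields both sizes and periods from one light curve.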

"I usually try not to call things 'Rosetta stones,' but this is as close to a Rosetta stone as anything I've seen," Johnson says. "It's like unlocking a language that we're trying to understand—the language of planet formation."

One of the fundamental questions regarding the origin of planets is how many of them there are. Like the Caltech group, other teams of astronomers have estimated that there is roughly one planet per star, but this is the first time researchers have made such an estimate by studying M-dwarf systems, the most numerous population of planets known.

To do that calculation, the Caltech team determined the probability that an M-dwarf system would provide Kepler-32's edge-on orientation. Combining that probability with the number of planetary systems Kepler is able to detect, the astronomers calculated that there is, on average, one planet for every one of the approximately 100 billion stars in the galaxy. But their analysis only considers planets that are in close orbits around M dwarfs—not the outer planets of an M-dwarf system, or those orbiting other kinds of stars. As a result, they say, their estimate is conservative. In fact, says Swift, a more accurate estimate that includes data from other analyses could lead to an average of two planets per star.
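The geometric scaling at the heart of that calculation can be sketched numerically: for a randomly oriented circular orbit, the chance of seeing a transit is roughly the stellar radius divided by the orbital distance, so each detected system stands in for many unseen ones. All input values here are illustrative assumptions, not the study's actual inputs.

```python
# Hedged sketch of the geometric scaling argument; values are assumed.

R_SUN_M = 6.957e8   # solar radius, meters
AU_M = 1.496e11     # astronomical unit, meters

def transit_probability(r_star_m, a_m):
    """Chance that a randomly oriented circular orbit transits its star."""
    return r_star_m / a_m

# Kepler-32 is roughly half the sun's radius; its planets sit within ~0.1 AU.
r_star = 0.5 * R_SUN_M
a = 0.05 * AU_M  # a representative orbital distance (assumed)

p = transit_probability(r_star, a)   # a few percent
systems_per_detection = 1 / p        # each detection implies ~20 unseen systems
```

Multiplying the implied number of systems per detection by the detection statistics across the surveyed stars, and scaling to the galaxy's roughly 100 billion stars, is what yields the order-of-magnitude estimate of one planet per star.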

M-dwarf systems like Kepler-32's are quite different from our own solar system. For one, M dwarfs are cooler and much smaller than the sun. Kepler-32, for example, has half the mass of the sun and half its radius. The radii of its five planets range from 0.8 to 2.7 times that of Earth, and those planets orbit extremely close to their star. The whole system fits within just over a tenth of an astronomical unit (the average distance between Earth and the sun)—a distance that is about a third of the radius of Mercury's orbit around the sun. The fact that M-dwarf systems vastly outnumber other kinds of systems carries a profound implication, according to Johnson, which is that our solar system is extremely rare. "It's just a weirdo," he says.

The fact that the planets in M-dwarf systems are so close to their stars doesn't necessarily mean that they're fiery, hellish worlds unsuitable for life, the astronomers say. Indeed, because M dwarfs are small and cool, their temperate zone—also known as the "habitable zone," the region where liquid water might exist—lies correspondingly closer to the star. Although only the outermost of Kepler-32's five planets sits in its temperate zone, many other M-dwarf systems have more planets that sit right in their temperate zones.

As for how the Kepler-32 system formed, no one knows yet. But the team says its analysis places constraints on possible mechanisms. For example, the results suggest that the planets all formed farther away from the star than they are now, and migrated inward over time.

Like all planets, the ones around Kepler-32 formed from a proto-planetary disk—a disk of dust and gas that clumped up into planets around the star. The astronomers estimated that the mass of the disk within the region of the five planets was about as much as that of three Jupiters. But other studies of proto-planetary disks have shown that three Jupiter masses can't be squeezed into such a tiny area so close to a star, suggesting to the Caltech team that the planets around Kepler-32 initially formed farther out.

Another line of evidence relates to the fact that M dwarfs shine brighter and hotter when they are young, when planets would be forming. Kepler-32 would have been too hot for dust—a key planet-building ingredient—to even exist in such close proximity to the star. Previously, other astronomers had determined that the third and fourth planets from the star are not very dense, meaning that they are likely made of volatile compounds such as carbon dioxide, methane, or other ices and gases, the Caltech team says. However, those volatile compounds could not have existed in the hotter zones close to the star.

Finally, the Caltech astronomers discovered that three of the planets have orbits that are related to one another in a very specific way: one planet's orbital period is twice as long as a second planet's, and a third planet's period is three times as long as the second's. Planets don't fall into this kind of arrangement immediately upon forming, Johnson says. Instead, the planets must have started their orbits farther away from the star before moving inward over time and settling into their current configuration.

"You look in detail at the architecture of this very special planetary system, and you're forced into saying these planets formed farther out and moved in," Johnson explains.
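The near-commensurate periods described above can be illustrated with a quick numerical check; the day values here are round illustrative numbers, not figures from the paper.

```python
# Check whether three orbital periods are close to a 1:2:3 commensurability.
# Periods (in days) are illustrative round numbers, not the paper's values.

periods = [2.9, 5.9, 8.8]

base = periods[0]
ratios = [p / base for p in periods]  # roughly [1.0, 2.0, 3.0]

# Flag each ratio that falls within 5% of a small integer.
near_integer = [abs(r - round(r)) / round(r) < 0.05 for r in ratios]
commensurate = all(near_integer)
```

Such integer period ratios arise naturally when migrating planets capture one another into resonance, which is why the configuration points to inward migration rather than in-place formation.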

The implications of a galaxy chock full of planets are far-reaching, the researchers say. "It's really fundamental from an origins standpoint," says Swift, who notes that because M dwarfs shine mainly in infrared light, the stars are invisible to the naked eye. "Kepler has enabled us to look up at the sky and know that there are more planets out there than stars we can see."

In addition to Swift and Johnson, the other authors on the Astrophysical Journal paper are Caltech graduate students Timothy Morton and Benjamin Montet; Caltech postdoc Philip Muirhead; former Caltech postdoc Justin Crepp of the University of Notre Dame; and Caltech alumnus Daniel Fabrycky (BS '03) of the University of Chicago. The title of the paper is "Characterizing the Cool KOIs. IV. Kepler-32 as a Prototype for the Formation of Compact Planetary Systems throughout the Galaxy." In addition to using Kepler, the astronomers made observations at the W. M. Keck Observatory and with the Robo-AO system at Palomar Observatory. Support for all of the telescopes was provided by the W. M. Keck Foundation, NASA, Caltech, the Inter-University Centre for Astronomy and Astrophysics, the National Science Foundation, the Mt. Cuba Astronomical Foundation, and Samuel Oschin.

Marcus Woo

A Close Encounter of the First Kind

Mariner 2 visits Venus in the first successful interplanetary flyby

Fifty years ago today, on December 14, 1962, Mariner 2 became the world's first successful interplanetary mission when it swept some 21,000 miles above Venus's impenetrable veil of clouds. The flyby shattered any remaining illusions that Venus, Earth's near-twin in size and orbit, might be in any way habitable. It was known that Venus's atmosphere was incredibly dense and mostly carbon dioxide. Mariner discovered that it was at least 20 times more dense than Earth's (the latest estimate is 92 times as dense) and confirmed that this thick, insulating blanket trapped the sun's heat, making Venus's surface hot enough to melt lead—even on the night side.

Spaceflight is a high-risk business even today, but back then it was so dicey that JPL built everything in pairs. The Mariners' design was based on JPL's Ranger series of moon probes and used many of the same parts. Four Rangers had been launched by then, none of which had successfully completed their missions. And the moon is right next door, in planetary terms—a mere 239,000 miles, give or take, and a couple of days' journey. Mariner 2's flight path to Venus was a gently curving trajectory 182,000,000 miles long, and it would take 109 days to get there. Attempting a Venus shot was gutsy indeed.

The first Mariner's flight didn't go well. It was blown up by Cape Canaveral's range safety officer within five minutes of launch on July 22, 1962. A succession of errors in the guidance system, including a typo in a critical line of computer code, was sending it plunging toward a watery doom—or worse, toward the Florida coast. Mariner 2 was dispatched to Canaveral posthaste, and on August 27, a little more than a month later, it made it into space successfully.

The spacecraft carried six instruments, four of which were designed to study deep space throughout the entire trip. A micrometeorite counter tallied hits from cosmic dust particles, which proved to be far less abundant out in the void than they were near Earth. The plasma detector, designed to study the portion of the sun's outermost atmosphere called the corona, revealed the existence of the solar wind—a continuous stream of plasma "blowing off the boiling surface of the sun into interplanetary space," as Caltech's Engineering & Science magazine reported in October 1962, when Mariner was still millions of miles from Venus. The wind "at times reaches hurricane force with outbursts, such as solar flares, on the sun. Even though this gas is exceedingly tenuous under any terrestrial scale, it is definitely dense enough, and is moving fast enough, to be able to push the interplanetary magnetic field around as it sees fit."

Mariner's magnetometer offered another surprise: Venus, unlike Earth, had no detectable magnetic field. This hinted that Venus probably rotated too slowly to generate one; Mariner's charged-particle detector corroborated this by showing that Venus has no radiation belts equivalent to Earth's Van Allen belts, either.

As Mariner approached Venus, the other two instruments were turned on: a microwave radiometer to measure surface temperatures, and an infrared radiometer to do the same for the atmosphere. Mariner carried no cameras; since Venus was a featureless ball of clouds, there didn't seem to be any point in dragging the extra weight along.

Meanwhile, down on the ground, Caltech postdocs Bruce Murray and Robert Wildey (BS '57, MS '58, PhD '62) and staff scientist Jim Westphal were scanning the face of Venus through the 200-inch Hale Telescope at Palomar Observatory, using a recently declassified infrared detector that had been developed for the heat-seeking Sidewinder missile. The system worked in the 10-micron band—wavelengths about 20 times longer than visible light—and performed up to 50 times better than civilian technology. The detector, a germanium crystal doped with mercury atoms, owed its extreme sensitivity to being cooled to –423° F in a bath of liquid hydrogen. (And yes, the Sidewinders carried a small supply of liquid hydrogen, which would boil off during flight—the thing was designed to blow up anyway.) "It was a mess," Murray, now a professor of planetary science and geology, emeritus, recalled in his Caltech oral history. "It leaked a lot."

Caltech physics professor Gerry Neugebauer (PhD '60) was on Mariner's infrared radiometer team, and about two weeks before the Venus encounter somebody realized that it might be a good idea to try to get some confirmatory data from the ground in case the spacecraft saw something big. The planetary scientists were granted a block of "twilight time" when the sky was too bright for deep-space observations, and in the hours before sunrise on the nights of December 13 through 16, the mighty 200-inch telescope was turned toward Venus. At that focal length, a patch of clouds just a few hundred miles in diameter filled the field of view, but this extreme close-up wasn't recorded as a picture. Instead, a pen line on a paper strip chart wobbled up and down with the intensity of the light received. The telescope methodically worked along horizontal tracks from top to bottom, taking as many as 30 passes to cover the disk.

"On the first night, which was the 13th, we got just these few scans, because we hadn't the slightest idea what we were doing," recalled Westphal (who also became a Caltech professor of planetary science) in an oral history for the Smithsonian Air and Space Museum. Even so, "higher up, the thing was obviously brighter on one side than it was on the other. [On] the other side of the planet, the inverse was true." Wildey wasn't on the mountain that first night, says Westphal, so "Bruce and I . . . stood there and we looked at the damn strip chart [in] the morning twilight; and we said, what do you suppose that is?" They drew a circle representing Venus and laid the strips of paper with the scans on top of it, "and since both of us had a background in geology, we kind of contoured it. . . . Cold at the top, cold at the bottom, and hot at the middle. We both stood there, and we grinned, and we said, we know which way the pole of Venus is!" The tilt of a planet's axis and the rate at which it spins are usually measured by tracking the progress of some landmark across the face of the disk, an impossible feat given Venus's cloud cover. But the atmosphere on a rotating planet will always have a band of warm air running along the equator and cold regions at the poles. "We knew something very fundamental about Venus that nobody [else] knew," Westphal continued.

Not even the Mariner team knew—the spacecraft's radiometers were programmed to scan across the planet's limb, or edge, looking sideways through the atmosphere in order to find out how the temperature varied with depth. These scans proved that Venus's stultifying heat was, in fact, radiating from its surface; the atmosphere's upper reaches turned out to be ice-cold.

JPL lost contact with Mariner 2 on January 2, 1963. The spacecraft is still in orbit around the sun, but a replica built at JPL from spare parts is on display in the Smithsonian's Air and Space Museum.

Douglas Smith

Top 12 in 2012

Credit: Benjamin Deverman/Caltech

Gene therapy for boosting nerve-cell repair

Caltech scientists have developed a gene therapy that helps the brain replace its nerve-cell-protecting myelin sheaths—and the cells that produce those sheaths—when they are destroyed by diseases like multiple sclerosis and by spinal-cord injuries. Myelin ensures that nerve cells can send signals quickly and efficiently.

Credit: L. Moser and P. M. Bellan, Caltech

Understanding solar flares

By studying jets of plasma in the lab, Caltech researchers discovered a surprising phenomenon that may be important for understanding how solar flares occur and for developing nuclear fusion as an energy source. Solar flares are bursts of energy from the sun that launch chunks of plasma that can damage orbiting satellites and cause the northern and southern lights on Earth.

Coincidence—or physics?

Caltech planetary scientists provided a new explanation for why the "man in the moon" faces Earth. Their research indicates that the "man"—an illusion caused by dark-colored volcanic plains—faces us because of the rate at which the moon's spin rate slowed before becoming locked in its current orientation, even though the odds favored the moon's other, more mountainous side.

Choking when the stakes are high

In studying brain activity and behavior, Caltech biologists and social scientists learned that the more someone is afraid of loss, the worse they will perform on a given task—and that, the more loss-averse they are, the more likely it is that their performance will peak at a level far below their actual capacity.

Credit: NASA/JPL-Caltech

Eyeing the X-ray universe

NASA's NuSTAR telescope, a Caltech-led and -designed mission to explore the high-energy X-ray universe and to uncover the secrets of black holes, of remnants of dead stars, of energetic cosmic explosions, and even of the sun, was launched on June 13. The instrument is the most powerful high-energy X-ray telescope ever developed and will produce images that are 10 times sharper than any that have been taken before at these energies.

Credit: CERN

Uncovering the Higgs Boson

This summer's likely discovery of the long-sought and highly elusive Higgs boson, the fundamental particle that is thought to endow elementary particles with mass, was made possible in part by contributions from a large contingent of Caltech researchers. They have worked on this problem with colleagues around the globe for decades, building experiments, designing detectors to measure particles ever more precisely, and inventing communication systems and data storage and transfer networks to share information among thousands of physicists worldwide.

Credit: Peter Day

Amplifying research

Researchers at Caltech and NASA's Jet Propulsion Laboratory developed a new kind of amplifier that can be used for everything from exploring the cosmos to examining the quantum world. This new device operates at a frequency range more than 10 times wider than that of other similar kinds of devices, can amplify strong signals without distortion, and introduces the lowest amount of unavoidable noise.

Swims like a jellyfish

Caltech bioengineers partnered with researchers at Harvard University to build a freely moving artificial jellyfish from scratch. The researchers fashioned the jellyfish from silicone and muscle cells into what they've dubbed Medusoid; in the lab, the scientists were able to replicate some of the jellyfish's key mechanical functions, such as swimming and creating feeding currents. The work will help improve researchers' understanding of tissues and how they work, and may inform future efforts in tissue engineering and the design of pumps for the human heart.

Credit: NASA/JPL-Caltech

Touchdown confirmed

After more than eight years of planning, about 354 million miles of space travel, and seven minutes of terror, NASA's Mars Science Laboratory successfully landed on the Red Planet on August 5. The roving analytical laboratory, named Curiosity, is now using its 10 scientific instruments and 17 cameras to search Mars for environments that either were once—or are now—habitable.

Credit: Caltech/Michael Hoffmann

Powering toilets for the developing world

Caltech engineers built a solar-powered toilet that can safely dispose of human waste for just five cents per user per day. The toilet design, which won the Bill and Melinda Gates Foundation's Reinventing the Toilet Challenge, uses the sun to power a reactor that breaks down water and human waste into fertilizer and hydrogen. The hydrogen can be stored as energy in hydrogen fuel cells.

Credit: Caltech / Scott Kelberg and Michael Roukes

Weighing molecules

A Caltech-led team of physicists created the first-ever mechanical device that can measure the mass of an individual molecule. The tool could eventually help doctors to diagnose diseases, and will enable scientists to study viruses, examine the molecular machinery of cells, and better measure nanoparticles and air pollution.

Splitting water

This year, two separate Caltech research groups made key advances in the quest to extract hydrogen from water for energy use. In June, a team of chemical engineers devised a nontoxic, noncorrosive way to split water molecules at relatively low temperatures; this method may prove useful in the application of waste heat to hydrogen production. Then, in September, a group of Caltech chemists identified the mechanism by which some water-splitting catalysts work; their findings should light the way toward the development of cheaper and better catalysts.


In 2012, Caltech faculty and students pursued research into just about every aspect of our world and beyond—from understanding human behavior, to exploring other planets, to developing sustainable waste solutions for the developing world.

In other words, 2012 was another year of discovery at Caltech. Here are a dozen research stories that were among the most widely read and shared articles on Caltech.edu.

Did we skip your favorite? Connect with Caltech on Facebook to share your pick.

Calculated Science

A new supercomputer helps Caltech researchers tackle more complicated problems

One of the most powerful computer clusters available to a single department in the academic world just got stronger.

The California Institute of Technology's CITerra supercomputer, a high-performance computing cluster of the type popularly known as a Beowulf cluster, was replaced this year with a faster and more efficient system. The new cluster capitalizes on improvements in fiber-optic cables and graphics chips—the kind found in many gaming devices and mobile phones—to increase processing capacity and calculation speeds. With access to this improved supercomputer, Caltech's researchers are able to use advanced algorithms to analyze and simulate everything from earthquakes to global climate and weather to the atmospheres of other planets.

The new $2 million supercomputer, which is administered by the Division of Geological and Planetary Sciences, delivers five times the computational power of the previous cluster while using roughly half the energy. It has 150 teraflops of computing capacity, meaning it can perform 150 trillion calculations per second. The upgrade was made possible in part with the private support of many individuals, including members of the GPS chair's council, a volunteer leadership board.
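Taken together, those figures imply a roughly tenfold gain in performance per watt: five times the computing power at about half the energy. A minimal back-of-the-envelope sketch of that arithmetic—the 5x and 0.5x multipliers and the 150-teraflop capacity come from the article, while the old cluster's absolute power draw is a hypothetical placeholder:

```python
# Back-of-the-envelope efficiency comparison for the cluster upgrade.
# The 5x performance and ~0.5x energy figures are stated in the article;
# the old cluster's absolute power draw is a hypothetical placeholder.

NEW_CAPACITY_TFLOPS = 150.0                      # 150 trillion calculations/sec
old_capacity_tflops = NEW_CAPACITY_TFLOPS / 5    # "five times the computational power"

old_power_kw = 100.0                             # assumed baseline power draw
new_power_kw = old_power_kw * 0.5                # "roughly half the energy"

old_eff = old_capacity_tflops / old_power_kw     # TFLOPS per kW, old system
new_eff = NEW_CAPACITY_TFLOPS / new_power_kw     # TFLOPS per kW, new system

print(f"Old: {old_capacity_tflops:.0f} TFLOPS at {old_eff:.2f} TFLOPS/kW")
print(f"New: {NEW_CAPACITY_TFLOPS:.0f} TFLOPS at {new_eff:.2f} TFLOPS/kW")
print(f"Performance-per-watt improvement: {new_eff / old_eff:.0f}x")
```

Whatever the true baseline wattage, the ratio of the two efficiencies is fixed by the article's multipliers: 5 / 0.5 = 10.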

So what does a faster, more energy-efficient supercomputer mean for Caltech's geoscientists and geophysicists?

"There is a whole new class of problems that we can now address," says Professor of Geophysics Mark Simons, who oversees the cluster. "We can not only solve a given problem faster, but because it takes less time to solve those problems, we can choose to work on harder problems."

Simons, for instance, is working to develop models to understand what happens underground after an earthquake—and what is likely to occur in the months and years after—by analyzing ground motion observed on the surface. In 2011, a Caltech research team led by Simons used data from GPS satellites and broadband seismographic networks to develop a comprehensive mechanical model for how the ground moved after Japan's magnitude-9.0 Tohoku earthquake.

"Mark's team developed the framework allowing them to do millions of computations where seismologists had only been able to do hundreds before," says Michael Gurnis, the John E. and Hazel S. Smits Professor of Geophysics and director of the Seismological Laboratory. "The ability to routinely compute at a level that is so much higher than anyone else had previously done—to have the computational resources immediately available during the hectic days after a devastating earthquake—was an amazing advance for geophysics."

Simons is not alone in using advanced computation to unlock Earth's greatest mysteries. He and Gurnis—who studies the forces driving Earth's tectonic plates—are among a group of 15 GPS faculty members who, with their students, routinely use the cluster. The division is unique among university departments in providing its faculty with access to such a large computational facility, giving almost any of its researchers the ability to number crunch when they need to—and for extended periods of time.

Research done using computations from the previous cluster led to more than 140 published papers, which crossed the fields of atmospheric science, planetary science, Earth science, seismology, and earthquake engineering.

One of the biggest users of the new cluster is Andrew Thompson, an assistant professor of environmental science and engineering, who uses it to simulate complex ocean currents and ocean eddies. Capturing the dynamics of these small ocean storms requires large simulations that need to run for weeks.

Thanks to the size of Caltech's cluster, Thompson has been able to simulate large regions of the ocean, in particular the ocean currents around Antarctica, at high resolution. These models have led to a better understanding of how changes in offshore currents, related to changing climate conditions, affect ocean-heat transport toward and under the ice shelves. Ocean-driven warming is believed to be critical in the melting of the West Antarctic Ice Sheet.

"Oceanography without modeling and simulations would be really challenging science," says Thompson, who arrived at Caltech in the fall of last year. "These models indicate where we need improved or more frequent observations and help us to understand how the ice sheets might respond to future ocean circulation variability. It is remarkable to have these resources at Caltech. Access to the Caltech cluster eliminates some of the need to apply for time on federal computing facilities, and has allowed my research group to hit the ground running."

Exclude from News Hub: 
News Type: 
Research News

More Evidence for an Ancient Grand Canyon

Caltech study supports theory that giant gorge dates back to Late Cretaceous period

For over 150 years, geologists have debated how and when one of the most dramatic features on our planet—the Grand Canyon—was formed. New data unearthed by researchers at the California Institute of Technology (Caltech) builds support for the idea that conventional models, which say the enormous ravine is 5 to 6 million years old, are way off.

In fact, the Caltech research points to a Grand Canyon that is many millions of years older than previously thought, says Kenneth A. Farley, Keck Foundation Professor of Geochemistry at Caltech and coauthor of the study. "Rather than being formed within the last few million years, our measurements suggest that a deep canyon existed more than 70 million years ago," he says.

Farley and Rebecca Flowers—a former postdoctoral scholar at Caltech who is now an assistant professor at the University of Colorado, Boulder—outlined their findings in a paper published in the November 29 issue of Science Express.

Building upon previous research by Farley's lab that showed that parts of the eastern canyon are likely to be at least 55 million years old, the team used a new method to test ancient rocks found at the bottom of the canyon's western section. Past experiments used the amount of helium produced by radioactive decay in apatite—a mineral found in the canyon's walls—to date the samples. This time around, Farley and Flowers took a closer look at the apatite grains, analyzing not only the amount of helium but also its spatial distribution within the mineral's crystals—a pattern locked in as the rocks moved closer to the surface of the earth during the massive erosion that carved the Grand Canyon.

Rocks buried in the earth are hot—with temperatures increasing by about 25 degrees Celsius for every kilometer of depth—but as a river canyon erodes the surface downward toward a buried rock, that rock cools. The thermal history—shown by the helium distribution in the apatite grains—gives important clues about how much time has passed since there was significant erosion in the canyon.

"If you can document cooling through temperatures only a few degrees warmer than the earth's surface, you can learn about canyon formation," says Farley, who is also chair of the Division of Geological and Planetary Sciences at Caltech.

The analysis of the spatial distribution of helium allowed for detection of variations in the thermal structure at shallow levels of Earth's crust, says Flowers. That gave the team dates that enabled them to fine-tune the timeframe when the Grand Canyon was incised, or cut.

"Our research implies that the Grand Canyon was directly carved to within a few hundred meters of its modern depth by about 70 million years ago," she says.

Now that they have narrowed down the "when" of the Grand Canyon's formation, the geologists plan to continue investigations into how it took shape. The genesis of the canyon has important implications for understanding the evolution of many geological features in the western United States, including their tectonics and topography, according to the team.

"Our major scientific objective is to understand the history of the Colorado Plateau—why does this large and unusual geographic feature exist, and when was it formed," says Farley. "A canyon cannot form without high elevation—you don't cut canyons in rocks below sea level. Also, the details of the canyon's incision seem to suggest large-scale changes in surface topography, possibly including large-scale tilting of the plateau."

"Apatite 4He/3He and (U-Th)/He evidence for an ancient Grand Canyon" appears in the November 29 issue of the journal Science Express. Funding for the research was provided by the National Science Foundation. 

Katie Neith

A Sky Full of Planets

Think back to the last time you saw the Milky Way—that faint stripe of stars that thickens and brightens as you get farther from city lights. At least 200 billion stars fill the Milky Way, our galaxy. How many planets might orbit those stars? What would those worlds be like? Twenty years ago, it was anybody's guess.  

In the 1990s, astronomers began to discover planets around other stars—so-called exoplanets. Since then, the confirmed count of exoplanets has skyrocketed to more than 850, with thousands of candidates awaiting follow-up. Astronomers now estimate that the stars in our Milky Way have an average of at least one planet each. (The next time you look up into the night sky, think about that.)

The sudden prospect of characterizing so many solar systems in our own galaxy has brought together two once-isolated camps: planetary scientists, who generally focus on the inside of our solar system, and astronomers, who mostly look beyond it. Planetary scientists see an opportunity to learn about our solar system and its origins by putting it into the context of a huge ensemble of other solar systems, and astronomers have a keen interest in what planetary scientists might help them discover about planet formation on a galactic or even larger scale.

To those ends, nine Caltech astronomers and planetary scientists are forming a Center for Planetary Astronomy. Joining together in a single research center will help them maintain fruitful collaborations, collectively attract research funding and fellowships for young scholars, and recruit top students and postdoctoral scholars.

The nascent center's members bring complementary perspectives to the characterization of our newly discovered neighbors. Planetary science professor Geoff Blake, astronomy professor Lynn Hillenbrand, and senior research associate John Carpenter study planet-forming disks of gas and dust around young stars. Mike Brown, the Richard and Barbara Rosenberg Professor and professor of planetary astronomy and Caltech's infamous Pluto-killer, studies fossil rubble from just such a disk—a fantastic array of thousands of planetesimals and chunks of rock and ice on the fringes of our solar system, known as the Kuiper belt, that yields clues to the primordial solar system.

The remaining scientists are focused more on the planets themselves. John Johnson, an assistant professor of planetary astronomy, focuses on the detection and characterization of exoplanets, searches for worlds like Earth, and investigates how stars' masses affect planet formation by studying the relationships between exoplanets and the very different types of stars that they orbit. Heather Knutson, an assistant professor of planetary science, characterizes exoplanets' compositions, temperatures, atmospheres, and even their weather. Yuk Yung, the Smits Family Professor of Planetary Science, studies the atmospheres of planets, and Dave Stevenson, the Marvin L. Goldberger Professor of Planetary Science, studies planetary interiors and how they evolve. Gregg Hallinan, an assistant professor of astronomy, is trying to detect radio signals from exoplanets, which would indicate the presence of magnetic fields that could be a signature of habitability.

The center's members are excited about its potential contribution to the major discoveries that are sure to come in this field. "The unique combination of Caltech's top-ranked astronomical facilities, astronomy program, and planetary science program will allow us to access the deep and broad knowledge about planets and planetary systems that only comes from such a joint endeavor," says Brown.

Says Knutson, "I was trained as an astronomer, but what I do is planetary science. Caltech is one of the few places where we have great conversations between the two groups. And Caltech's resources, in terms of telescopes, give us the opportunity to move quickly and think big." 


