Planetary Astronomer Wins Feynman Prize for Excellence in Teaching

PASADENA, Calif.—John A. Johnson, assistant professor of planetary astronomy at the California Institute of Technology (Caltech), has been awarded the Richard P. Feynman Prize for Excellence in Teaching.

Johnson was recognized for his dedication, passion, and innovation in teaching as well as his ability to inspire his students.

The Feynman Prize was established "to honor annually a professor who demonstrates, in the broadest sense, unusual ability, creativity, and innovation in undergraduate and graduate classroom or laboratory teaching." Caltech faculty members, students, postdoctoral scholars, staff, and alumni may nominate a faculty member for the prize. A committee appointed by the provost then selects the winner.

According to the official citation, Johnson "immediately emerged as exceptional—a 'true outlier,' in the words of a committee member."

"Richard Feynman's writing inspired me to pursue physics and astronomy," Johnson says. "It is an amazing honor to have my name in any way associated with his."

Johnson was lauded for his creative teaching methods, in which he eschews traditional lectures and problem sets and instead has students work on problems in small groups. At various times, he has required students to explain what they were learning in a class blog, forbidden discussion of grades, emailed YouTube videos that illustrate the day's material, and brought in guest lecturers to discuss the course material and provide career advice. 

"My goal is to help the students take ownership of their learning by guiding them rather than lecturing them," explains Johnson, who says he learned his teaching philosophy from physicist Ronald Bieniek at the Missouri University of Science and Technology. "I'm very pleased to hear that my students feel I accomplished this goal, and that we all had such an enjoyable time in the process."

In a nomination letter, one student wrote that Johnson "rocked the boat in the astronomy department, challenging our conceptions of how astronomy, and the sciences in general, are taught." Another student wrote, "Classroom experiences that are intellectually engaging, practical, and entertaining are incredibly rare. Through his teaching style, attention to detail, and unique course structure, Professor Johnson provides just such an experience."

Many students cited Johnson's "life-changing" influence beyond academics. One called him "a remarkable teacher who can not only enlighten students in the classroom but also sculpt their spirits for their future careers." A graduate student said, "He reminded me…why I wanted to be a scientist in the first place."

Johnson, whose research focuses on searching for planets around stars other than our sun, earned his BS in physics in 1999 from the University of Missouri-Rolla (now the Missouri University of Science and Technology) and his PhD in astrophysics from the University of California, Berkeley, in 2007. After completing a postdoctoral fellowship at the University of Hawaii Institute for Astronomy, he joined Caltech's faculty in 2009. In 2012, he won the Newton Lacy Pierce Prize from the American Astronomical Society, an Alfred P. Sloan Fellowship, a David and Lucile Packard Fellowship, and a Lyman Spitzer Lectureship from Princeton. Johnson says, however, that of all the awards he has received this past year, he's most proud of the Feynman Prize.

The previous four winners of the Feynman Prize are Paul Asimow, professor of geology and geochemistry; J. Morgan Kousser, professor of history and social science; Dennis Dougherty, the George Grant Hoag Professor of Chemistry; and Jehoshua (Shuki) Bruck, the Gordon and Betty Moore Professor of Computation and Neural Systems and Electrical Engineering. The Feynman Prize has been awarded annually since 1994. Nominations for next year's prize will be solicited in the fall.

Writer: Marcus Woo

Under the Hood of the Earthquake Machine

Watson Lecture Preview

 

What makes an earthquake go off? Why are earthquakes so difficult to forecast? Professor of Mechanical Engineering and Geophysics Nadia Lapusta gives us a close-up look at the moving parts, as it were, at 8:00 p.m. on Wednesday, February 13, 2013, in Caltech's Beckman Auditorium. Admission is free.

 

Q: What do you do?

A: I study friction as it relates to earthquakes. At a depth of five miles, which is the average depth at which large earthquakes in Southern California occur, the compression on the two sides of the fault is roughly equivalent to a pressure of 1,500 atmospheres. So you can imagine that friction plays an important role. I make computational models that combine our theories about friction with laboratory studies of how materials behave. We try to reproduce what seismologists, geodesists, and geologists see actual earthquakes doing, in order to infer the physical laws that govern them.
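That 1,500-atmosphere figure is easy to sanity-check with a back-of-the-envelope calculation (the densities below are illustrative assumptions, not values from the interview): the lithostatic overburden of rock at five miles' depth, minus a hydrostatic pore-fluid pressure, lands in the right neighborhood.

```python
# Back-of-the-envelope check of confining stress at ~5 miles depth.
# Assumed (illustrative) values: crustal rock density ~2700 kg/m^3,
# water density ~1000 kg/m^3, g ~9.81 m/s^2.
MILE_M = 1609.344    # meters per mile
ATM_PA = 101325.0    # pascals per atmosphere

depth_m = 5 * MILE_M
g = 9.81
rho_rock, rho_water = 2700.0, 1000.0

lithostatic = rho_rock * g * depth_m    # weight of overlying rock, Pa
pore = rho_water * g * depth_m          # hydrostatic pore-fluid pressure, Pa
effective = lithostatic - pore          # effective confining stress, Pa

print(f"lithostatic: ~{lithostatic / ATM_PA:,.0f} atm")
print(f"effective:   ~{effective / ATM_PA:,.0f} atm")
```

With these assumed densities the effective stress comes out near 1,300 atmospheres, within roughly 15 percent of the quoted figure; the exact number depends on local rock density and pore-pressure conditions.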

Our planet's surface is made up of a bunch of plates that are always moving, and an earthquake happens when the locked boundaries of the plates rapidly catch up with the slow motion of the plates themselves. You get a sudden shearing—a sideways motion that generates the destructive waves that we perceive as shaking.

A number of factors affect this process. If you rub your palms together, you generate heat. An earthquake is a very intensive rubbing of palms, if you will, and so a lot of heat is produced—enough to weaken the rocks and perhaps even melt them.

However, there are pore fluids permeating the rocks—we often get our drinking water from underground aquifers, for example. As these fluids heat up, they expand, which modifies the shearing process. They produce expanding cushions of steam, essentially, which reduce the friction.

The waves generated by the shearing motion put an additional load on the fault ahead of the shear zone, so they actually affect how the shearing progresses. The shear tip sprouts at about three kilometers per second, or 6,700 miles per hour. So an earthquake is a highly dynamic, nonlinear system.

To make things even more interesting, a fault doesn't just sit still for hundreds of years, waiting for the next big earthquake. It's more like a living thing—there are slow slippages between earthquakes that constantly redistribute the forces in the system, and the exact point where an earthquake initiates depends a lot on these slow motions. So we simulate thousands of years of fault history that includes a few occasional, very fast events that last for a few seconds. These calculations are very time-consuming and memory-intense. The Geological and Planetary Sciences Division's supercomputer has several thousand processors, and we routinely use 200 to 400 of them, sometimes for weeks at a time. We would happily use the entire machine, but of course people would yell at us.

 

Q: How did you get into this line of work?

A: I've loved both mathematics and physics since I was a child. I was born in Ukraine, where my mom was a professor of applied mathematics and my dad was a civil engineer. They used to give me math and physics problems from a very early age. I did my undergraduate studies in applied mathematics in Kiev, and I was thinking of going into materials science. I came to the U.S. for graduate school, and my advisor at Harvard was working on materials failure and on earthquakes, which I found very interesting because it combined math and physics with a problem relevant to society.

My PhD was on frictional sliding and some initial models of earthquakes. Caltech is actually the perfect place to continue that, because it has world-class expertise in all relevant disciplines. I have wonderful colleagues, and the really fun part is working with them. I enjoy interacting with the experimentalists and talking to the people who make field observations or do radar measurements from satellites. They have different perspectives, different terminologies, and different views of the problem, so it's fun to try to explain to them what you mean, and to try to understand what they mean. And the most fun, of course, is when you come to an understanding that leads to new science in the end.

 

Q: Speaking of societal relevance, what does your work mean for us here in L.A.?

A: Large earthquakes, fortunately, are relatively rare, so we don't have detailed observations of very many of them. Our models, however, allow us to explore scenarios for potentially very damaging earthquakes that we haven't experienced. For example, faults have locked segments and creeping segments. The San Andreas fault has a creeping segment between Los Angeles and San Francisco, and the assumption has been that this segment will confine a large earthquake to either the southern or the northern part of the fault. Only one large urban area would be affected. However, our models show that a through-going rupture may be possible. If that happens, both Los Angeles and San Francisco are affected, and you have a much bigger problem on your hands.

 

Named for the late Caltech professor Earnest C. Watson, who founded the series in 1922, the Watson Lectures present Caltech and JPL researchers describing their work to the public. Many past Watson Lectures are available online at Caltech's iTunes U site.

Writer: Douglas Smith

John Johnson Wins Astronomy Prize

John A. Johnson, assistant professor of planetary astronomy at Caltech, received the 2012 Newton Lacy Pierce Prize at the 221st meeting of the American Astronomical Society (AAS), in Long Beach, California.

The AAS reserves the Newton Lacy Pierce Prize for North American astronomers, ages 36 and under, for "outstanding achievement, over the past five years, in observational astronomical research based on measurements of radiation from an astronomical object." Johnson received a cash award and an invitation to speak at the AAS conference on January 8.

According to the award citation, Johnson was recognized for "major contributions to understanding fundamental relationships between exosolar planets and their parent stars, including finding a variety of orientations between planetary orbital planes and the spin axes of their stars, developing a rigorous understanding of planet detection rates in transit and direct imaging experiments, and examining possible correlations between planet frequency and the mass and metallicity of their host stars."

"I am very pleased and thankful to the American Astronomical Society for this award," Johnson says. "Thanks to powerful new instruments and an emerging generation of highly motivated explorers, planetary astronomy is an exciting field to be in right now. I am happy to be part of it."

Johnson is one of the founding members of Caltech's new Center for Planetary Astronomy. His recent research findings related to the estimated number of planets in the Milky Way have generated significant interest both within the astronomical community and among the general public.

In addition to the Pierce Prize, Johnson was also a recipient in 2012 of a Lyman Spitzer Lectureship, an Alfred P. Sloan Research Fellowship, and a David and Lucile Packard Fellowship.

 

 

Writer: Brian Bell

Heather Knutson Wins Astronomy Award

Heather A. Knutson, an assistant professor of planetary science at Caltech, is the 2012 recipient of the Annie Jump Cannon Award in Astronomy. Knutson received the award at the 221st meeting of the American Astronomical Society (AAS), in Long Beach, California.

The Annie Jump Cannon Award is given to a North American female astronomer within five years of receiving her PhD in the year designated for the award, for outstanding research and the promise of future research. Knutson received a cash prize of $1,500 and an invitation to speak at the recent AAS meeting.

According to the award citation, Knutson is being recognized for her "pioneering work on the characterization of exoplanetary atmospheres. Her groundbreaking observations of wavelength-dependent thermal emission of exoplanets over large fractions of their orbit enable a longitudinal mapping of brightness to reveal details of atmospheric dynamics, energy transport, inversion layers, and chemical composition. This work has expanded the rich field of planetary characterization by providing new windows into the atmospheres of planets beyond the confines of our own solar system. It has inspired numerous other theoretical and observational investigations and will serve as an important technique used with current and future space observatories to gain fundamental insight into the properties of exoplanetary atmospheres."

"It was a pleasure to accept this award from the American Astronomical Society," says Knutson. "It is good to see that studies of exoplanetary atmospheres are gaining some positive attention in the astronomy community."

Knutson is one of the founding faculty members of Caltech's new Center for Planetary Astronomy.

Writer: Brian Bell

Research Update: Atomic Motions Help Determine Temperatures Inside Earth

In December 2011, Caltech mineral-physics expert Jennifer Jackson reported that she and a team of researchers had used diamond-anvil cells to compress tiny samples of iron—the main element of the earth's core. By squeezing the samples to reproduce the extreme pressures felt at the core, the team was able to get a closer estimate of the melting point of iron. At the time, the measurements that the researchers made were unprecedented in detail. Now, they have taken that research one step further by adding infrared laser beams to the mix.

The lasers provide a source of heat that, when sent through the compressed iron samples, warms them to the point of melting. And because the earth's core consists of a solid inner region surrounded by a liquid outer shell, the melting temperature of iron at high pressure provides an important reference point for the temperature distribution within the earth's core.

"This is the first time that anyone has combined Mössbauer spectroscopy and heating lasers to detect melting in compressed samples," says Jackson, a professor of mineral physics at Caltech and lead author of a recent paper in the journal Earth and Planetary Science Letters that outlined the team's new method. "What we found is that iron, compared to previous studies, melts at higher temperatures than what has been reported in the past."

Earlier research by other teams done at similar compressions—around 80 gigapascals—reported a range of possible melting points that topped out around 2,600 kelvins (K). Jackson's latest study indicates an iron melting point at this pressure of approximately 3,025 K, suggesting that the earth's core is likely warmer than previously thought.

Knowing more about the temperature, composition, and behavior of the earth's core is essential to understanding the dynamics of the earth's interior, including the processes responsible for maintaining the earth's magnetic field. While iron makes up roughly 90 percent of the core, the rest is thought to be nickel and light elements—like silicon, sulfur, or oxygen—that are alloyed, or mixed, with the iron.

To develop and perform these experiments, Jackson worked closely with the Inelastic X-ray and Nuclear Resonant Scattering Group at the Advanced Photon Source at Argonne National Laboratory in Illinois. By laser heating the iron sample in a diamond-anvil cell and monitoring the dynamics of the iron atoms via a technique called synchrotron Mössbauer spectroscopy (SMS), the researchers were able to pinpoint a melting temperature for iron at a given pressure. The SMS signal is sensitively related to the dynamical behavior of the atoms, and can therefore detect when a group of atoms is in a molten state.

She and her team have begun experiments on iron alloys at even higher pressures, using their new approach.

"What we're working toward is a very tight constraint on the temperature of the earth's core," says Jackson. "A number of important geophysical quantities, such as the movement and expansion of materials at the base of the mantle, are dictated by the temperature of the earth's core."

"Our approach is a very elegant way to look at melting because it takes advantage of the physical principle of recoilless absorption of X-rays by nuclear resonances—the basis of the Mössbauer effect—for which Rudolf Mössbauer was awarded the Nobel Prize in Physics," says Jackson. "This particular approach to study melting has not been done at high pressures until now."

Jackson's findings not only tell us more about our own planet, but could indicate that other planets with iron-rich cores, like Mercury and Mars, may have warmer internal temperatures as well.

Her paper, "Melting of compressed iron by monitoring atomic dynamics," was published in Earth and Planetary Science Letters on January 8, 2013.

Writer: Katie Neith

Faulty Behavior

New earthquake fault models show that "stable" zones may contribute to the generation of massive earthquakes

PASADENA, Calif.—In an earthquake, ground motion is the result of waves emitted when the two sides of a fault move—or slip—rapidly past each other, with an average relative speed of about three feet per second. Not all fault segments move so quickly, however—some slip slowly, through a process called creep, and are considered to be "stable," or not capable of hosting rapid earthquake-producing slip.  One common hypothesis suggests that such creeping fault behavior is persistent over time, with currently stable segments acting as barriers to fast-slipping, shake-producing earthquake ruptures. But a new study by researchers at the California Institute of Technology (Caltech) and the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) shows that this might not be true.

"What we have found, based on laboratory data about rock behavior, is that such supposedly stable segments can behave differently when an earthquake rupture penetrates into them. Instead of arresting the rupture as expected, they can actually join in and hence make earthquakes much larger than anticipated," says Nadia Lapusta, professor of mechanical engineering and geophysics at Caltech and coauthor of the study, published January 9 in the journal Nature.

She and her coauthor, Hiroyuki Noda, a scientist at JAMSTEC and previously a postdoctoral scholar at Caltech, hypothesize that this is what occurred in the 2011 magnitude 9.0 Tohoku-Oki earthquake, which was unexpectedly large.

Fault slip, whether fast or slow, results from the interaction between the stresses acting on the fault and friction, or the fault's resistance to slip. Both the local stress and the resistance to slip depend on a number of factors such as the behavior of fluids permeating the rocks in the earth's crust. So, the research team formulated fault models that incorporate laboratory-based knowledge of complex friction laws and fluid behavior, and developed computational procedures that allow the scientists to numerically simulate how those model faults will behave under stress.

"The uniqueness of our approach is that we aim to reproduce the entire range of observed fault behaviors—earthquake nucleation, dynamic rupture, postseismic slip, interseismic deformation, patterns of large earthquakes—within the same physical model; other approaches typically focus only on some of these phenomena," says Lapusta.

In addition to reproducing a range of behaviors in one model, the team also assigned realistic fault properties to the model faults, based on previous laboratory experiments on rock materials from an actual fault zone—the site of the well-studied 1999 magnitude 7.6 Chi-Chi earthquake in Taiwan.

"In that experimental work, rock materials from boreholes cutting through two different parts of the fault were studied, and their properties were found to be conceptually different," says Lapusta. "One of them had so-called velocity-weakening friction properties, characteristic of earthquake-producing fault segments, and the other one had velocity-strengthening friction, the kind that tends to produce stable creeping behavior under tectonic loading. However, these 'stable' samples were found to be much more susceptible to dynamic weakening during rapid earthquake-type motions, due to shear heating."

Lapusta and Noda used their modeling techniques to explore the consequences of having two fault segments with such lab-determined fault-property combinations. They found that the ostensibly stable area would indeed occasionally creep, and often stop seismic events, but not always. From time to time, dynamic rupture would penetrate that area in just the right way to activate dynamic weakening, resulting in massive slip. They believe that this is what happened in the Chi-Chi earthquake; indeed, the quake's largest slip occurred in what was believed to be the "stable" zone.

"We find that the model qualitatively reproduces the behavior of the 2011 magnitude 9.0 Tohoku-Oki earthquake as well, with the largest slip occurring in a place that may have been creeping before the event," says Lapusta. "All of this suggests that the underlying physical model, although based on lab measurements from a different fault, may be qualitatively valid for the area of the great Tohoku-Oki earthquake, giving us a glimpse into the mechanics and physics of that extraordinary event."

If creeping segments can participate in large earthquakes, it would mean that much larger events than seismologists currently anticipate in many areas of the world are possible. That means, Lapusta says, that the seismic hazard in those areas may need to be reevaluated.

For example, a creeping segment separates the southern and northern parts of California's San Andreas Fault. Seismic hazard assessments assume that this segment would stop an earthquake from propagating from one region to the other, limiting the scope of a San Andreas quake. However, the team's findings imply that a much larger event may be possible than is now anticipated—one that might involve both the Los Angeles and San Francisco metropolitan areas.

"Lapusta and Noda's realistic earthquake fault models are critical to our understanding of earthquakes—knowledge that is essential to reducing the potential catastrophic consequences of seismic hazards," says Ares Rosakis, chair of Caltech's division of engineering and applied science. "This work beautifully illustrates the way that fundamental, interdisciplinary research in the mechanics of seismology at Caltech is having a positive impact on society."

Now that they've been proven to qualitatively reproduce the behavior of the Tohoku-Oki quake, the models may be useful for exploring future earthquake scenarios in a given region, "including extreme events," says Lapusta. Such realistic fault models, she adds, may also be used to study how earthquakes may be affected by additional factors such as man-made disturbances resulting from geothermal energy harvesting and CO2 sequestration. "We plan to further develop the modeling to incorporate realistic fault geometries of specific well-instrumented regions, like Southern California and Japan, to better understand their seismic hazard."

"Creeping fault segments can turn from stable to destructive due to dynamic weakening" appears in the January 9 issue of the journal Nature. Funding for this research was provided by the National Science Foundation; the Southern California Earthquake Center; the Gordon and Betty Moore Foundation; and the Ministry of Education, Culture, Sports, Science and Technology in Japan.

Writer: Katie Neith

Planets Abound

Caltech-led astronomers estimate that at least 100 billion planets populate the galaxy

PASADENA, Calif.—Look up at the night sky and you'll see stars, sure. But you're also seeing planets—billions and billions of them. At least.

That's the conclusion of a new study by astronomers at the California Institute of Technology (Caltech) that provides yet more evidence that planetary systems are the cosmic norm. The team made their estimate while analyzing planets orbiting a star called Kepler-32—planets that are representative, they say, of the vast majority in the galaxy and thus serve as a perfect case study for understanding how most planets form.

"There's at least 100 billion planets in the galaxy—just our galaxy," says John Johnson, assistant professor of planetary astronomy at Caltech and coauthor of the study, which was recently accepted for publication in the Astrophysical Journal. "That's mind-boggling."

"It's a staggering number, if you think about it," adds Jonathan Swift, a postdoc at Caltech and lead author of the paper. "Basically there's one of these planets per star."

The planetary system in question, which was detected by NASA's Kepler space telescope, contains five planets. Two of those planets had already been confirmed by other astronomers. The Caltech team confirmed the remaining three, then analyzed the five-planet system and compared it to other systems found by the Kepler mission.

The planets orbit a star that is an M dwarf—a type that accounts for about three-quarters of all stars in the Milky Way. The five planets, which are similar in size to Earth and orbit close to their star, are also typical of the class of planets that the telescope has discovered orbiting other M dwarfs, Swift says. Therefore, the majority of planets in the galaxy probably have characteristics comparable to those of the five planets.

While this particular system may not be unique, what does set it apart is its coincidental orientation: the orbits of the planets lie in a plane that's positioned such that Kepler views the system edge-on. Due to this rare orientation, each planet blocks Kepler-32's starlight as it passes between the star and the Kepler telescope.

By analyzing changes in the star's brightness, the astronomers were able to determine the planets' characteristics, such as their sizes and orbital periods. This orientation therefore provides an opportunity to study the system in great detail—and because the planets represent the vast majority of planets that are thought to populate the galaxy, the team says, the system also can help astronomers better understand planet formation in general.

"I usually try not to call things 'Rosetta stones,' but this is as close to a Rosetta stone as anything I've seen," Johnson says. "It's like unlocking a language that we're trying to understand—the language of planet formation."

One of the fundamental questions regarding the origin of planets is how many of them there are. Like the Caltech group, other teams of astronomers have estimated that there is roughly one planet per star, but this is the first time researchers have made such an estimate by studying M-dwarf systems, which host the most numerous population of planets known.

To do that calculation, the Caltech team determined the probability that an M-dwarf system would provide Kepler-32's edge-on orientation. Combining that probability with the number of planetary systems Kepler is able to detect, the astronomers calculated that there is, on average, one planet for every one of the approximately 100 billion stars in the galaxy. But their analysis only considers planets that are in close orbits around M dwarfs—not the outer planets of an M-dwarf system, or those orbiting other kinds of stars. As a result, they say, their estimate is conservative. In fact, says Swift, a more accurate estimate that includes data from other analyses could lead to an average of two planets per star.
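The geometric step in that argument can be sketched in a few lines (the orbital distance below is a hypothetical stand-in, not a number from the study): for a circular, randomly oriented orbit, the chance of seeing a transit is roughly the stellar radius divided by the orbital distance, so each transiting system detected implies many more whose orbits are tilted out of our line of sight.

```python
# Sketch of the geometric transit correction (illustrative only; the
# study's actual statistical analysis is far more involved).
R_SUN_M = 6.957e8     # solar radius, meters
AU_M = 1.496e11       # astronomical unit, meters

r_star = 0.5 * R_SUN_M    # Kepler-32 has about half the sun's radius
a = 0.05 * AU_M           # hypothetical close-in orbital distance, 0.05 AU

# Probability that a randomly oriented circular orbit transits: ~R*/a
p_transit = r_star / a
print(f"transit probability: ~{p_transit:.1%}")

# Each detected transiting system stands in for ~1/p similar systems.
systems_per_detection = 1.0 / p_transit
print(f"implied systems per detection: ~{systems_per_detection:.0f}")
```

At these assumed numbers a transit is roughly a one-in-twenty alignment, which is why a modest count of edge-on detections scales up to an enormous galactic population.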

M-dwarf systems like Kepler-32's are quite different from our own solar system. For one, M dwarfs are cooler and much smaller than the sun. Kepler-32, for example, has half the mass of the sun and half its radius. The radii of its five planets range from 0.8 to 2.7 times that of Earth, and those planets orbit extremely close to their star. The whole system fits within just over a tenth of an astronomical unit (the average distance between Earth and the sun)—a distance that is about a third of the radius of Mercury's orbit around the sun. The fact that M-dwarf systems vastly outnumber other kinds of systems carries a profound implication, according to Johnson, which is that our solar system is extremely rare. "It's just a weirdo," he says.
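The scale comparison above checks out with simple arithmetic (Mercury's semi-major axis of about 0.387 AU is an added reference value, not from the article):

```python
# Scale check: the whole Kepler-32 system vs. Mercury's orbit.
MERCURY_A_AU = 0.387          # Mercury's semi-major axis, AU

system_extent_au = 0.13       # "just over a tenth of an astronomical unit"
ratio = system_extent_au / MERCURY_A_AU
print(f"system extent / Mercury's orbit: ~{ratio:.2f}")  # about a third
```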

The fact that the planets in M-dwarf systems are so close to their stars doesn't necessarily mean that they're fiery, hellish worlds unsuitable for life, the astronomers say. Indeed, because M dwarfs are small and cool, their temperate zone—also known as the "habitable zone," the region where liquid water might exist—is also farther inward. Even though only the outermost of Kepler-32's five planets lies in its temperate zone, many other M-dwarf systems have more planets that sit right in their temperate zones.

As for how the Kepler-32 system formed, no one knows yet. But the team says its analysis places constraints on possible mechanisms. For example, the results suggest that the planets all formed farther away from the star than they are now, and migrated inward over time.

Like all planets, the ones around Kepler-32 formed from a proto-planetary disk—a disk of dust and gas that clumped up into planets around the star. The astronomers estimated that the mass of the disk within the region of the five planets was about as much as that of three Jupiters. But other studies of proto-planetary disks have shown that three Jupiter masses can't be squeezed into such a tiny area so close to a star, suggesting to the Caltech team that the planets around Kepler-32 initially formed farther out.

Another line of evidence relates to the fact that M dwarfs shine brighter and hotter when they are young, when planets would be forming. Kepler-32 would have been too hot for dust—a key planet-building ingredient—to even exist in such close proximity to the star. Previously, other astronomers had determined that the third and fourth planets from the star are not very dense, meaning that they are likely made of volatile compounds such as carbon dioxide, methane, or other ices and gases, the Caltech team says. However, those volatile compounds could not have existed in the hotter zones close to the star.

Finally, the Caltech astronomers discovered that three of the planets have orbits that are related to one another in a very specific way. One planet's orbital period is twice as long as another's, and a third planet's period is three times as long as that shortest period. Planets don't fall into this kind of arrangement immediately upon forming, Johnson says. Instead, the planets must have started their orbits farther away from the star before moving inward over time and settling into their current configuration.

"You look in detail at the architecture of this very special planetary system, and you're forced into saying these planets formed farther out and moved in," Johnson explains.

The implications of a galaxy chock full of planets are far-reaching, the researchers say. "It's really fundamental from an origins standpoint," says Swift, who notes that because M dwarfs shine mainly in infrared light, the stars are invisible to the naked eye. "Kepler has enabled us to look up at the sky and know that there are more planets out there than stars we can see."

In addition to Swift and Johnson, the other authors on the Astrophysical Journal paper are Caltech graduate students Timothy Morton and Benjamin Montet; Caltech postdoc Philip Muirhead; former Caltech postdoc Justin Crepp of the University of Notre Dame; and Caltech alumnus Daniel Fabrycky (BS '03) of the University of Chicago. The title of the paper is "Characterizing the Cool KOIs IV: Kepler-32 as a Prototype for the Formation of Compact Planetary Systems throughout the Galaxy." In addition to using Kepler, the astronomers made observations at the W. M. Keck Observatory and with the Robo-AO system at Palomar Observatory. Support for all of the telescopes was provided by the W. M. Keck Foundation, NASA, Caltech, the Inter-University Centre for Astronomy and Astrophysics, the National Science Foundation, the Mt. Cuba Astronomical Foundation, and Samuel Oschin.

Writer: Marcus Woo

A Close Encounter of the First Kind

Mariner 2 visits Venus in the first successful interplanetary flyby

Fifty years ago today, on December 14, 1962, Mariner 2 became the world's first successful interplanetary mission when it swept some 21,000 miles above Venus's impenetrable veil of clouds. The flyby shattered any remaining illusions that Venus, Earth's near-twin in size and orbit, might be in any way habitable. It was known that Venus's atmosphere was incredibly dense and mostly carbon dioxide. Mariner discovered that it was at least 20 times more dense than Earth's (the latest estimate is 92 times as dense) and confirmed that this thick, insulating blanket trapped the sun's heat, making Venus's surface hot enough to melt lead—even on the night side.

Spaceflight is a high-risk business even today, but back then it was so dicey that JPL built everything in pairs. The Mariners' design was based on JPL's Ranger series of moon probes and used many of the same parts. Four Rangers had been launched by then, none of which had successfully completed their missions. And the moon is right next door, in planetary terms—a mere 239,000 miles, give or take, and a couple of days' journey. Mariner 2's flight path to Venus was a gently curving trajectory 182,000,000 miles long, and it would take 109 days to get there. Attempting a Venus shot was gutsy indeed.
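The figures quoted above imply a brisk average pace along the curved path, as a back-of-the-envelope calculation shows (a rough average only, ignoring how the spacecraft's speed varied along the heliocentric arc):

```python
# Back-of-the-envelope average speed along Mariner 2's curved flight path,
# using the 182,000,000-mile, 109-day figures quoted in the text.
path_miles = 182_000_000
trip_days = 109
mph = path_miles / (trip_days * 24)
print(f"about {mph:,.0f} mph averaged over the whole arc")
```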

Mariner 1's flight didn't go well. The spacecraft was blown up by Cape Canaveral's range safety officer within five minutes of launch on July 22, 1962. A succession of errors in the guidance system, including a typo in a critical line of computer code, was sending it plunging toward a watery doom—or worse, toward the Florida coast. Mariner 2 was dispatched to Canaveral posthaste, and on August 27, a little more than a month later, it made it into space successfully.

The spacecraft carried six instruments, four of which were designed to study deep space throughout the entire trip. A micrometeorite counter tallied hits from cosmic dust particles, which proved to be far less abundant out in the void than they were near Earth. The plasma detector, designed to study the portion of the sun's outermost atmosphere called the corona, revealed the existence of the solar wind—a continuous stream of plasma "blowing off the boiling surface of the sun into interplanetary space," as Caltech's Engineering & Science magazine reported in October 1962, when Mariner was still millions of miles from Venus. The wind "at times reaches hurricane force with outbursts, such as solar flares, on the sun. Even though this gas is exceedingly tenuous under any terrestrial scale, it is definitely dense enough, and is moving fast enough, to be able to push the interplanetary magnetic field around as it sees fit."

Mariner's magnetometer offered another surprise: Venus, unlike Earth, had no detectable magnetic field. This hinted that Venus probably rotated too slowly to generate one; Mariner's charged-particle detector corroborated this by showing that Venus has no radiation belts equivalent to Earth's Van Allen belts, either.

As Mariner approached Venus, the other two instruments were turned on: a microwave radiometer to measure surface temperatures, and an infrared radiometer to do the same for the atmosphere. Mariner carried no cameras; since Venus was a featureless ball of clouds, there didn't seem to be any point in dragging the extra weight along.

Meanwhile, down on the ground, Caltech postdocs Bruce Murray and Robert Wildey (BS '57, MS '58, PhD '62) and staff scientist Jim Westphal were scanning the face of Venus through the 200-inch Hale Telescope at Palomar Observatory, using a recently declassified infrared detector that had been developed for the heat-seeking Sidewinder missile. The system worked in the 10-micron band—wavelengths about 20 times longer than visible light—and performed up to 50 times better than civilian technology. The detector, a germanium crystal doped with mercury atoms, owed its extreme sensitivity to being cooled to –423° F in a bath of liquid hydrogen. (And yes, the Sidewinders carried a small supply of liquid hydrogen, which would boil off during flight—the thing was designed to blow up anyway.) "It was a mess," Murray, now a professor of planetary science and geology, emeritus, recalled in his Caltech oral history. "It leaked a lot."
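Why a detector working near 10 microns is well suited to planetary heat can be seen from Wien's displacement law. The 230 K cloud-deck temperature below is an illustrative assumption, not a value from the article:

```python
# Wien's displacement law: the wavelength at which a blackbody's
# thermal emission peaks. The 230 K cloud-deck temperature is an
# illustrative assumption, not a value from the article.
WIEN_B = 2.898e-3  # Wien's displacement constant, meter-kelvins

def peak_wavelength_microns(temp_kelvin):
    """Peak emission wavelength in microns for a blackbody at temp_kelvin."""
    return WIEN_B / temp_kelvin * 1e6

print(round(peak_wavelength_microns(230.0), 1), "microns")
```

A body at a few hundred kelvins radiates most strongly in the 10-to-15-micron band, which is exactly where the repurposed Sidewinder detector was sensitive.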

Caltech physics professor Gerry Neugebauer (PhD '60) was on Mariner's infrared radiometer team, and about two weeks before the Venus encounter somebody realized that it might be a good idea to try to get some confirmatory data from the ground in case the spacecraft saw something big. The planetary scientists were granted a block of "twilight time" when the sky was too bright for deep-space observations, and in the hours before sunrise on the nights of December 13 through 16, the mighty 200-inch telescope was turned toward Venus. At that focal length, a patch of clouds just a few hundred miles in diameter filled the field of view, but this extreme close-up wasn't recorded as a picture. Instead, a pen line on a paper strip chart wobbled up and down with the intensity of the light received. The telescope methodically worked along horizontal tracks from top to bottom, taking as many as 30 passes to cover the disk.

"On the first night, which was the 13th, we got just these few scans, because we hadn't the slightest idea what we were doing," recalled Westphal (who also became a Caltech professor of planetary science) in an oral history for the Smithsonian Air and Space Museum. Even so, "higher up, the thing was obviously brighter on one side than it was on the other. [On] the other side of the planet, the inverse was true." Wildey wasn't on the mountain that first night, says Westphal, so "Bruce and I . . . stood there and we looked at the damn strip chart [in] the morning twilight; and we said, what do you suppose that is?" They drew a circle representing Venus and laid the strips of paper with the scans on top of it, "and since both of us had a background in geology, we kind of contoured it. . . . Cold at the top, cold at the bottom, and hot at the middle. We both stood there, and we grinned, and we said, we know which way the pole of Venus is!" The tilt of a planet's axis and the rate at which it spins are usually measured by tracking the progress of some landmark across the face of the disk, an impossible feat given Venus's cloud cover. But the atmosphere on a rotating planet will always have a band of warm air running along the equator and cold regions at the poles. "We knew something very fundamental about Venus that nobody [else] knew," Westphal continued.

Not even the Mariner team knew—the spacecraft's radiometers were programmed to scan across the planet's limb, or edge, looking sideways through the atmosphere in order to find out how the temperature varied with depth. These scans proved that Venus's stultifying heat was, in fact, radiating from its surface; the atmosphere's upper reaches turned out to be ice-cold.

JPL lost contact with Mariner 2 on January 2, 1963. The spacecraft is still in orbit around the sun, but a replica built at JPL from spare parts is on display in the Smithsonian's Air and Space Museum.

Writer: Douglas Smith

Top 12 in 2012

Credit: Benjamin Deverman/Caltech

Gene therapy for boosting nerve-cell repair

Caltech scientists have developed a gene therapy that helps the brain replace its nerve-cell-protecting myelin sheaths—and the cells that produce those sheaths—when they are destroyed by diseases like multiple sclerosis and by spinal-cord injuries. Myelin ensures that nerve cells can send signals quickly and efficiently.

Credit: L. Moser and P. M. Bellan, Caltech

Understanding solar flares

By studying jets of plasma in the lab, Caltech researchers discovered a surprising phenomenon that may be important for understanding how solar flares occur and for developing nuclear fusion as an energy source. Solar flares are bursts of energy from the sun that launch chunks of plasma that can damage orbiting satellites and cause the northern and southern lights on Earth.

Coincidence—or physics?

Caltech planetary scientists provided a new explanation for why the "man in the moon" faces Earth. Their research indicates that the "man"—an illusion caused by dark-colored volcanic plains—faces us because of the rate at which the moon's spin slowed before becoming locked in its current orientation, even though the odds favored the moon's other, more mountainous side.

Choking when the stakes are high

In studying brain activity and behavior, Caltech biologists and social scientists learned that the more someone is afraid of loss, the worse they will perform on a given task—and that the more loss-averse they are, the more likely it is that their performance will peak at a level far below their actual capacity.

Credit: NASA/JPL-Caltech

Eyeing the X-ray universe

NASA's NuSTAR telescope, a Caltech-led and -designed mission to explore the high-energy X-ray universe and to uncover the secrets of black holes, of remnants of dead stars, of energetic cosmic explosions, and even of the sun, was launched on June 13. The instrument is the most powerful high-energy X-ray telescope ever developed and will produce images that are 10 times sharper than any that have been taken before at these energies.

Credit: CERN

Uncovering the Higgs Boson

This summer's likely discovery of the long-sought and highly elusive Higgs boson, the fundamental particle that is thought to endow elementary particles with mass, was made possible in part by contributions from a large contingent of Caltech researchers. They have worked on this problem with colleagues around the globe for decades, building experiments, designing detectors to measure particles ever more precisely, and inventing communication systems and data storage and transfer networks to share information among thousands of physicists worldwide.

Credit: Peter Day

Amplifying research

Researchers at Caltech and NASA's Jet Propulsion Laboratory developed a new kind of amplifier that can be used for everything from exploring the cosmos to examining the quantum world. This new device operates at a frequency range more than 10 times wider than that of other similar kinds of devices, can amplify strong signals without distortion, and introduces the lowest amount of unavoidable noise.

Swims like a jellyfish

Caltech bioengineers partnered with researchers at Harvard University to build a freely moving artificial jellyfish from scratch. The researchers fashioned the jellyfish from silicone and muscle cells into what they've dubbed Medusoid; in the lab, the scientists were able to replicate some of the jellyfish's key mechanical functions, such as swimming and creating feeding currents. The work will help improve researchers' understanding of tissues and how they work, and may inform future efforts in tissue engineering and the design of pumps for the human heart.

Credit: NASA/JPL-Caltech

Touchdown confirmed

After more than eight years of planning, about 354 million miles of space travel, and seven minutes of terror, NASA's Mars Science Laboratory successfully landed on the Red Planet on August 5. The roving analytical laboratory, named Curiosity, is now using its 10 scientific instruments and 17 cameras to search Mars for environments that either were once—or are now—habitable.

Credit: Caltech/Michael Hoffmann

Powering toilets for the developing world

Caltech engineers built a solar-powered toilet that can safely dispose of human waste for just five cents per user per day. The toilet design, which won the Bill and Melinda Gates Foundation's Reinventing the Toilet Challenge, uses the sun to power a reactor that breaks down water and human waste into fertilizer and hydrogen. The hydrogen can be stored as energy in hydrogen fuel cells.

Credit: Caltech / Scott Kelberg and Michael Roukes

Weighing molecules

A Caltech-led team of physicists created the first-ever mechanical device that can measure the mass of an individual molecule. The tool could eventually help doctors to diagnose diseases, and will enable scientists to study viruses, examine the molecular machinery of cells, and better measure nanoparticles and air pollution.

Splitting water

This year, two separate Caltech research groups made key advances in the quest to extract hydrogen from water for energy use. In June, a team of chemical engineers devised a nontoxic, noncorrosive way to split water molecules at relatively low temperatures; this method may prove useful in the application of waste heat to hydrogen production. Then, in September, a group of Caltech chemists identified the mechanism by which some water-splitting catalysts work; their findings should light the way toward the development of cheaper and better catalysts.


In 2012, Caltech faculty and students pursued research into just about every aspect of our world and beyond—from understanding human behavior, to exploring other planets, to developing sustainable waste solutions for the developing world.

In other words, 2012 was another year of discovery at Caltech. Here are a dozen research stories that were among the most widely read and shared articles on Caltech.edu.

Did we skip your favorite? Connect with Caltech on Facebook to share your pick.

