Caltech geochemist Clair Patterson (1922–1995) helped galvanize the environmental movement 50 years ago when he announced that highly toxic lead could be found essentially everywhere on Earth, including in our own bodies—and that very little of it was due to natural causes.
In a paper published in the September 1965 issue of Archives of Environmental Health, Patterson challenged the prevailing belief that industrial and natural sources contributed roughly equal amounts of ingestible lead, and that the aggregate level we absorbed was safe. Instead, he wrote, "A new approach to this matter suggests that the average resident of the United States is being subjected to severe chronic lead insult." He estimated that our "lead burden" was as much as 100 times that of our preindustrial ancestors, often bringing us to just below the threshold of acute toxicity.
Lead poisoning was known to the ancients. Vitruvius, designer of aqueducts for Julius Caesar, wrote in Book VIII of De Architectura that "water is much more wholesome from earthenware pipes than from lead pipes . . . [water] seems to be made injurious by lead." Lead accumulates in the body, where it can have profound effects on the central nervous system. Children exposed to high lead levels often acquire permanent learning disabilities and behavioral disorders.
When Patterson arrived at Caltech as a research fellow in geochemistry in 1952, he was looking not to save the world but to figure out how old it was. Doing so required him to measure the precise amounts of various isotopes of uranium and lead. (Isotopes are atoms of the same element that contain different numbers of neutrons in their nuclei.) Uranium-238 decays very, very slowly into lead-206, while uranium-235 decays less slowly into lead-207. Both rates are well known, so measuring the ratios of lead atoms to uranium ones shows how much uranium has disappeared and allows the sample's age to be calculated.
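The decay arithmetic behind this dating method can be sketched in a few lines. This is a simplified single-ratio version, not Patterson's actual procedure (his 1956 result relied on lead-lead isochrons across several meteorites), with half-life values from modern references:

```python
import math

def age_from_ratio(daughter_parent_ratio, half_life_yr):
    """Age from a measured daughter/parent atom ratio, assuming no
    initial daughter and a closed system: t = ln(1 + D/P) / lambda."""
    lam = math.log(2) / half_life_yr
    return math.log(1 + daughter_parent_ratio) / lam

# Well-established half-lives
U238 = 4.468e9   # years; uranium-238 decays to lead-206
U235 = 7.04e8    # years; uranium-235 decays to lead-207

# Illustration: the ratios a 4.55-billion-year-old sample would show
t = 4.55e9
for hl, pair in [(U238, "206Pb/238U"), (U235, "207Pb/235U")]:
    ratio = math.exp(math.log(2) / hl * t) - 1
    print(f"{pair}: ratio {ratio:.3f} -> age {age_from_ratio(ratio, hl)/1e9:.2f} Gyr")
```

Because the two uranium isotopes decay at different rates, the two ratios provide independent clocks that must agree, which is what makes the method robust.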
Patterson presumed that the inner solar system's rocky planets and meteorites had all coalesced at the same time, and that the meteorites had survived essentially unchanged ever since. Using an instrument called a mass spectrometer and working in a clean room he had designed and built himself, Patterson counted the individual lead atoms in a meteorite sample recovered from Canyon Diablo near Meteor Crater, Arizona. In a landmark paper published in 1956, he established Earth's age as 4.55 billion years.
However, there are four common isotopes of lead, and Patterson had to take them all into account in his calculations. He had announced his findings at a conference in 1955 and had continued to refine his results as the paper worked its way through the review process. But along the way he hit a snag: his analytical skills had become so finely honed that he was finding lead everywhere. He needed to know the source of this contamination in order to eliminate it, and he took it on himself to find out.
Patterson's 1965 Environmental Health paper summarized that work. With M. Tatsumoto of the U.S. Geological Survey, he found that the ocean off southern California was lead-laden at the surface but that the contamination disappeared rapidly with depth. They concluded that the likely culprit was tetraethyl lead, a widespread gasoline additive that emerged from the tailpipes of automobiles as very fine lead particles. Patterson and research fellow T. J. Chow crisscrossed the Pacific aboard research vessels run by the Scripps Institution of Oceanography at UC San Diego and found the same profile of lead levels versus depth. Then, in the winter of 1962–63, Patterson and Tatsumoto collected snow at an altitude of 7,000 feet on Mount Lassen in northern California. The lead contamination there was 10 to 100 times worse than at sea. Patterson concluded that it had fallen from the skies. Its isotopic fingerprint was a perfect match for air samples from Los Angeles—located 500 miles to the south. It also matched gasoline samples obtained by Chow in San Diego. Furthermore, the isotope fingerprint was different from that of lead found in prehistoric sediments off the California coast.
"The atmosphere of the northern hemisphere contains about 1,000 times more than natural amounts of lead," Patterson wrote, and he called for the "elimination of some of the most serious sources of lead pollution such as lead alkyls [i.e., tetraethyl lead], insecticides, food can solder, water service pipes, kitchenware glazes, and paints; and a reevaluation by persons in positions of responsibility in the field of public health of their role in the matter."
Patterson's paper was his first shot in the war against lead pollution, bureaucratic inertia, and big business that he would wage for the rest of his life. He won: the Clean Air Act of 1970 authorized the development of national air-quality standards, including emission controls on cars. In 1976, the Environmental Protection Agency reported that more than 100,000 tons of lead went into gasoline every month; by 1980 that figure would be less than 50,000 tons, and the concentration of lead in the average American's blood would drop by nearly 50 percent as well. The Consumer Product Safety Commission would ban lead-based indoor house paints in 1977 (flakes containing brightly colored lead pigments often found their way into children's mouths). And in 1986, the EPA prohibited tetraethyl lead in gasoline.
Good communication is crucial to any relationship, especially when partners are separated by distance. This also holds true for microbes in the deep sea that need to work together to consume large amounts of methane released from vents on the ocean floor. Recent work at Caltech has shown that these microbial partners can still accomplish this task, even when not in direct contact with one another, by using electrons to share energy over long distances.
This is the first time that direct interspecies electron transport—the movement of electrons from a cell, through the external environment, to another cell type—has been documented in microorganisms in nature.
The results were published in the September 16 issue of the journal Nature.
"Our lab is interested in microbial communities in the environment and, specifically, the symbiosis—or mutually beneficial relationship—between microorganisms that allows them to catalyze reactions they wouldn't be able to do on their own," says Professor of Geobiology Victoria Orphan, who led the recent study. For the last two decades, Orphan's lab has focused on the relationship between a species of bacteria and a species of archaea that live in symbiotic aggregates, or consortia, within deep-sea methane seeps. The organisms work together in syntrophy (which means "feeding together") to consume up to 80 percent of methane emitted from the ocean floor—methane that might otherwise end up contributing to climate change as a greenhouse gas in our atmosphere.
Previously, Orphan and her colleagues contributed to the discovery of this microbial symbiosis, a cooperative partnership between methane-oxidizing archaea called anaerobic methanotrophs (or "methane eaters") and sulfate-reducing bacteria (organisms that can "breathe" sulfate instead of oxygen) that allows these organisms to consume methane using sulfate from seawater. However, it was unclear how these cells share energy and interact within the symbiosis to perform this task.
Because these microorganisms grow slowly (reproducing only four times per year) and live in close contact with each other, it has been difficult for researchers to isolate them from the environment to grow them in the lab. So, the Caltech team used a research submersible, called Alvin, to collect samples containing the methane-oxidizing microbial consortia from deep-ocean methane seep sediments and then brought them back to the laboratory for analysis.
The researchers used different fluorescent DNA stains to mark the two types of microbes and view their spatial orientation in consortia. In some consortia, Orphan and her colleagues found the bacterial and archaeal cells were well mixed, while in other consortia, cells of the same type were clustered into separate areas.
Orphan and her team wondered if the variation in the spatial organization of the bacteria and archaea within these consortia influenced their cellular activity and their ability to cooperatively consume methane. To find out, they applied a stable isotope "tracer" to evaluate the metabolic activity. The amount of the isotope taken up by individual archaeal and bacterial cells within their microbial "neighborhoods" in each consortium was then measured with a high-resolution instrument called nanoscale secondary ion mass spectrometry (nanoSIMS) at Caltech. This allowed the researchers to determine how active the archaeal and bacterial partners were relative to their distance from one another.
To their surprise, the researchers found that the spatial arrangement of the cells in consortia had no influence on their activity. "Since this is a syntrophic relationship, we would have thought the cells at the interface—where the bacteria are directly contacting the archaea—would be more active, but we don't really see an obvious trend. What is really notable is that there are cells that are many cell lengths away from their nearest partner that are still active," Orphan says.
To find out how the bacteria and archaea were partnering, co-first authors Grayson Chadwick (BS '11), a graduate student in geobiology at Caltech and a former undergraduate researcher in Orphan's lab, and Shawn McGlynn, a former postdoctoral scholar, employed spatial statistics to look for patterns in cellular activity for multiple consortia with different cell arrangements. They found that populations of syntrophic archaea and bacteria in consortia had similar levels of metabolic activity; when one population had high activity, the associated partner microorganisms were also equally active—consistent with a beneficial symbiosis. However, a close look at the spatial organization of the cells revealed that no particular arrangement of the two types of organisms—whether evenly dispersed or in separate groups—was correlated with a cell's activity.
To determine how these metabolic interactions were taking place even over relatively long distances, postdoctoral scholar and coauthor Chris Kempes, a visitor in computing and mathematical sciences, modeled the predicted relationship between cellular activity and distance between syntrophic partners that are dependent on the molecular diffusion of a substrate. He found that conventional metabolites previously predicted to be involved in this syntrophic consumption of methane, such as hydrogen, were inconsistent with the spatial activity patterns observed in the data. However, revised models indicated that electrons could likely make the trip from cell to cell across greater distances.
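The contrast the modeling draws can be illustrated with a toy calculation. The functional forms and the length scale here are hypothetical, for illustration only, and are not the study's actual model: an intermediate that must diffuse between partners implies activity that falls off with distance, while direct electron conduction does not.

```python
import math

def diffusion_limited_activity(r_um, decay_um=2.0):
    """Toy model: activity limited by a diffusing intermediate falls off
    with distance r (in microns) from the nearest partner cell.
    The decay length decay_um is a hypothetical placeholder value."""
    return math.exp(-r_um / decay_um)

def conductive_activity(r_um):
    """Toy model: with direct electron conduction, activity is nearly
    independent of partner distance."""
    return 1.0

# Cells many cell lengths from a partner stay active only in the second model
for r in [1, 5, 10]:
    print(r, round(diffusion_limited_activity(r), 3), conductive_activity(r))
```

The observation that distant cells were just as active as cells at the interface is what favored the flat, conduction-like profile over the diffusion-limited one.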
"Chris came up with a generalized model for the methane-oxidizing syntrophy based on direct electron transfer, and these model results were a better match to our empirical data," Orphan says. "This pointed to the possibility that these archaea were directly transferring electrons derived from methane to the outside of the cell, and those electrons were being passed to the bacteria directly."
Guided by this information, Chadwick and McGlynn looked for independent evidence to support the possibility of direct interspecies electron transfer. Cultured bacteria, such as those from the genus Geobacter, are model organisms for the direct electron transfer process. These bacteria carry large proteins, called multi-heme cytochromes, on their outer surfaces that act as conductive "wires" for the transport of electrons.
Using genome analysis—along with transmission electron microscopy and a stain that reacts with these multi-heme cytochromes—the researchers showed that these conductive proteins were also present on the outer surface of the archaea they were studying. And that finding, Orphan says, can explain why the spatial arrangement of the syntrophic partners does not seem to affect their relationship or activity.
"It's really one of the first examples of direct interspecies electron transfer occurring between uncultured microorganisms in the environment. Our hunch is that this is going to be more common than is currently recognized," she says.
Orphan notes that the information they have learned about this relationship will help to expand how researchers think about interspecies microbial interactions in nature. In addition, the microscale stable isotope approach used in the current study can be used to evaluate interspecies electron transport and other forms of microbial symbiosis occurring in the environment.
In August 2015, more than 150 scientists interested in the exploration of Mars attended a conference at a hotel in Arcadia, California, to evaluate 21 potential landing sites for NASA's next Mars rover, a mission called Mars 2020. The design of that mission will be based on that of the Mars Science Laboratory (MSL), including the sky-crane landing system that helped put the rover, Curiosity, safely on martian soil.
Over the course of three days, the scientists heard presentations about the proposed sites and voted on the scientific merit of the locations. In the end, they arrived at a prioritized list of sites that offer the best opportunity for the mission to meet its objectives—including the search for signs of ancient life on the Red Planet and collecting and storing (or "caching") scientifically interesting samples for possible return to Earth.
We recently spoke with Ken Farley, the mission's project scientist and the W.M. Keck Foundation Professor of Geochemistry at Caltech, to talk about the workshop and how the Mars 2020 landing site selection process is shaping up.
Can you tell us a little bit about how these workshops help the project select a landing site?
We are using the same basic site selection process that has been used for previous Mars rovers. It involves heavy engagement from the scientific community because there are individual experts on specific sites who are not necessarily on the mission's science team.
We put out a call for proposals to suggest specific sites, and respondents presented at the workshop. We provided presenters with a one-page template on which to indicate the characteristics of their landing site—basic facts, like what minerals are present. This became a way to distill a presentation into something that you could evaluate objectively and relatively quickly. When people flashed these rubrics up at the end of their presentations, there was some interesting peer review going on in real time.
We went through all 21 sites, talking about what was at each location. In the end, we needed to boil down the input and get a sense of which sites the community was most interested in. So we used a scorecard that tied directly to the mission objectives; there were five criteria, and attendees were able to indicate how well they felt each site met each requirement by voting either "low," "medium," or "high." Then we tallied up the votes.
You mentioned that the criteria on the scorecard were related to the objectives of the mission. What are those objectives?
We have four mission objectives. One is to prepare the way for human exploration of Mars. The rover will have a weather station and an instrument that converts atmospheric carbon dioxide into oxygen—it's called the in situ resource utilization (ISRU) payload. This is a way to make oxygen for both human consumption and, even more importantly, for propellant. In terms of the landing site process, this objective was not a driving factor because the ISRU and the weather station don't really care where they go.
And the other three objectives?
We call the three remaining objectives the "ABC" goals. A is to explore the landing site. That's a basic part of a geologic study—you look around and see what's there and try to understand the geologic processes that made it.
The B goal is to explore an "astrobiologically relevant environment," to look for rocks in habitable environments that have the ability to preserve biosignatures—evidence of past or present life—and then to look for biosignatures in those rocks. The phrase that NASA attaches to our mission is "Seeking the Signs of Life." We have a bunch of science instruments on the rover that will help us meet those objectives.
Then the C goal is to prepare a returnable cache of samples. The word "returnable" has a technical definition—the cache has to meet a bunch of criteria, and one is that it has to have enough scientific merit to return. Previous studies of what constitutes returnability have suggested we need a number of samples in the mid-30s—we use the number 37.
It may seem arbitrary, but there is a reason for this particular number. Thirty-seven is the maximum number of samples that can be packed into a circular honeycomb inside one possible design of the sample return assembly.
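The interview doesn't spell out the geometry, but 37 is the centered hexagonal number for three concentric rings of tubes around a central one (1 + 6 + 12 + 18), which is consistent with a "circular honeycomb" arrangement. A quick sketch, under that assumption:

```python
def centered_hexagonal(n):
    """Tubes in a hexagonal packing with n full rings around a central
    tube: each ring k holds 6*k tubes, so the total is 1 + 3*n*(n+1)."""
    return 1 + 3 * n * (n + 1)

# Totals for 0 through 4 rings
print([centered_hexagonal(n) for n in range(5)])  # [1, 7, 19, 37, 61]
```

Three rings give exactly 37, and adding a fourth ring would jump the count to 61, so 37 is the natural stopping point for a compact circular cache.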
The huge task for us is to be able to drill that many samples. We've learned from MSL that everything takes a long time. Driving takes a long time, drilling takes a long time. We have a very specific mandate that we have to be capable of collecting 20 samples in the prime mission. Collecting at least 20 samples will motivate what we do in designing the rover.
It also has motivated a lot of the discussion of landing sites. You've got to have targets you wish to drill that are close together, and they can't be a long drive from where you land. There also has to be diversity because you don't want 15 copies of the same sample.
After all of those factors were considered, what was the outcome of the voting?
What came out of it was an ordered list of eight sites. One interesting thing about that list was that the sites were divided roughly equally into two kinds—those that were crater lakes with deltas and those that we would broadly call hydrothermal sites. These are locations that the community believes are most likely to have ancient life in them and preserve the evidence of it.
It's easy to understand the deltas because if you look in the terrestrial environment, a delta is an excellent place to look for organic matter. The things that are living in the water above the delta and upstream are washed into the delta when they die. Then mud packs in on top and preserves that material.
What is interesting about hydrothermal systems?
A hydrothermal system is in some ways very appealing but in some ways risky. These are places where rocks are hot enough to heat water to extremely high temperatures. At hydrothermal vents on Earth's sea floor, you have these strange creatures that are essentially living off chemical energy from inside the planet. And, in fact, the oldest evidence for life on Earth may have been found in hydrothermal settings. The problem is these settings are precarious; when the water gets a little too hot, everything dies.
What is the heat source for the hydrothermal sites on Mars?
There are two important heat sources—one is impact and the other is volcanic. A whole collection of our top sites are in a region next to a giant impact crater, and when you look at those rocks, they have chemical and mineralogical characteristics that look like hydrothermal alteration.
A leading candidate of the volcanic type is a site in Gusev Crater called the Columbia Hills site, which the Spirit rover studied. The rover came across a silica deposit. At the time, scientists didn't really know what it was, but it is now thought that the silica is actually a product of volcanic activity called sinter. The presenter for the site showed pictures from Spirit of these little bits of sinter and then showed pictures of something that looks almost exactly the same from a geothermal field in Chile. It was a pretty compelling comparison. Then he went on to show that these environments on Earth are very conducive to life and that the little silica blobs preserve biosignatures well.
So although it would be an unusual decision to invest another mission in the same location, that site was favored because it is the only place where a mineral that might preserve signs of ancient life is known with certainty to exist.
Do these two types of sites differ just in terms of their ancient environments?
No. It turns out that you can see most of the deltas from Mars's orbit because they are pretty much the last gasp of processing of the martian surface. They date to a period about 3.6 billion years ago when the planet transitioned from a warm, wet period to basically being desiccated. Some of the hydrothermal sites may have rocks that are in the 4-billion-year-old range. That age difference may not sound like much, but in terms of an evolving planet that is dying, it raises interesting questions. If you want to allow the maximum amount of time for life to have evolved, maybe you choose a delta site. On the other hand, you might say, "Mars is dying at that point," and you want to try to get samples that include a record from an earlier, more equable period.
Since the community is divided roughly evenly between these two types of sites, one of the important questions we will have to wrestle with until the next workshop (in early 2017) is, "Which of those kinds of sites is more promising?" We need to engage a bigger community to address this question.
What happened to the list generated from this workshop?
This workshop was almost exclusively about science. The mission's leadership and members of the Mars 2020 Landing Site Steering Committee, appointed by NASA, then took the information from the workshop, rolled it up with information that the project had generated on things like whether the sites could be landed on, and came up with a list of eight sites in alphabetic order:
NE Syrtis Major
SW Melas Chasma
What comes next?
Over the course of the coming year, the Mars 2020 engineering team will continue its study of the feasibility of the highly ranked landing sites. At the same time, the science team will dig deeply into what is known about each site, seeking to identify the sites that are best suited to meet the mission's science goals. I expect that advocates for specific sites will also continue doing their homework to make the strongest possible case for their preferred site. And in 2017, we'll do the workshop all over again!
Yuk Yung, the Smits Family Professor of Planetary Science, has received the 2015 Gerard P. Kuiper Prize from the American Astronomical Society's Division for Planetary Sciences. The prize, given for outstanding contributions to the field of planetary science, recognizes Yung's work on atmospheric photochemistry, global climate change, radiative transfer, atmospheric evolution, and planetary habitability.
"His unique integration of observations, laboratory data, and quantitative modeling has yielded pioneering insights into the characterization, origin, and evolution of atmospheres in the solar system," the award citation notes.
Yung joined the Caltech faculty in 1977. He is a fellow of the American Academy of Arts and Sciences and of the American Association for the Advancement of Science. A longtime collaborator with scientists at the Jet Propulsion Laboratory (JPL), Yung is a coinvestigator on the Ultraviolet Imaging Spectrometer Experiment on the Cassini mission to Saturn and on the Orbiting Carbon Observatory-2, a project to map CO2 concentrations on Earth.
Previous recipients of the Kuiper Prize include Professor of Planetary Science Andrew Ingersoll; Peter Goldreich, the Lee A. DuBridge Professor of Astrophysics and Planetary Physics, Emeritus; and Eugene M. Shoemaker, Caltech alumnus (BS '47, MS '48) and former chair of the Division of Geological and Planetary Sciences.
On Friday, August 7, 104 female high school seniors and their families visited Caltech for the fourth annual Women in STEM (WiSTEM) Preview Day, hosted by the undergraduate admissions office. The event was designed to explore the accomplishments and continued contributions of Caltech women in the disciplines of science, technology, engineering, and mathematics (STEM).
The day opened with a keynote address by Marianne Bronner, the Albert Billings Ruddock Professor of Biology and executive officer for neurobiology. Bronner, who studies the development of the central nervous system, spoke about her experiences in science and at Caltech.
"Caltech is an exciting place to be. It's a place where you can be creative and think outside the box," she said. "My advice to you would be to try different things, play around, and do what makes you happy." Bronner ended her address by noting the pleasure she takes in mentoring young scientists, and especially young women. "I was just like you," she said.
Over the course of the day, students and their families attended panels on undergraduate research opportunities and participated in social events where current students shared their experiences of Caltech life. They also listened to presentations from female scientists and engineers of the Jet Propulsion Laboratory.
"I really love science, and it's so exciting to be around all of these other people who share that," says Sydney Feldman, a senior from Maryland. "I switched around my whole summer visit schedule to come to this event and I'm having such a great time."
The annual event began four years ago with the goal of encouraging interest in STEM in high school women and ultimately increasing applications to Caltech by female candidates. In 2009, a U.S. Department of Commerce study showed that women make up 24 percent of the STEM workforce and hold a disproportionately low share of undergraduate degrees in STEM fields.
"Women are seriously underrepresented in these fields," says Caltech admissions counselor and WiSTEM coordinator Abeni Tinubu. "Our event really puts emphasis on how Caltech supports women on campus, and we want to show prospective students that."
This year, the incoming freshman class is a record 47 percent female. "This is hugely exciting," says Jarrid Whitney, the executive director of admissions and financial aid. "We've been working hard toward our goal of 50 percent women, and it is clearly paying off thanks to the support of President Rosenbaum and the overall Caltech community."
For more than 20 years, Caltech geologist Jean-Philippe Avouac has collaborated with the Department of Mines and Geology of Nepal to study the Himalayas—the most active, above-water mountain range on Earth—to learn more about the processes that build mountains and trigger earthquakes. Over that period, he and his colleagues have installed a network of GPS stations in Nepal that allows them to monitor the way Earth's crust moves during and in between earthquakes. So when he heard on April 25 that a magnitude 7.8 earthquake had struck near Gorkha, Nepal, not far from Kathmandu, he thought he knew what to expect—utter devastation throughout Kathmandu and a death toll in the hundreds of thousands.
"At first when I saw the news trickling in from Kathmandu, I thought there was a problem of communication, that we weren't hearing the full extent of the damage," says Avouac, Caltech's Earle C. Anthony Professor of Geology. "As it turns out, there was little damage to the regular dwellings, and thankfully, as a result, there were far fewer deaths than I originally anticipated."
Using data from the GPS stations, an accelerometer that measures ground motion in Kathmandu, data from seismological stations around the world, and radar images collected by orbiting satellites, an international team of scientists led by Caltech has pieced together the first complete account of what physically happened during the Gorkha earthquake—a picture that explains how the large earthquake wound up leaving the majority of low-story buildings unscathed while devastating some treasured taller structures.
The findings are described in two papers that now appear online. The first, in the journal Nature Geoscience, is based on an analysis of seismological records collected more than 1,000 kilometers from the epicenter and places the event in the context of what scientists knew of the seismic setting near Gorkha before the earthquake. The second paper, appearing in Science Express, goes into finer detail about the rupture process during the April 25 earthquake and how it shook the ground in Kathmandu.
[Video: Build Up and Release of Strain on Himalaya Megathrust]
In the first study, the researchers show that the earthquake occurred on the Main Himalayan Thrust (MHT), the main megathrust fault along which northern India is pushing beneath Eurasia at a rate of about two centimeters per year, driving the Himalayas upward. Based on GPS measurements, scientists know that a large portion of this fault is "locked." Large earthquakes typically release stress on such locked faults—as the lower tectonic plate (here, the Indian plate) pulls the upper plate (here, the Eurasian plate) downward, strain builds in these locked sections until the upper plate breaks free, releasing strain and producing an earthquake. There are areas along the fault in western Nepal that are known to be locked and have not experienced a major earthquake since a big one (larger than magnitude 8.5) in 1505. But the Gorkha earthquake ruptured only a small fraction of the locked zone, so there is still the potential for the locked portion to produce a large earthquake.
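The scale of the remaining hazard follows from back-of-envelope arithmetic using the figures above: convergence of about two centimeters per year and no major rupture in western Nepal since 1505. On a fully locked patch, the accumulated slip deficit by 2015 would be roughly ten meters (a rough illustration only; real faults accumulate and release strain less uniformly).

```python
# Back-of-envelope slip deficit on a fully locked patch of the
# Main Himalayan Thrust (figures from the article)
convergence_rate_m_per_yr = 0.02      # ~2 cm/yr of India-Eurasia convergence
years_since_1505 = 2015 - 1505        # no major rupture since the 1505 event

slip_deficit_m = convergence_rate_m_per_yr * years_since_1505
print(round(slip_deficit_m, 1))  # 10.2
```

Ten meters of stored slip is in the range associated with magnitude-8-class megathrust events, which is why the unbroken locked section remains a concern.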
"The Gorkha earthquake didn't do the job of transferring deformation all the way to the front of the Himalaya," says Avouac. "So the Himalaya could certainly generate larger earthquakes in the future, but we have no idea when."
The epicenter of the April 25 event was located in the Gorkha District of Nepal, 75 kilometers to the west-northwest of Kathmandu, and propagated eastward at a rate of about 2.8 kilometers per second, causing slip in the north-south direction—a progression that the researchers describe as "unzipping" a section of the locked fault.
"With the geological context in Nepal, this is a place where we expect big earthquakes. We also knew, based on GPS measurements of the way the plates have moved over the last two decades, how 'stuck' this particular fault was, so this earthquake was not a surprise," says Jean Paul Ampuero, assistant professor of seismology at Caltech and coauthor on the Nature Geoscience paper. "But with every earthquake there are always surprises."
[Video: Propagation of April 2015 Mw 7.8 Gorkha Earthquake]
In this case, one of the surprises was that the quake did not rupture all the way to the surface. Records of past earthquakes on the same fault—including a powerful one (possibly as strong as magnitude 8.4) that shook Kathmandu in 1934—indicate that ruptures have previously reached the surface. But Avouac, Ampuero, and their colleagues used satellite Synthetic Aperture Radar data and a technique called back projection that takes advantage of the dense arrays of seismic stations in the United States, Europe, and Australia to track the progression of the earthquake, and found that it was quite contained at depth. The high-frequency waves that were largely produced in the lower section of the rupture occurred at a depth of about 15 kilometers.
"That was good news for Kathmandu," says Ampuero. "If the earthquake had broken all the way to the surface, it could have been much, much worse."
The researchers note, however, that the Gorkha earthquake did increase the stress on the adjacent portion of the fault that remains locked, closer to Kathmandu. It is unclear whether this additional stress will eventually trigger another earthquake or if that portion of the fault will "creep," a process that allows the two plates to move slowly past one another, dissipating stress. The researchers are building computer models and monitoring post-earthquake deformation of the crust to try to determine which scenario is more likely.
Another surprise from the earthquake, one that explains why many of the homes and other buildings in Kathmandu were spared, is described in the Science Express paper. Avouac and his colleagues found that for such a large-magnitude earthquake, high-frequency shaking in Kathmandu was actually relatively mild. And it is high-frequency waves, with short periods of vibration of less than one second, that tend to affect low-story buildings. The Nature Geoscience paper showed that the high-frequency waves that the quake produced came from the deeper edge of the rupture, on the northern end away from Kathmandu.
The GPS records described in the Science Express paper show that within the zone that experienced the greatest amount of slip during the earthquake—a region south of the sources of high-frequency waves and closer to Kathmandu—the onset of slip on the fault was actually very smooth. It took nearly two seconds for the slip rate to reach its maximum value of one meter per second. In general, the more abrupt the onset of slip during an earthquake, the more energetic the radiated high-frequency seismic waves. So the relatively gradual onset of slip in the Gorkha event explains why this patch, which experienced a large amount of slip, did not generate many high-frequency waves.
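That relation between onset abruptness and high-frequency radiation can be illustrated with a toy spectral calculation, assuming simple triangular slip-rate pulses (a standard idealization, not the actual source time function of the Gorkha event): two pulses carry the same total slip, but the shorter, more abrupt one holds far more of its energy above 1 Hz.

```python
import math

# Toy illustration: two slip-rate pulses with the same total slip (area)
# but different onset abruptness. The shorter pulse carries more energy
# at high frequencies. Shapes and numbers are illustrative.
def triangle_pulse(t, duration):
    """Symmetric triangular slip-rate pulse with unit total slip."""
    if 0 <= t <= duration:
        peak = 2.0 / duration          # height chosen so the area is 1
        half = duration / 2.0
        return peak * (1 - abs(t - half) / half)
    return 0.0

def spectral_amplitude(duration, freq, dt=0.001, t_max=4.0):
    """|Fourier transform| of the pulse at one frequency, by direct sum."""
    re = im = 0.0
    t = 0.0
    while t < t_max:
        v = triangle_pulse(t, duration)
        re += v * math.cos(2 * math.pi * freq * t) * dt
        im -= v * math.sin(2 * math.pi * freq * t) * dt
        t += dt
    return math.hypot(re, im)

def band_energy(duration, freqs):
    return sum(spectral_amplitude(duration, f) ** 2 for f in freqs)

high_freqs = [1.0, 2.0, 3.0, 4.0, 5.0]   # "high" frequencies, Hz
smooth = band_energy(2.0, high_freqs)     # ~2 s onset, as observed
abrupt = band_energy(0.25, high_freqs)    # hypothetical abrupt onset
print(abrupt > smooth)  # the abrupt pulse radiates more high frequencies
```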
"It would be good news if the smooth onset of slip, and hence the limited induced shaking, were a systematic property of the Himalayan megathrust fault, or of megathrust faults in general," says Avouac. "Based on observations from this and other megathrust earthquakes, this is a possibility."
In contrast to what they saw with high-frequency waves, the researchers found that the earthquake produced an unexpectedly large amount of low-frequency waves with longer periods of about five seconds. This longer-period shaking was responsible for the collapse of taller structures in Kathmandu, such as the Dharahara Tower, a 60-meter-high tower that survived larger earthquakes in 1833 and 1934 but collapsed completely during the Gorkha quake.
To understand this, consider plucking the strings of a guitar. Each string resonates at a certain natural frequency, or pitch, depending on its length, composition, and tension. Likewise, buildings and other structures have a natural frequency of shaking at which they resonate; in general, the taller the building, the longer the period at which it resonates. If a strong earthquake causes the ground to shake at a frequency that matches a building's natural frequency, the shaking is amplified within the building, and the structure can be severely damaged or even collapse.
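A common engineering rule of thumb (an approximation, not a figure from the study) puts a building's fundamental period at roughly 0.1 seconds per story, which makes the height dependence easy to sketch:

```python
# Rule of thumb from earthquake engineering: a building's fundamental
# period is roughly 0.1 s per story. This sketch flags which buildings
# sit near a given ground-motion period; values are illustrative.
def natural_period(stories):
    return 0.1 * stories  # seconds, rule-of-thumb estimate

def near_resonance(stories, ground_period, tolerance=0.35):
    """True if the building's period is within `tolerance` (as a
    fraction) of the period of the incoming ground motion."""
    t = natural_period(stories)
    return abs(t - ground_period) / ground_period <= tolerance

ground_period = 5.0  # seconds: the long-period shaking in Kathmandu
for stories in (3, 10, 50):
    print(stories, near_resonance(stories, ground_period))
# A 3-story house (~0.3 s period) is far from resonance with 5 s waves;
# a ~50-story-equivalent structure (~5 s period) is close to it.
```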
Turning to the records from two of Avouac's GPS stations in the Kathmandu Valley, the researchers found that the effect of the low-frequency waves was amplified by the geology of the Kathmandu basin. The basin is an ancient lakebed that is now filled with relatively soft sediment. For about 40 seconds after the earthquake, seismic waves were trapped within the basin and continued to reverberate, ringing like a bell with a period of about five seconds.
"That's just the right frequency to damage tall buildings like the Dharahara Tower because it's close to their natural period," Avouac explains.
In follow-up work, Domniki Asimaki, professor of mechanical and civil engineering at Caltech, is examining the details of the shaking experienced throughout the basin. On a recent trip to Kathmandu, she documented very little damage to low-story buildings throughout much of the city but identified a pattern of intense shaking experienced at the edges of the basin, on hilltops or in the foothills where sediment meets the mountains. This was largely due to the resonance of seismic waves within the basin.
Asimaki notes that Los Angeles is also built atop sedimentary deposits and is surrounded by hills and mountain ranges that would also be prone to this type of increased shaking intensity during a major earthquake.
"In fact," she says, "the buildings in downtown Los Angeles are much taller than those in Kathmandu and therefore resonate with a much lower frequency. So if the same shaking had happened in L.A., a lot of the really tall buildings would have been challenged."
That points to one of the reasons it is important to understand how the land responded to the Gorkha earthquake, Avouac says. "Such studies of the site effects in Nepal provide an important opportunity to validate the codes and methods we use to predict the kind of shaking and damage that would be expected as a result of earthquakes elsewhere, such as in the Los Angeles Basin."
The Nepal Geodetic Array was funded by Caltech, the Gordon and Betty Moore Foundation, and the National Science Foundation. Additional funding for the Science study came from the Department for International Development (UK), the Royal Society (UK), the United Nations Development Programme, the Nepal Academy of Science and Technology, and NASA.
A few seconds may not seem like much, but it is enough time to turn off a stove, open an elevator door, or take cover under a desk. And before an earthquake strikes, a few seconds of warning can save lives. The U.S. Geological Survey aims to provide those seconds of warning with ShakeAlert, an earthquake early-warning system now being tested on the West Coast of the United States. On July 30, the USGS announced approximately $4 million in awards to Caltech, UC Berkeley, the University of Washington, and the University of Oregon for the expansion and improvement of the ShakeAlert system.
"Caltech's role in ShakeAlert will focus on research and development of the system so that future versions will be faster and more reliable," says Thomas Heaton (PhD '78), professor of engineering seismology and director of Caltech's Earthquake Engineering Research Laboratory. "We currently collect data from approximately 400 seismic stations throughout California. The USGS grant will allow Caltech to upgrade or install new stations in strategic locations that will significantly improve the performance of ShakeAlert."
Earthquakes radiate two kinds of seismic waves: fast-moving and often harmless P-waves, followed by slower S-waves, which can cause strong ground shaking. A network of seismometers called the California Integrated Seismic Network (CISN) streams data to processing centers in near-real time, where several algorithms quickly pinpoint the earthquake's epicenter and estimate its strength. ShakeAlert analyzes the first P-waves in the CISN data streams and sends out digital alerts, providing the "early warning" to a region before the slower, destructive S-waves arrive.
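The physics sets the warning budget: the gap between P- and S-wave arrivals grows with distance from the epicenter, and the system's detection and processing latency is subtracted from it. A rough sketch, assuming typical crustal wave speeds and a hypothetical latency of a few seconds (neither figure is from ShakeAlert's specifications):

```python
# Rough early-warning budget: the S-minus-P travel-time difference at a
# given distance, minus detection/processing latency, is the usable
# warning. Wave speeds are typical crustal values; latency is assumed.
VP, VS = 6.0, 3.5        # km/s, typical crustal P- and S-wave speeds

def warning_seconds(distance_km, latency_s=4.0):
    """Approximate seconds between alert and S-wave arrival."""
    s_p_gap = distance_km / VS - distance_km / VP
    return max(0.0, s_p_gap - latency_s)

for d in (20, 60, 120):
    print(d, round(warning_seconds(d), 1))
# Close to the epicenter the budget can vanish entirely (the "blind
# zone"); at greater distances there are several seconds to act.
```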
While predicting when and where an earthquake will occur is impossible, an early-warning system can give people crucial seconds to prepare. Current beta-test users receive alerts as a pop-up on their computers, displaying a map of the affected region, the amount of time until shaking begins, the estimated magnitude of the quake, and other data. In the future, alerts may be available through text messages and phone apps.
Though still technically in its testing stages, ShakeAlert has already provided successful warnings. In August 2014, the system gave the city of San Francisco a nine-second warning during the magnitude 6.0 South Napa earthquake. In May, during a magnitude 3.8 quake in Los Angeles, an alert was issued before the S-waves had even reached Earth's surface.
"With this new USGS funding, we will be able to add 20 new sensors to CISN, making coverage more robust and thus lengthening warning times," says Egill Hauksson, a research professor of geophysics and a principal investigator along with Heaton on the ShakeAlert project. "Caltech and its partners will be able to continue the high-quality seismological research that is such a necessary foundation for a reliable earthquake early-warning system."
In 2011, Caltech, along with UC Berkeley and the University of Washington, Seattle, received $6 million from the Gordon and Betty Moore Foundation for the research and development of ShakeAlert.
Trustees Gordon (PhD '54) and Betty Moore have pledged $100 million to Caltech, the second-largest single contribution in the Institute's history. With this gift, they have created a permanent endowment and entrusted the choice of how to direct the funds to the Institute's leadership—providing lasting resources coupled with uncommon freedom.
"Those within the Institute have a much better view of what the highest priorities are than we could have," Intel Corporation cofounder Gordon Moore explains. "We'd rather turn the job of deciding where to use resources over to Caltech than try to dictate it from outside."
Applying the Moores' donation in a way that will strengthen the Institute for generations to come, Caltech's president and provost have decided to dedicate the funds to fellowships for graduate students.
"Gordon and Betty Moore's incredibly generous gift will have a transformative effect on Caltech," says President Thomas F. Rosenbaum, holder of the Institute's Sonja and William Davidow Presidential Chair and professor of physics. "Our ultimate goal is to provide fellowships for every graduate student at Caltech, to free these remarkable young scholars to pursue their interests wherever they may lead, independent of the vicissitudes of federal funding. The fellowships created by the Moores' gift will help make the Institute the destination of choice for the most original and creative scholars, students and faculty members alike."
Further multiplying the impact of the Moores' contribution, the Institute has established a program that will inspire others to contribute as well. The Gordon and Betty Moore Graduate Fellowship Match will provide one additional dollar for every two dollars pledged to endow Institute-wide fellowships. In this way, the Moores' $100 million commitment will increase fellowship support for Caltech by a total of $300 million.
Says Provost Edward M. Stolper, the Carl and Shirley Larson Provostial Chair and William E. Leonhard Professor of Geology: "Investigators across campus work with outstanding graduate students to advance discovery and to train the next generation of teachers and researchers. By supporting these students, the Moore Match will stimulate creativity and excellence in perpetuity all across Caltech. We are grateful to Gordon and Betty for allowing us the flexibility to devote their gift to this crucial priority."
The Moores describe Caltech as a one-of-a-kind institution in its ability to train budding scientists and engineers and conduct high-risk research with world-changing results—and they are committed to helping the Institute maintain that ability far into the future.
"We appreciate being able to support the best science," Gordon Moore says, "and that's something that supporting Caltech lets us do."
The couple's extraordinary philanthropy already has motivated other benefactors to follow their example, notes David L. Lee, chair of the Caltech Board of Trustees.
"The decision that Gordon and Betty made—to give such a remarkable gift, to make it perpetual through an endowment, and to remove any restrictions as to how it can be used—creates a tremendous ripple effect," Lee says. "Others have seen the Moores' confidence in Caltech and have made commitments of their own. We thank the Moores for their leadership."
The Moores consider their gift a high-leverage way of fostering scientific research at a place that is close to their hearts. Before he went on to cofound Intel, Gordon Moore earned a PhD in chemistry from Caltech.
"It's been a long-term association that has served me well," he says.
Joining him in Pasadena just a day after the two were married, Betty Moore became active in the campus community as well. A graduate of San Jose State College's journalism program, she secured a job at the Ford Foundation's new Pasadena headquarters and also made time to come to campus to participate in community activities, including the Chem Wives social club.
"We started out at Caltech," she recalls. "I had a feeling that it was home away from home. It gives you a down-home feeling when you're young and just taking off from family. You need that connection somehow."
After earning his PhD from Caltech in 1954, Gordon Moore took a position conducting basic research at the Applied Physics Laboratory at Johns Hopkins University. Fourteen years and two jobs later, he and his colleague Robert Noyce cofounded Intel Corp. Moore served as executive vice president of the company until 1975, when he took the helm. Under his leadership—as chief executive officer (1975 to 1987) and chairman of the board (1987 to 1997)—Intel grew from a Mountain View-based startup to a giant of Silicon Valley, worth more than $140 billion today.
Moore is widely known for "Moore's Law," his 1965 prediction that the number of transistors that could fit on a chip would double every year. Still relevant 50 years later, this principle pushed Moore and his company—and the tech industry as a whole—to produce ever more powerful and cheaper semiconductor chips.
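The compounding behind that prediction is easy to work out. (Moore's original 1965 forecast was a doubling every year; in 1975 he revised the pace to roughly every two years, the figure used below.)

```python
# The compounding behind Moore's prediction: transistor counts double
# every `doubling_period` years. The two-year period is Moore's own
# 1975 revision of his 1965 annual-doubling forecast.
def transistors(start_count, years, doubling_period):
    return start_count * 2 ** (years / doubling_period)

# 50 years of doubling every two years is a factor of 2**25.
growth = transistors(1, 50, 2) / transistors(1, 0, 2)
print(f"{growth:,.0f}")  # 2**25 = 33,554,432
```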
Gordon Moore joined the Caltech Board of Trustees in 1983 and served as chair from 1993 to 2000. In 2000, he and Betty established the Gordon and Betty Moore Foundation, an organization dedicated to creating positive outcomes for future generations in the San Francisco Bay Area and around the world.
Among numerous other honors, Gordon Moore is a member of the National Academy of Engineering, a fellow of the Institute of Electrical and Electronics Engineers, and a recipient of the National Medal of Technology and the Presidential Medal of Freedom.
The Gordon and Betty Moore Graduate Fellowship Match is available for new gifts and pledges to endow graduate fellowships. For more information about the match and how to support graduate education at Caltech, please email firstname.lastname@example.org or call (626) 395-4863.
July 14 marks 50 years of visual reconnaissance of the solar system by NASA's Jet Propulsion Laboratory (JPL), beginning with Mariner 4's flyby of Mars in 1965.
Among JPL's first planetary efforts, Mariners 3 and 4 (known collectively as "Mariner Mars") were planned and executed by a group of pioneering scientists at Caltech in partnership with JPL. NASA was only four years old when the first Mars flyby was approved in 1962, but the core science team had been working together at Caltech for many years. The team included Caltech faculty Robert Sharp (after whom Mount Sharp, the main target of the Mars rover Curiosity, is named) and Gerry Neugebauer, professors of geology and of physics, respectively; Robert Leighton and H. Victor Neher, professors of physics; and Bill Pickering, professor of electrical engineering, who was the director of JPL from 1954 to 1976. Rounding out the Caltech contingent was a young Bruce Murray, a new addition to the geology faculty, who would follow Pickering as JPL director in 1976.
"The Mariner missions marked the beginning of planetary geology, led by researchers at Caltech including Bruce Murray and Robert Sharp," said John Grotzinger, the Fletcher Jones Professor of Geology and chair of the Division of Geological and Planetary Sciences. "These early flyby missions showed the enormous potential of Mars to provide insight into the evolution of a close cousin to Earth and stimulated the creation of a program dedicated to iterative exploration involving orbiters, landers, and rovers."
By today's standards, Mariner Mars was a leap into the unknown. NASA and JPL had little spaceflight experience to guide them. There had been just one successful planetary mission—Mariner 2's journey past Venus in 1962—to build upon. Sending spacecraft to other planets was still a new endeavor.
The Mariner Mars spacecraft were originally designed without cameras. Neugebauer, Murray, and Leighton felt that a lot of science questions could be answered via images from this close encounter with Mars. As it turned out, sending back photos of the planet that had so long captured the imaginations of millions had the added benefit of making the Mars flyby more accessible to the public.
Mariner 3 launched on November 5, 1964. The Atlas rocket that boosted it clear of the atmosphere functioned perfectly (not always the case in the early years of spaceflight), but the shroud enclosing the payload failed to fully open and the spacecraft, unable to collect sunlight on its solar panels, ceased to function after about nine hours of flight.
Mariner 4 launched three weeks later on November 28 with a redesigned shroud. The probe deployed as planned and began its journey to Mars. But there was still drama in store for the mission. Within the first hour of the flight, the rocket's upper stage had pushed the spacecraft out of Earth orbit, and the solar panels had deployed. Then the guidance system acquired a lock on the sun, but a second object was needed to guide the spacecraft. This depended on a photocell finding the bright star Canopus, which was attempted about 15 hours later. During these first attempts, however, the primitive onboard electronics erroneously identified other stars of similar brightness.
Controllers managed to solve this problem but over the next few weeks realized that a small cloud of dust and paint flecks, ejected when Mariner 4 deployed, was traveling along with the spacecraft and interfering with the tracking of Canopus. A tiny paint chip, if close enough to the star tracker, could mimic the star. After more corrective action, Canopus was reacquired and Mariner's journey continued largely without incident. This star-tracking technology, along with many other design features of the spacecraft, has been used in every interplanetary mission JPL has flown since.
At the time, what was known about Mars had been learned from Earth-based telescopes. The images were fuzzy and indistinct—at its closest, Mars is still about 35 million miles distant. Scientific measurements derived from visual observations of the planet were inexact. While ideas about the true nature of Mars evolved throughout the first half of the 20th century, in 1965 nobody could say with any confidence how dense the martian atmosphere was or determine its exact composition. Telescopic surveys had recorded a visual event called the "wave of darkening," which some scientists theorized could be plant life blooming and perishing as the harsh martian seasons changed. A few of them still thought of Mars as a place capable of supporting advanced life, although most thought it unlikely. However, there was no conclusive evidence for either scenario.
So, as Mariner 4 flew past Mars, much was at stake, both for the scientific community and a curious general public. Were there canals or channels on the surface, as some astronomers had reported? Would we find advanced life forms or vast collections of plant life? Would there be liquid water on the surface?
Just over seven months after launch, the encounter with Mars was imminent. On July 14, 1965, Mariner's science instruments were activated. These included a magnetometer to measure magnetic fields, a Geiger counter to measure radiation, a cosmic ray telescope, a cosmic dust detector, and the television camera.
About seven hours before the encounter, the TV camera began acquiring images. After the probe passed Mars, an onboard data recorder—which used a 330-foot endless loop of magnetic tape to store the still pictures—played the raw images back to Earth, transmitting each one twice for redundancy. Each image took about 10 hours to transmit.
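Those figures imply a strikingly low data rate. Assuming the commonly cited Mariner 4 image format of 200 × 200 pixels at 6 bits per pixel (an assumption here, not stated above), a 10-hour transmission works out to well under 10 bits per second:

```python
# Back-of-the-envelope data rate implied by the article's figures.
# The image dimensions (200 x 200 pixels, 6 bits per pixel) are the
# commonly cited Mariner 4 format, assumed here for illustration.
pixels = 200 * 200
bits_per_pixel = 6
bits_per_image = pixels * bits_per_pixel      # 240,000 bits per image
transmit_seconds = 10 * 3600                  # ~10 hours per image
rate_bps = bits_per_image / transmit_seconds
print(round(rate_bps, 2))  # roughly 6.7 bits per second
```

For comparison, even an early dial-up modem moved data tens of times faster; every pixel from Mars was hard-won.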
The 22 images sent by Mariner 4 appeared conclusive. Although they were low-resolution and black-and-white, they indicated that Mars was not a place likely to be friendly to life. It was a cold, dry desert, covered with so many craters as to strongly resemble Earth's moon. The atmospheric density was about one-thousandth that of Earth, and no liquid water was apparent on the surface.
When discussing the mission during an interview at Caltech in 1977, Leighton recalled viewing the first images at JPL. "If someone had asked 'What do you expect to see?' we would have said 'craters'…[yet] the fact that craters were there, and a predominant land form, was somehow surprising."
Leighton also recalled a letter he received from, of all people, a dairy farmer. It read, "I'm not very close to your world, but I really appreciate what you are doing. Keep it going." Leighton said of the sentiment, "A letter from a milkman…I thought that was kind of nice."
After its voyage past Mars, Mariner 4 maintained intermittent communication with JPL and returned data about the interplanetary environment for two more years. But by the end of 1967, the spacecraft had suffered tens of thousands of micrometeoroid impacts and was out of the nitrogen gas it used for maneuvering. The mission officially ended on December 21.
"Mariner 4 defined and pioneered the systems and technologies needed for a truly interplanetary spacecraft," says Rob Manning (BS '81), JPL's chief engineer for the Low-Density Supersonic Decelerator and formerly chief engineer for the Mars Science Laboratory. "All U.S. interplanetary missions that have followed were directly derived from the architecture and innovations that engineers behind Mariner invented. We stand on the shoulders of giants."
Joseph Shepherd (PhD '81), the C. L. "Kelly" Johnson Professor of Aeronautics and professor of mechanical engineering, is leaving his post as dean of graduate studies to succeed Anneila Sargent (MS '67, PhD '78), the Ira S. Bowen Professor of Astronomy, as vice president for student affairs. Shepherd's new role is effective September 15.
Sargent, who led student affairs for the past eight years, announced in March that she was leaving the post to return to research and teaching full time. Shepherd, who joined the Caltech faculty in 1993, has served as dean of graduate studies for the past six years.
We recently sat down with Shepherd to talk about his past role and his new one, his strengths and goals, and his experience at Caltech.
Q: What does the vice president for student affairs do?
A: Student Affairs includes the offices of the undergraduate and graduate deans as well as obvious things like the registrar, undergraduate admissions, fellowships and study abroad, the career center, the health center, and the counseling center. It also includes things you might not think of—athletics; performing and visual arts, which includes the music programs, the theater program, the various arts programs, and all of the faculty and instructors that make these programs possible; and a whole group of organizations lumped under "auxiliaries."
The term "auxiliaries" is misleading, because they're central to student life. Housing and dining are the biggest parts, but there are also services like the C-Store, the Red Door Café, the Caltech Store, and Wired.
Q: What makes this role exciting for you?
A: People speculate about what it is that makes Caltech a great school. A lot of folks say, "Well, it's because it's so small." But I think it's also because we work with people instead of creating some bureaucratic mechanism to solve problems. We say, "All right, what's the issue here? How can we resolve this?" instead of, "We need to create a rule. And then we need to create a group to enforce the rule." My approach is to ask, "What do we want the outcome to be?" In Student Affairs, you want the outcome to be something that supports the students, supports the faculty, and then you make sure that it's not going to adversely affect the Institute.
Q: Are there any changes coming, any initiatives you want to establish?
A: We need to think about how we build on the strengths we have and improve the things that we're weakest at. Before you make any changes to an organization, you need to understand those two things. There are a lot of parts to Student Affairs, so I need to understand the strong points of those organizations, and then get them to help me formulate what's important to do.
You always have to be careful of unintended consequences. As they say in chess, you want to think several moves deep. All right, suppose we do that. What will it mean for different parts of our population? Do we make this choice based on the data we have, or do we need more data? Will there be effects on people we haven't thought about? Maybe we need to go talk to those people.
When you have the authority to change things, you also have the responsibility to ask, "Are these the right changes?" Nothing happens in isolation. Anything you do is invariably going to wind up touching quite a few people.
Q: You've been dean of graduate studies since 2009. Did you consider taking a breather before jumping into this?
A: Well, much to my surprise, I found that being the dean of graduate studies was rewarding in many different ways. Sometimes you had to do some difficult things, but I actually liked being the dean. I was able, to some extent, to continue my research. I did some teaching—although last year I taught a major course all three terms, and I had my research group—and I was the dean of graduate studies. That taught me a lesson: a man's got to know his limitations.
So when I was asked if I would take this position, I did think about taking a break and not doing it. I enjoy my research and I enjoy teaching. I enjoy working with students, but I also enjoy trying to help the Institute as a whole. Here at Caltech, we pride ourselves on the notion that we have this very special environment. We have this small school, and we have dedicated professionals that work together with faculty to nurture that environment—having faculty who are invested in participating in the key administrative roles is essential.
When I was a graduate student here, my adviser was Brad Sturtevant [MS '56, PhD '60, and a lifelong faculty member thereafter]. Brad was the executive officer for aeronautics [1972-76]. He was in charge of the committee that built the Sherman Fairchild Library and he was on the faculty board. He emphasized to me that being involved in administration was just as valuable as all the other aspects of being a faculty member. He was a dedicated researcher, but he also felt strongly that you should be a good citizen. You should contribute.
Q: It seems like this is more than just a duty to you, though.
A: I'm looking forward to it. I'm also very conscious of the responsibility. I think it's going to be important for us all to think about how we maintain the excellence of the Institute and that we imagine how this place is going to evolve. As society evolves around us, we will naturally wind up changing. We need to do that in a thoughtful way so that we continue to be the special organization that we are.
At the end of the day, I'm counting on help from the faculty and staff. Caltech works because of the committed individuals within our organizations, the personal connections we form as we work together and the cooperation across the campus that these connections enable. It's a collective enterprise.
I think administration is not something that's done to people. It's being responsible for making sure that folks have the right work environment, the right job assignments, and the right resources. It's making sure we're doing the right things with the finite resources we have. One of our former presidents said something that's always stuck with me: an administrator's goals are not about their own career so much as helping the careers of others. You need to think about how you're helping the people working for you, because they have goals and aspirations. That's where you take your satisfaction.