Caltech geochemist Clair Patterson (1922–1995) helped galvanize the environmental movement 50 years ago when he announced that highly toxic lead could be found essentially everywhere on Earth, including in our own bodies—and that very little of it was due to natural causes.
In a paper published in the September 1965 issue of Archives of Environmental Health, Patterson challenged the prevailing belief that industrial and natural sources contributed roughly equal amounts of ingestible lead, and that the aggregate level we absorbed was safe. Instead, he wrote, "A new approach to this matter suggests that the average resident of the United States is being subjected to severe chronic lead insult." He estimated that our "lead burden" was as much as 100 times that of our preindustrial ancestors, in some cases approaching the threshold of acute toxicity.
Lead poisoning was known to the ancients. Vitruvius, designer of aqueducts for Julius Caesar, wrote in Book VIII of De Architectura that "water is much more wholesome from earthenware pipes than from lead pipes . . . [water] seems to be made injurious by lead." Lead accumulates in the body, where it can have profound effects on the central nervous system. Children exposed to high lead levels often acquire permanent learning disabilities and behavioral disorders.
When Patterson arrived at Caltech as a research fellow in geochemistry in 1952, he was looking not to save the world but to figure out how old it was. Doing so required him to measure the precise amounts of various isotopes of uranium and lead. (Isotopes are atoms of the same element that contain different numbers of neutrons in their nuclei.) Uranium-238 decays very, very slowly into lead-206, while uranium-235 decays less slowly into lead-207. Both rates are well known, so measuring the ratios of lead atoms to uranium ones shows how much uranium has disappeared and allows the sample's age to be calculated.
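The arithmetic behind this dating method can be sketched in a few lines. Assuming for simplicity that a sample started with no radiogenic lead (Patterson's real analysis was more involved, combining lead-isotope ratios from several meteorites), the measured daughter-to-parent ratio fixes the age; `age_from_ratio` is an illustrative name, not a function from any published work:

```python
import math

# Decay constants (per year) for the two uranium decay chains
LAMBDA_U238 = 1.55125e-10  # U-238 -> Pb-206, half-life ~4.47 Gyr
LAMBDA_U235 = 9.8485e-10   # U-235 -> Pb-207, half-life ~704 Myr

def age_from_ratio(daughter_per_parent, decay_constant):
    """Age in years from the measured ratio of radiogenic lead atoms
    to remaining uranium atoms: D/P = exp(lambda * t) - 1."""
    return math.log(1.0 + daughter_per_parent) / decay_constant

# A sample in which roughly one Pb-206 atom has accumulated for every
# remaining U-238 atom is about 4.55 billion years old.
print(age_from_ratio(1.0255, LAMBDA_U238) / 1e9)  # ~4.55 (Gyr)
```

Because the two uranium isotopes decay at very different rates, the two chains give independent clocks; agreement between them is a built-in consistency check on the age.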
Patterson presumed that the inner solar system's rocky planets and meteorites had all coalesced at the same time, and that the meteorites had survived essentially unchanged ever since. Using an instrument called a mass spectrometer and working in a clean room he had designed and built himself, Patterson counted the individual lead atoms in a meteorite sample recovered from Canyon Diablo near Meteor Crater, Arizona. In a landmark paper published in 1956, he established Earth's age as 4.55 billion years.
However, there are four common isotopes of lead, and Patterson had to take them all into account in his calculations. He had announced his findings at a conference in 1955, and he had continued to refine his results as the paper worked its way through the review process. But there he hit a snag—his analytical skills had become so finely honed that he was finding lead everywhere. He needed to know the source of this contamination in order to eliminate it, and he took it on himself to find out.
Patterson's 1965 Environmental Health paper summarized that work. With M. Tatsumoto of the U.S. Geological Survey, he found that the ocean off southern California was lead-laden at the surface but that the contamination disappeared rapidly with depth. They concluded that the likely culprit was tetraethyl lead, a widespread gasoline additive that emerged from the tailpipes of automobiles as very fine lead particles. Patterson and research fellow T. J. Chow crisscrossed the Pacific aboard research vessels run by the Scripps Institution of Oceanography at UC San Diego and found the same profile of lead levels versus depth. Then, in the winter of 1962–63, Patterson and Tatsumoto collected snow at an altitude of 7,000 feet on Mount Lassen in northern California. The lead contamination there was 10 to 100 times worse than at sea. Patterson concluded that it had fallen from the skies. Its isotopic fingerprint was a perfect match for air samples from Los Angeles—located 500 miles to the south. It also matched gasoline samples obtained by Chow in San Diego. Furthermore, the isotope fingerprint was different from that of lead found in prehistoric sediments off the California coast.
"The atmosphere of the northern hemisphere contains about 1,000 times more than natural amounts of lead," Patterson wrote, and he called for the "elimination of some of the most serious sources of lead pollution such as lead alkyls [i.e., tetraethyl lead], insecticides, food can solder, water service pipes, kitchenware glazes, and paints; and a reevaluation by persons in positions of responsibility in the field of public health of their role in the matter."
Patterson's paper was his first shot in the war against lead pollution, bureaucratic inertia, and big business that he would wage for the rest of his life. He won: the Clean Air Act of 1970 authorized the development of national air-quality standards, including emission controls on cars. In 1976, the Environmental Protection Agency reported that more than 100,000 tons of lead went into gasoline every month; by 1980 that figure would be less than 50,000 tons, and the concentration of lead in the average American's blood would drop by nearly 50 percent as well. The Consumer Product Safety Commission would ban lead-based indoor house paints in 1977 (flakes containing brightly colored lead pigments often found their way into children's mouths). And in 1986, the EPA prohibited tetraethyl lead in gasoline.
Good communication is crucial to any relationship, especially when partners are separated by distance. This also holds true for microbes in the deep sea that need to work together to consume large amounts of methane released from vents on the ocean floor. Recent work at Caltech has shown that these microbial partners can still accomplish this task, even when not in direct contact with one another, by using electrons to share energy over long distances.
This is the first time that direct interspecies electron transport—the movement of electrons from a cell, through the external environment, to another cell type—has been documented in microorganisms in nature.
The results were published in the September 16 issue of the journal Nature.
"Our lab is interested in microbial communities in the environment and, specifically, the symbiosis—or mutually beneficial relationship—between microorganisms that allows them to catalyze reactions they wouldn't be able to do on their own," says Professor of Geobiology Victoria Orphan, who led the recent study. For the last two decades, Orphan's lab has focused on the relationship between a species of bacteria and a species of archaea that live in symbiotic aggregates, or consortia, within deep-sea methane seeps. The organisms work together in syntrophy (which means "feeding together") to consume up to 80 percent of methane emitted from the ocean floor—methane that might otherwise end up contributing to climate change as a greenhouse gas in our atmosphere.
Previously, Orphan and her colleagues contributed to the discovery of this microbial symbiosis, a cooperative partnership between methane-oxidizing archaea called anaerobic methanotrophs (or "methane eaters") and sulfate-reducing bacteria (organisms that can "breathe" sulfate instead of oxygen) that allows these organisms to consume methane using sulfate from seawater. However, it was unclear how these cells share energy and interact within the symbiosis to perform this task.
Because these microorganisms grow slowly (reproducing only four times per year) and live in close contact with each other, it has been difficult for researchers to isolate them from the environment to grow them in the lab. So, the Caltech team used a research submersible, called Alvin, to collect samples containing the methane-oxidizing microbial consortia from deep-ocean methane seep sediments and then brought them back to the laboratory for analysis.
The researchers used different fluorescent DNA stains to mark the two types of microbes and view their spatial orientation in consortia. In some consortia, Orphan and her colleagues found the bacterial and archaeal cells were well mixed, while in other consortia, cells of the same type were clustered into separate areas.
Orphan and her team wondered if the variation in the spatial organization of the bacteria and archaea within these consortia influenced their cellular activity and their ability to cooperatively consume methane. To find out, they applied a stable isotope "tracer" to evaluate metabolic activity. The amount of the isotope taken up by individual archaeal and bacterial cells within their microbial "neighborhoods" in each consortium was then measured with a high-resolution instrument called nanoscale secondary ion mass spectrometry (nanoSIMS) at Caltech. This allowed the researchers to determine how active the archaeal and bacterial partners were relative to their distance from one another.
To their surprise, the researchers found that the spatial arrangement of the cells in consortia had no influence on their activity. "Since this is a syntrophic relationship, we would have thought the cells at the interface—where the bacteria are directly contacting the archaea—would be more active, but we don't really see an obvious trend. What is really notable is that there are cells that are many cell lengths away from their nearest partner that are still active," Orphan says.
To find out how the bacteria and archaea were partnering, co-first authors Grayson Chadwick (BS '11), a graduate student in geobiology at Caltech and a former undergraduate researcher in Orphan's lab, and Shawn McGlynn, a former postdoctoral scholar, employed spatial statistics to look for patterns in cellular activity for multiple consortia with different cell arrangements. They found that populations of syntrophic archaea and bacteria in consortia had similar levels of metabolic activity; when one population had high activity, the associated partner microorganisms were also equally active—consistent with a beneficial symbiosis. However, a close look at the spatial organization of the cells revealed that no particular arrangement of the two types of organisms—whether evenly dispersed or in separate groups—was correlated with a cell's activity.
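As a rough illustration of this kind of spatial analysis, the sketch below uses made-up cell coordinates and activity values (the study's actual statistics were more sophisticated, and `nearest_partner_distance` is an illustrative helper, not a function from the paper):

```python
import math

# Hypothetical consortium: (x, y) position in microns, cell type,
# and an isotope-tracer enrichment value as a proxy for activity.
cells = [
    (0.0, 0.0, "archaea",  0.91), (1.0, 0.2, "bacteria", 0.88),
    (2.1, 0.1, "archaea",  0.90), (5.5, 0.3, "archaea",  0.89),
    (6.0, 3.0, "bacteria", 0.87), (8.2, 0.4, "archaea",  0.92),
]

def nearest_partner_distance(i):
    """Distance from cell i to the closest cell of the other type."""
    x, y, kind, _ = cells[i]
    return min(math.hypot(x - xo, y - yo)
               for xo, yo, other, _ in cells if other != kind)

for i, (x, y, kind, activity) in enumerate(cells):
    d = nearest_partner_distance(i)
    print(f"{kind:8s} d={d:4.1f} um  activity={activity:.2f}")
```

The pattern the team actually observed corresponds to the flat activity column in a table like this one: cells many cell lengths from their nearest partner were just as active as cells in direct contact.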
To determine how these metabolic interactions were taking place even over relatively long distances, postdoctoral scholar and coauthor Chris Kempes, a visitor in computing and mathematical sciences, modeled the predicted relationship between cellular activity and the distance between syntrophic partners that depend on the molecular diffusion of a substrate. He found that models based on conventional metabolites previously thought to mediate this syntrophic consumption of methane, such as hydrogen, were inconsistent with the spatial activity patterns observed in the data. However, revised models indicated that electrons could likely make the trip from cell to cell across greater distances.
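The intuition behind ruling out diffusible metabolites can be sketched with a toy model (an assumption for illustration, not the published model): for a substrate released from a point-like source, the steady-state concentration falls off as 1/r, so the supply reaching a diffusion-coupled partner should drop steeply with separation.

```python
# Toy 1/r falloff for a substrate diffusing from a point-like producer
def relative_supply(r_um, r0_um=1.0):
    """Substrate supply at distance r, relative to the supply at r0."""
    return r0_um / r_um

profile = {r: round(relative_supply(r), 2) for r in (1, 2, 5, 10)}
print(profile)  # {1: 1.0, 2: 0.5, 5: 0.2, 10: 0.1}
```

A tenfold drop in supply across a consortium would predict a matching drop in partner activity with distance; the flat activity profiles actually measured are what pointed the team toward direct electron transfer instead.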
"Chris came up with a generalized model for the methane-oxidizing syntrophy based on direct electron transfer, and these model results were a better match to our empirical data," Orphan says. "This pointed to the possibility that these archaea were directly transferring electrons derived from methane to the outside of the cell, and those electrons were being passed to the bacteria directly."
Guided by this information, Chadwick and McGlynn looked for independent evidence to support the possibility of direct interspecies electron transfer. Cultured bacteria, such as those from the genus Geobacter, are model organisms for the direct electron transfer process. These bacteria use large proteins, called multi-heme cytochromes, on their outer surface that act as conductive "wires" for the transport of electrons.
Using genome analysis—along with transmission electron microscopy and a stain that reacts with these multi-heme cytochromes—the researchers showed that these conductive proteins were also present on the outer surface of the archaea they were studying. And that finding, Orphan says, can explain why the spatial arrangement of the syntrophic partners does not seem to affect their relationship or activity.
"It's really one of the first examples of direct interspecies electron transfer occurring between uncultured microorganisms in the environment. Our hunch is that this is going to be more common than is currently recognized," she says.
Orphan notes that the information they have learned about this relationship will help to expand how researchers think about interspecies microbial interactions in nature. In addition, the microscale stable isotope approach used in the current study can be used to evaluate interspecies electron transport and other forms of microbial symbiosis occurring in the environment.
In August 2015, more than 150 scientists interested in the exploration of Mars attended a conference at a hotel in Arcadia, California, to evaluate 21 potential landing sites for NASA's next Mars rover, a mission called Mars 2020. The design of that mission will be based on that of the Mars Science Laboratory (MSL), including the sky-crane landing system that helped put the rover, Curiosity, safely on martian soil.
Over the course of three days, the scientists heard presentations about the proposed sites and voted on the scientific merit of the locations. In the end, they arrived at a prioritized list of sites that offer the best opportunity for the mission to meet its objectives—including the search for signs of ancient life on the Red Planet and collecting and storing (or "caching") scientifically interesting samples for possible return to Earth.
We recently spoke with Ken Farley, the mission's project scientist and the W.M. Keck Foundation Professor of Geochemistry at Caltech, to talk about the workshop and how the Mars 2020 landing site selection process is shaping up.
Can you tell us a little bit about how these workshops help the project select a landing site?
We are using the same basic site selection process that has been used for previous Mars rovers. It involves heavy engagement from the scientific community because there are individual experts on specific sites who are not necessarily on the mission's science team.
We put out a call for proposals to suggest specific sites, and respondents presented at the workshop. We provided presenters with a one-page template on which to indicate the characteristics of their landing site—basic facts, like what minerals are present. This became a way to distill a presentation into something that you could evaluate objectively and relatively quickly. When people flashed these rubrics up at the end of their presentations, there was some interesting peer review going on in real time.
We went through all 21 sites, talking about what was at each location. In the end, we needed to boil down the input and get a sense of which sites the community was most interested in. So we used a scorecard that tied directly to the mission objectives; there were five criteria, and attendees were able to indicate how well they felt each site met each requirement by voting "low," "medium," or "high." Then we tallied up the votes.
You mentioned that the criteria on the scorecard were related to the objectives of the mission. What are those objectives?
We have four mission objectives. One is to prepare the way for human exploration of Mars. The rover will have a weather station and an instrument that converts atmospheric carbon dioxide into oxygen—it's called the in situ resource utilization (ISRU) payload. This is a way to make oxygen both for human consumption and, even more importantly, for propellant. In terms of the landing site process, this objective was not a driving factor because the ISRU and the weather station don't really care where they go.
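As a rough stoichiometric sketch (the interview doesn't give the ISRU design details, so the reaction below is an assumption based on CO2 electrolysis): splitting carbon dioxide as 2 CO2 → 2 CO + O2 returns at most about a third of the processed CO2 mass as oxygen.

```python
# Rough stoichiometry sketch, assuming CO2 electrolysis:
#   2 CO2 -> 2 CO + O2
M_CO2, M_O2 = 44.01, 32.00  # molar masses, g/mol

def o2_yield_kg(co2_kg):
    """Maximum O2 mass from a given CO2 mass at 100% conversion."""
    return co2_kg * M_O2 / (2 * M_CO2)

print(round(o2_yield_kg(1.0), 3))  # ~0.364 kg O2 per kg of CO2
```

The martian atmosphere is roughly 96 percent CO2, which is what makes this an attractive feedstock for producing propellant oxidizer in place rather than shipping it from Earth.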
And the other three objectives?
We call the three remaining objectives the "ABC" goals. A is to explore the landing site. That's a basic part of a geologic study—you look around and see what's there and try to understand the geologic processes that made it.
The B goal is to explore an "astrobiologically relevant environment," to look for rocks in habitable environments that have the ability to preserve biosignatures—evidence of past or present life—and then to look for biosignatures in those rocks. The phrase that NASA attaches to our mission is "Seeking the Signs of Life." We have a bunch of science instruments on the rover that will help us meet those objectives.
Then the C goal is to prepare a returnable cache of samples. The word "returnable" has a technical definition—the cache has to meet a bunch of criteria, and one is that it has to have enough scientific merit to return. Previous studies of what constitutes returnability have suggested we need a number of samples in the mid 30s—we use the number 37.
There is a reason for this oddly specific number: 37 is the maximum number of samples that can be packed into a circular honeycomb inside one possible design of the sample return assembly.
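The count follows from hexagonal packing: a central tube surrounded by concentric rings of 6, 12, and 18 tubes gives exactly 37 (these are the centered hexagonal numbers). A quick check:

```python
def centered_hexagonal(n_rings):
    """Tubes in a circular honeycomb: 1 center plus 6*k per ring k."""
    return 1 + sum(6 * k for k in range(1, n_rings + 1))

print([centered_hexagonal(n) for n in range(4)])  # [1, 7, 19, 37]
```

Three rings around a central tube is the largest such arrangement that fits the assembly design mentioned above, hence the cap at 37.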
The huge task for us is to be able to drill that many samples. We've learned from MSL that everything takes a long time. Driving takes a long time, drilling takes a long time. We have a very specific mandate that we have to be capable of collecting 20 samples in the prime mission. Collecting at least 20 samples will motivate what we do in designing the rover.
It also has motivated a lot of the discussion of landing sites. You've got to have targets you wish to drill that are close together, and they can't be a long drive from where you land. There also has to be diversity because you don't want 15 copies of the same sample.
After all of those factors were considered, what was the outcome of the voting?
What came out of it was an ordered list of eight sites. One interesting thing about that list was that the sites were divided roughly equally into two kinds—those that were crater lakes with deltas and those that we would broadly call hydrothermal sites. These are locations that the community believes are most likely to have ancient life in them and preserve the evidence of it.
It's easy to understand the deltas because if you look in the terrestrial environment, a delta is an excellent place to look for organic matter. The things that are living in the water above the delta and upstream are washed into the delta when they die. Then mud packs in on top and preserves that material.
What is interesting about hydrothermal systems?
A hydrothermal system is in some ways very appealing but in some ways risky. These are places where rocks are hot enough to heat water to extremely high temperatures. At hydrothermal vents on Earth's sea floor, you have these strange creatures that are essentially living off chemical energy from inside the planet. And, in fact, the oldest evidence for life on Earth may have been found in hydrothermal settings. The problem is these settings are precarious; when the water gets a little too hot, everything dies.
What is the heat source for the hydrothermal sites on Mars?
There are two important heat sources—one is impact and the other is volcanic. A whole collection of our top sites are in a region next to a giant impact crater, and when you look at those rocks, they have chemical and mineralogical characteristics that look like hydrothermal alteration.
A leading candidate of the volcanic type is a site in Gusev Crater called the Columbia Hills site, which the Spirit rover studied. The rover came across a silica deposit. At the time, scientists didn't really know what it was, but it is now thought that the silica is actually a product of volcanic activity called sinter. The presenter for the site showed pictures from Spirit of these little bits of sinter and then showed pictures of something that looks almost exactly the same from a geothermal field in Chile. It was a pretty compelling comparison. Then he went on to show that these environments on Earth are very conducive to life and that the little silica blobs preserve biosignatures well.
So although it would be an interesting decision to invest another mission in the same location, that site was favored because it's the only place where a mineral that might contain signs of ancient life is known to exist with certainty.
Do these two types of sites differ just in terms of their ancient environments?
No. It turns out that you can see most of the deltas from Mars's orbit because they are pretty much the last gasp of processing of the martian surface. They date to a period about 3.6 billion years ago when the planet transitioned from a warm, wet period to basically being desiccated. Some of the hydrothermal sites may have rocks that are in the 4-billion-year-old range. That age difference may not sound like much, but in terms of an evolving planet that is dying, it raises interesting questions. If you want to allow the maximum amount of time for life to have evolved, maybe you choose a delta site. On the other hand, you might say, "Mars is dying at that point," and you want to try to get samples that include a record from an earlier, more equable period.
Since the community is divided roughly evenly between these two types of sites, one of the important questions we will have to wrestle with until the next workshop (in early 2017) is, "Which of those kinds of sites is more promising?" We need to engage a bigger community to address this question.
What happened to the list generated from this workshop?
This workshop was almost exclusively about science. The mission's leadership and members of the Mars 2020 Landing Site Steering Committee, appointed by NASA, then took the information from the workshop, rolled it up with information that the project had generated on things like whether the sites could be landed on, and came up with a list of eight sites in alphabetic order:
NE Syrtis Major
SW Melas Chasma
What comes next?
Over the course of the coming year, the Mars 2020 engineering team will continue its study of the feasibility of the highly ranked landing sites. At the same time, the science team will dig deeply into what is known about each site, seeking to identify the sites that are best suited to meet the mission's science goals. I expect that advocates for specific sites will also continue doing their homework to make the strongest possible case for their preferred site. And in 2017, we'll do the workshop all over again!
Yuk Yung, the Smits Family Professor of Planetary Science, has received the 2015 Gerard P. Kuiper Prize from the American Astronomical Society's Division for Planetary Sciences. The prize, given for outstanding contributions to the field of planetary science, recognizes Yung's work on atmospheric photochemistry, global climate change, radiative transfer, atmospheric evolution, and planetary habitability.
"His unique integration of observations, laboratory data, and quantitative modeling has yielded pioneering insights into the characterization, origin, and evolution of atmospheres in the solar system," the award citation notes.
Yung joined the Caltech faculty in 1977. He is a fellow of the American Academy of Arts and Sciences and of the American Association for the Advancement of Science. A longtime collaborator with scientists at the Jet Propulsion Laboratory (JPL), Yung is a coinvestigator on the Ultraviolet Imaging Spectrometer Experiment on the Cassini mission to Saturn and on the Orbiting Carbon Observatory-2, a project to map CO2 concentrations on Earth.
Previous recipients of the Kuiper Prize include Professor of Planetary Science Andrew Ingersoll; Peter Goldreich, the Lee A. DuBridge Professor of Astrophysics and Planetary Physics, Emeritus; and Eugene M. Shoemaker, Caltech alumnus (BS '47, MS '48) and former chair of the Division of Geological and Planetary Sciences.
On Friday, August 7, 104 female high school seniors and their families visited Caltech for the fourth annual Women in STEM (WiSTEM) Preview Day, hosted by the undergraduate admissions office. The event was designed to explore the accomplishments and continued contributions of Caltech women in the disciplines of science, technology, engineering, and mathematics (STEM).
The day opened with a keynote address by Marianne Bronner, the Albert Billings Ruddock Professor of Biology and executive officer for neurobiology. Bronner, who studies the development of the central nervous system, spoke about her experiences in science and at Caltech.
"Caltech is an exciting place to be. It's a place where you can be creative and think outside the box," she said. "My advice to you would be to try different things, play around, and do what makes you happy." Bronner ended her address by noting the pleasure she takes in mentoring young scientists, and especially young women. "I was just like you," she said.
Over the course of the day, students and their families attended panels on undergraduate research opportunities and participated in social events where current students shared their experiences of Caltech life. They also listened to presentations from female scientists and engineers of the Jet Propulsion Laboratory.
"I really love science, and it's so exciting to be around all of these other people who share that," says Sydney Feldman, a senior from Maryland. "I switched around my whole summer visit schedule to come to this event and I'm having such a great time."
The annual event began four years ago with the goal of encouraging interest in STEM in high school women and ultimately increasing applications to Caltech by female candidates. In 2009, a U.S. Department of Commerce study showed that women make up 24 percent of the STEM workforce and hold a disproportionately low share of undergraduate degrees in STEM fields.
"Women are seriously underrepresented in these fields," says Caltech admissions counselor and WiSTEM coordinator Abeni Tinubu. "Our event really puts emphasis on how Caltech supports women on campus, and we want to show prospective students that."
This year, women make up a record 47 percent of the incoming freshman class. "This is hugely exciting," says Jarrid Whitney, the executive director of admissions and financial aid. "We've been working hard toward our goal of 50 percent women, and it is clearly paying off thanks to the support of President Rosenbaum and the overall Caltech community."
For more than 20 years, Caltech geologist Jean-Philippe Avouac has collaborated with the Department of Mines and Geology of Nepal to study the Himalayas—the most active above-water mountain range on Earth—to learn more about the processes that build mountains and trigger earthquakes. Over that period, he and his colleagues have installed a network of GPS stations in Nepal that allows them to monitor the way Earth's crust moves during and in between earthquakes. So when he heard on April 25 that a magnitude 7.8 earthquake had struck near Gorkha, Nepal, not far from Kathmandu, he thought he knew what to expect—utter devastation throughout Kathmandu and a death toll in the hundreds of thousands.
"At first when I saw the news trickling in from Kathmandu, I thought there was a problem of communication, that we weren't hearing the full extent of the damage," says Avouac, Caltech's Earle C. Anthony Professor of Geology. "As it turns out, there was little damage to the regular dwellings, and thankfully, as a result, there were far fewer deaths than I originally anticipated."
Using data from the GPS stations, an accelerometer that measures ground motion in Kathmandu, data from seismological stations around the world, and radar images collected by orbiting satellites, an international team of scientists led by Caltech has pieced together the first complete account of what physically happened during the Gorkha earthquake—a picture that explains how the large earthquake wound up leaving the majority of low-story buildings unscathed while devastating some treasured taller structures.
The findings are described in two papers that now appear online. The first, in the journal Nature Geoscience, is based on an analysis of seismological records collected more than 1,000 kilometers from the epicenter and places the event in the context of what scientists knew of the seismic setting near Gorkha before the earthquake. The second paper, appearing in Science Express, goes into finer detail about the rupture process during the April 25 earthquake and how it shook the ground in Kathmandu.
[Video: Build Up and Release of Strain on Himalaya Megathrust]
In the first study, the researchers show that the earthquake occurred on the Main Himalayan Thrust (MHT), the main megathrust fault along which northern India is pushing beneath Eurasia at a rate of about two centimeters per year, driving the Himalayas upward. Based on GPS measurements, scientists know that a large portion of this fault is "locked." Large earthquakes typically release stress on such locked faults—as the lower tectonic plate (here, the Indian plate) pulls the upper plate (here, the Eurasian plate) downward, strain builds in these locked sections until the upper plate breaks free, releasing strain and producing an earthquake. There are areas along the fault in western Nepal that are known to be locked and have not experienced a major earthquake since a big one (larger than magnitude 8.5) in 1505. But the Gorkha earthquake ruptured only a small fraction of the locked zone, so there is still the potential for the locked portion to produce a large earthquake.
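The numbers in this paragraph allow a back-of-envelope estimate of how much slip the still-locked western segment may have stored (a rough upper bound that assumes the fault has been fully locked since the 1505 earthquake):

```python
# Back-of-envelope slip deficit on the locked western segment,
# assuming full locking since the 1505 earthquake (an upper bound).
convergence_m_per_yr = 0.02      # ~2 cm/yr India pushing beneath Eurasia
years_since_1505 = 2015 - 1505   # as of the Gorkha earthquake

deficit_m = convergence_m_per_yr * years_since_1505
print(round(deficit_m, 1))  # ~10.2 m of potential slip stored
```

Roughly ten meters of accumulated slip deficit is comparable to the slip released in great (magnitude 8+) megathrust earthquakes, which is why the unruptured section remains a serious concern.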
"The Gorkha earthquake didn't do the job of transferring deformation all the way to the front of the Himalaya," says Avouac. "So the Himalaya could certainly generate larger earthquakes in the future, but we have no idea when."
The April 25 event began at an epicenter in the Gorkha District of Nepal, 75 kilometers west-northwest of Kathmandu. The rupture propagated eastward at a rate of about 2.8 kilometers per second, causing slip in the north-south direction—a progression that the researchers describe as "unzipping" a section of the locked fault.
"With the geological context in Nepal, this is a place where we expect big earthquakes. We also knew, based on GPS measurements of the way the plates have moved over the last two decades, how 'stuck' this particular fault was, so this earthquake was not a surprise," says Jean Paul Ampuero, assistant professor of seismology at Caltech and coauthor on the Nature Geoscience paper. "But with every earthquake there are always surprises."
[Video: Propagation of April 2015 Mw 7.8 Gorkha Earthquake]
In this case, one of the surprises was that the quake did not rupture all the way to the surface. Records of past earthquakes on the same fault—including a powerful one (possibly as strong as magnitude 8.4) that shook Kathmandu in 1934—indicate that ruptures have previously reached the surface. But Avouac, Ampuero, and their colleagues used satellite Synthetic Aperture Radar data and a technique called back projection that takes advantage of the dense arrays of seismic stations in the United States, Europe, and Australia to track the progression of the earthquake, and found that it was quite contained at depth. The high-frequency waves that were largely produced in the lower section of the rupture occurred at a depth of about 15 kilometers.
"That was good news for Kathmandu," says Ampuero. "If the earthquake had broken all the way to the surface, it could have been much, much worse."
The researchers note, however, that the Gorkha earthquake did increase the stress on the adjacent portion of the fault that remains locked, closer to Kathmandu. It is unclear whether this additional stress will eventually trigger another earthquake or if that portion of the fault will "creep," a process that allows the two plates to move slowly past one another, dissipating stress. The researchers are building computer models and monitoring post-earthquake deformation of the crust to try to determine which scenario is more likely.
Another surprise from the earthquake, one that explains why many of the homes and other buildings in Kathmandu were spared, is described in the Science Express paper. Avouac and his colleagues found that for such a large-magnitude earthquake, high-frequency shaking in Kathmandu was actually relatively mild. And it is high-frequency waves, with short periods of vibration of less than one second, that tend to affect low-story buildings. The Nature Geoscience paper showed that the high-frequency waves that the quake produced came from the deeper edge of the rupture, on the northern end away from Kathmandu.
The GPS records described in the ScienceExpress paper show that within the zone that experienced the greatest amount of slip during the earthquake—a region south of the sources of high-frequency waves and closer to Kathmandu—the onset of slip on the fault was actually very smooth. It took nearly two seconds for the slip rate to reach its maximum value of one meter per second. In general, the more abrupt the onset of slip during an earthquake, the more energetic the radiated high-frequency seismic waves. So the relatively gradual onset of slip in the Gorkha event explains why this patch, which experienced a large amount of slip, did not generate many high-frequency waves.
"It would be good news if the smooth onset of slip, and hence the limited induced shaking, were a systematic property of the Himalayan megathrust fault, or of megathrust faults in general." says Avouac. "Based on observations from this and other megathrust earthquakes, this is a possibility."
In contrast to what they saw with high-frequency waves, the researchers found that the earthquake produced an unexpectedly large amount of low-frequency waves with longer periods of about five seconds. This longer-period shaking was responsible for the collapse of taller structures in Kathmandu, such as the Dharahara Tower, a 60-meter-high tower that survived larger earthquakes in 1833 and 1934 but collapsed completely during the Gorkha quake.
To understand this, consider plucking the strings of a guitar. Each string resonates at a certain natural frequency, or pitch, depending on the length, composition, and tension of the string. Likewise, buildings and other structures have a natural pitch or frequency of shaking at which they resonate; in general, the taller the building, the longer the period at which it resonates. If a strong earthquake causes the ground to shake with a frequency that matches a building's pitch, the shaking will be amplified within the building, and the structure will likely collapse.
Turning to the GPS records from two of Avouac's stations in the Kathmandu Valley, the researchers found that the effect of the low-frequency waves was amplified by the geological context of the Kathmandu basin. The basin is an ancient lakebed that is now filled with relatively soft sediment. For about 40 seconds after the earthquake, seismic waves from the quake were trapped within the basin and continued to reverberate, ringing like a bell with a frequency of five seconds.
"That's just the right frequency to damage tall buildings like the Dharahara Tower because it's close to their natural period," Avouac explains.
In follow-up work, Domniki Asimaki, professor of mechanical and civil engineering at Caltech, is examining the details of the shaking experienced throughout the basin. On a recent trip to Kathmandu, she documented very little damage to low-story buildings throughout much of the city but identified a pattern of intense shaking experienced at the edges of the basin, on hilltops or in the foothills where sediment meets the mountains. This was largely due to the resonance of seismic waves within the basin.
Asimaki notes that Los Angeles is also built atop sedimentary deposits and is surrounded by hills and mountain ranges that would also be prone to this type of increased shaking intensity during a major earthquake.
"In fact," she says, "the buildings in downtown Los Angeles are much taller than those in Kathmandu and therefore resonate with a much lower frequency. So if the same shaking had happened in L.A., a lot of the really tall buildings would have been challenged."
That points to one of the reasons it is important to understand how the land responded to the Gorkha earthquake, Avouac says. "Such studies of the site effects in Nepal provide an important opportunity to validate the codes and methods we use to predict the kind of shaking and damage that would be expected as a result of earthquakes elsewhere, such as in the Los Angeles Basin."
The Nepal Geodetic Array was funded by Caltech, the Gordon and Betty Moore Foundation, and the National Science Foundation. Additional funding for the Science study came from the Department of Foreign International Development (UK), the Royal Society (UK), the United Nations Development Programme, and the Nepal Academy for Science and Technology, as well as NASA and the Department of Foreign International Development.
A few seconds may not seem like long, but it is enough time to turn off a stove, open an elevator door, or take cover under a desk. And before an earthquake strikes, a few seconds of warning can save lives. The U.S. Geological Survey aims to provide those seconds of warning with ShakeAlert, an earthquake early-warning system now being tested on the west coast of the United States. On July 30, the USGS announced approximately $4 million in awards to Caltech, UC Berkeley, the University of Washington and the University of Oregon, for the expansion and improvement of the ShakeAlert system.
"Caltech's role in ShakeAlert will focus on research and development of the system so that future versions will be faster and more reliable," says Thomas Heaton (PhD '78), professor of engineering seismology and director of Caltech's Earthquake Engineering Research Laboratory. "We currently collect data from approximately 400 seismic stations throughout California. The USGS grant will allow Caltech to upgrade or install new stations in strategic locations that will significantly improve the performance of ShakeAlert."
Earthquakes radiate two kinds of seismic waves: fast-moving and often harmless P-waves, followed by S-waves, which can cause strong ground shaking. A system of seismometers called the California Integrated Seismic Network (CISN) acquires data streams literally at the speed of light and uses several algorithms to quickly pinpoint the earthquake's epicenter and determine its strength. ShakeAlert analyzes the first P-waves in the CISN data streams to send out digital alerts, providing the "early warning" to a region before the slower, destructive S-waves arrive.
While predicting when and where an earthquake will occur is impossible, this early-warning system can give necessary seconds of preparation. Current beta-test users receive these alerts as a pop-up on their computers, displaying a map of the affected region, the amount of time until shaking begins, the estimated magnitude of the quake, and other data. In the future, alerts may be available through text messages and phone apps.
Though still technically in testing stages, ShakeAlert has already provided successful warnings. In August 2014, the system provided a nine-second warning to the city of San Francisco during a magnitude 6.0 earthquake in South Napa. In May, during a magnitude 3.8 quake in Los Angeles, an alert was issued before S-waves had even reached the earth's surface.
"With this new USGS funding, we will be able to add 20 new sensors to CISN, making coverage more robust and thus lengthening warning times," says Egill Hauksson, a research professor of geophysics and a principal investigator along with Heaton on the ShakeAlert project. "Caltech and its partners will be able to continue the high-quality seismological research that is such a necessary foundation for a reliable earthquake early-warning system."
In 2011, Caltech, along with UC Berkeley and the University of Washington, Seattle, received $6 million from the Gordon and Betty Moore Foundation for the research and development of ShakeAlert.