Airborne Over Iceland: Charting Glacier Dynamics

Mark Simons, professor of geophysics at Caltech, along with graduate student Brent Minchew, recently logged over 40 hours of flight time mapping the surface of Iceland's glaciers. Flying over two comparatively small ice caps, Hofsjökull and Langjökull, they traveled with NASA pilots and engineers in a retrofitted Gulfstream III business jet, crisscrossing the glaciers numerous times. Using a radar instrument designed at the Jet Propulsion Laboratory (JPL) and mounted on the underbelly of the plane, they imaged the surface of the glaciers, obtaining precise data on the velocity at which these rivers of ice flow downstream.

Following a set of test flights in Iceland in 2009, Simons and Minchew went to Iceland in June 2012 to systematically image the two ice caps at the beginning of the summer melt season. They have just returned from a February 2014 expedition aimed at setting a baseline for glacier velocity—during the winter freeze, meltwater should not play as significant a role in glacier dynamics. They sat down recently to discuss the science and the adventure of monitoring Iceland's glaciers.

Why go to Iceland to study glaciers?

Mark Simons: Iceland is an ideal natural laboratory. The glaciers there are small enough that you can do detailed measurements of them, and afterward you can process the data and analyze each ice cap in its entirety without needing overwhelming computer resources. This manageable scale lets us explore a wide range of models. Glaciers in Greenland or Antarctica are far too big for that. Logistics are also a lot easier in Iceland. We can drive up to the glaciers in just a few hours from downtown Reykjavik.

Most importantly, the Icelanders have a long history of studying these ice caps. In particular, they have nearly complete maps of the ice-bedrock interface. We can complement this information with continuous maps of the daily movement or strain of the glacier surface as well as maps of the topography of the glacier surface. These data are then combined to constrain models of glacier dynamics.

How can you map bedrock that is under hundreds of feet of ice?

Brent Minchew: Our collaborators at the University of Iceland have been doing this work for decades. Helgi Björnsson and Finnur Pálsson mapped the subglacial bedrock by dragging long radar antennas behind snowmobiles driven over the glaciers. They use long-wavelength radar that penetrates through the ice to the underlying bedrock. By looking at the reflection of the radar signals, they can estimate where the interface is between ice and bedrock. They are experts at studying the cryosphere—the earth's frozen regions, including ice caps, glaciers, and sea ice—as you might expect given their location so far north of the equator.

Is this similar to the radar you use in your airplane flights over Iceland's glaciers?

Simons: It's a similar principle. Radar is an active imaging system, so unlike optical observations, where you're just looking at the reflected light from the sun, we're actually illuminating the surface like a flashlight, but using radar instead.

Was this radar technology developed specifically for imaging glaciers?

Minchew: No. The technique we use, InSAR [Interferometric Synthetic Aperture Radar], has been available since the mid-1990s. It has revolutionized a number of disciplines in the earth sciences, including glaciology. The system we are using in Iceland is truly state-of-the-art. It enables complete control over where and when we collect data, and it returns images with millions of independent pixels. It's a very rich data source.

Simons: Actually, the exact same airplane we use in Iceland to study glaciers is also used to measure motion above restless volcanoes due to changes in magma pressure or along major seismically active faults such as the San Andreas fault. Repeated radar imaging can show us the parts of the fault that are stuck—those are the places that will generate earthquakes every so often—and the other parts that are steadily creeping year after year. Basically, we're bringing our experience from earthquake physics, both in terms of observation and modeling, to see if it can help us address important problems in glaciology.

Are there other methods besides radar for studying glacier dynamics?

Minchew: We can drill to the bed and take direct measurements, but a lot of effort is involved in this. Compared to Greenland, where the ice is close to a mile thick, or Antarctica, where it is even thicker, Iceland's glaciers are relatively thin. But they're still on average 300 meters thick. That's a long way to drill down for one data point.

Simons: Traditionally people measured velocities of glaciers by putting stakes in the glacier, and then returning to see how far downstream those stakes had moved by the end of the melting season. This approach can give an average velocity over the season. We still utilize this principle by installing GPS units at various spots on the glacier. These GPS units also help us calibrate our radar-based measurements and confirm that our velocity estimates are accurate.

What advantages does radar have over these other methods?

Simons: One of the wonderful things about radar imaging, unlike optical imaging, is that we can "see" the glacier whether it's day or night, whether it's cloudy or clear.

Minchew: Right. Another major advantage of radar technology is that we don't just see the average velocity for the season; we can detect short-term dynamics and variability over the entire glacier if the imaging is done sufficiently often.

How exactly does radar work to image the ice cap?

Simons: Radar images are usually taken at oblique angles to the surface of the earth, not straight down in a perpendicular line. Given two radar images taken from nearly identical positions but at different times, we can combine them in such a way as to measure changes in ground position that occurred in the intervening period along the oblique direction of the transmitted energy. We quantify these displacements in terms of fractions of a radar wavelength. This process is called repeat-pass interferometry. We design the plane's flight path to make several interferometric measurements from different viewing angles, so that the surface of the glacier is imaged at least three times and often as many as six. We then combine these different perspectives to create accurate 3-D maps of the surface velocity of the glaciers, resolving the east, north, and up components of the motion.
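To make the geometry concrete, here is a minimal numerical sketch of the two steps Simons describes: converting an interferometric phase change into a line-of-sight displacement, and combining several viewing directions into a 3-D velocity. It is not the JPL/UAVSAR processing chain; the wavelength, look vectors, and phase values are invented for illustration.

```python
# Minimal sketch, not the JPL/UAVSAR processing chain: convert a repeat-pass
# interferometric phase change to line-of-sight (LOS) displacement, then solve
# for a 3-D velocity from several viewing geometries. The wavelength, look
# vectors, and phase values below are invented for illustration.
import numpy as np

WAVELENGTH_M = 0.24  # assumed radar wavelength (roughly L-band), in meters


def los_displacement(delta_phase_rad):
    """Phase change between two passes (radians) -> LOS displacement (meters).

    For a two-way radar path, displacement = -wavelength * phase / (4 * pi);
    the sign convention varies between processors.
    """
    return -WAVELENGTH_M * delta_phase_rad / (4.0 * np.pi)


def solve_3d_velocity(look_vectors, los_rates):
    """Least-squares east/north/up velocity from three or more LOS rates.

    look_vectors: (n, 3) unit vectors pointing from the ground to the radar.
    los_rates:    (n,) LOS velocities, e.g. meters per day.
    """
    A = np.asarray(look_vectors, dtype=float)
    b = np.asarray(los_rates, dtype=float)
    velocity, *_ = np.linalg.lstsq(A, b, rcond=None)
    return velocity  # [v_east, v_north, v_up]


# Three hypothetical flight lines with different look directions:
looks = np.array([[0.55, 0.20, 0.81],
                  [-0.50, 0.30, 0.81],
                  [0.10, -0.58, 0.81]])
los = np.array([los_displacement(p) for p in (2.1, -1.4, 0.6)])  # per day
print(solve_3d_velocity(looks, los))
```

With three or more well-separated look directions, the least-squares step is what allows the east, north, and up components to be separated.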

How can you be so precise in your measurements from that high up in the air?

Simons: The altitude itself isn't a problem. The trick is making certain the plane is at the same absolute position over consecutive flights. We owe this precision to engineers at NASA/JPL; it has nothing to do with us down here at Caltech. They have developed the technology to fly this plane at 40,000 feet, at 450 miles per hour, and then to come back an hour later, a day later, or a year later, and fly that exact same path in coordinates relative to the ground. Essentially they are flying in a "virtual tube" in the air that's less than 10 meters in diameter. That's how accurate it is.

Minchew: Of course even within this virtual tube, the plane moves around; that's what aircraft do. But aircraft motion has a characteristic appearance in the data, and it's possible for us to remove this effect. It never ceases to amaze me that we can get centimeter-scale, even millimeter-scale accuracy from an airplane. But we can do it, and it works beautifully.

What was the motivation for JPL and NASA to develop this radar technology in the first place?

Simons: Part of what NASA has been doing with airborne radar technology is prototyping what they want to do with radar from satellites, and to understand the characteristics of this kind of measurement for different scientific targets. The instrument is called UAVSAR, for Uninhabited Aerial Vehicle Synthetic Aperture Radar. Right now it's clearly not uninhabited because the radar is on a plane with pilots and engineers on board. But the idea is that eventually we could do these radar measurements from a drone that would stay aloft making observations for a day or a day and a half at a stretch. We can also use satellites to make the same type of measurements.

Minchew: In some ways, satellites are an easier platform for radar measurements. In space, there aren't a whole lot of dramatic perturbations to their motions; they fly a very steady path. But one advantage of an airborne platform is that we can collect data more frequently. We can sample the glacier surface every 24 hours if we wish. Satellites typically sample on the order of once a week to every several weeks.

What do you hope to learn from observing glacier dynamics in Iceland?

Simons: We want to use measurements of the ice cap to explore what is happening at the bottom of the glacier. We already know from the previous campaign in 2012 that over half of the movement measured in the early summer is associated with sliding at the bed rather than deformation of the ice. In the early part of the melt season, water gets down to the bottom of the glacier and doesn't have anywhere to go, so it increases the pressure at the bottom. It ends up reducing the friction so the glacier can flow faster over the bedrock. At some point there's so much water flow that it starts to make tunnels in the ice, and then the glacier drains more efficiently. But then the tunnels will collapse on themselves, and the whole glacier settles back down, compacting on itself. The glacier actually slides faster in the early part of the melt season than later in the melt season.

Minchew: The thing that propels glaciers is simply gravity. Ice is a viscous fluid, like honey. Very cold honey. Once it warms up and begins to melt slightly, the dynamics change tremendously. That's something we can observe in Iceland—unlike in Antarctica—where temperatures regularly go above the freezing point in summer. In Iceland, we think almost all the meltwater at the bed comes from surface melting. Geothermal heating from the earth and frictional heating from the sliding itself can also contribute to melting in Iceland's glaciers. These are the main sources of melting in Antarctica. But geothermal and frictional heating don't have anything to do with climate change, nor should they vary with the seasons in the way that meltwater does.

Is climate change the major reason why you're studying glaciers?

Minchew: No, I just like cold and inhospitable places. Seriously, I was drawn to the field work aspect of geophysics, the opportunity to go to places in the world that are for the most part the way nature intends them to be. I'm also drawn to glaciers because they are fascinating and surprisingly complex physical systems. A number of fundamental problems in glaciology remain unsolved, so there is tremendous potential for discovery in this field. But helping to understand the potential effects of climate change is an obvious application of our work. People are much more interested in glaciers now as a result of climate change. One of the glaciologists at the University of Iceland likes to say, "We've turned a very cold subject into a hot one."

Simons: Iceland is actually a very good place to learn about how glaciers will react to climate change. We can watch these glaciers on a seasonal basis and see how they respond to temperature variation rather than trying to compare the behavior of those glaciers in Antarctica that have yet to experience surface melting to what we think their behavior might be 50 years from now. But for me, glaciology has always been interesting in itself. My job is to study the mechanics of the earth and how it deforms. And the cryosphere is just as much a part of that as the crust.

 

Simons's initial exploratory campaign on Iceland's glaciers was partially supported by the Terrestrial Hazard Observation and Reporting (THOR) Center at Caltech, funded by an endowed gift from Foster and Coco Stanback. Current efforts are supported by NASA.

Writer: Cynthia Eller

Is Natural Gas a Solution to Mitigating Climate Change?

Methane, a key greenhouse gas, has more than doubled in concentration in Earth's atmosphere since 1750. Its increase is believed to be a leading contributor to climate change. But where is the methane coming from? Research by atmospheric chemist Paul Wennberg of the California Institute of Technology (Caltech) suggests that losses of natural gas—our "cleanest" fossil fuel—into the atmosphere may be a larger source than previously recognized.

Radiation from the sun warms Earth's surface, which then radiates heat back into the atmosphere. Greenhouse gases trap some of this heat. It is this process that makes life on Earth possible for beings such as ourselves, who could not tolerate the lower temperatures Earth would have if not for its "blanket" of greenhouse gases. However, as Goldilocks would tell you, there is "too hot" as well as "too cold," and the precipitous increase in greenhouse gases since the beginning of the Industrial Revolution is driving climate change, altering weather patterns, and raising sea level. Carbon dioxide is the most prevalent greenhouse gas in Earth's atmosphere, but there are others as well, among them methane.

Those who are concerned about greenhouse gases have a very special enemy to fear in atmospheric methane. Methane has a trifecta of effects on the atmosphere. First, like other greenhouse gases, methane works directly to trap Earth's radiation in the atmosphere. Second, when methane oxidizes in Earth's atmosphere, it is broken into components that are also greenhouse gases: carbon dioxide and ozone. Third, the breakdown of methane in the atmosphere produces water vapor, which also functions as a greenhouse gas. Increased humidity, especially in the otherwise arid stratosphere where approximately 10 percent of methane is oxidized, further increases greenhouse-gas induced climate change.

Fully one-third of the increase in radiative forcing (the net change in Earth's energy balance that drives warming) since 1750 is estimated to be due to the presence and effects of methane. Because of the many potential sources of atmospheric methane, from landfills to wetlands to petroleum processing, it can be difficult to quantify which sources are making the greatest contribution. But according to Paul Wennberg, Caltech's R. Stanton Avery Professor of Atmospheric Chemistry and Environmental Science and Engineering, and his colleagues, it is possible that a significant source of methane, at least in the Los Angeles basin, is fugitive emissions—leaks—from the natural-gas supply line.

"This was a surprise," Wennberg explains of the results of his research on methane in the Los Angeles atmosphere. In an initial study conducted in 2008, Wennberg's team analyzed measurements from the troposphere, the lowest portion of Earth's atmosphere, via an airplane flying less than a mile above the ground over the Los Angeles basin.

In analyzing chemical signatures of the preliminary samples, Wennberg's team made an intriguing discovery: the signatures bore a striking similarity to the chemical profile of natural gas. Methane from fossil-fuel sources is normally accompanied by ethane—the second most common component of natural gas—while methane from biogenic sources (such as livestock and wastewater) is not. Indeed, the researchers found that the ratio of methane to ethane in the L.A. air samples was characteristic of the samples of natural gas provided by the Southern California Gas Company, the leading supplier of natural gas to the region.
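As a rough illustration of how that ratio comparison translates into source attribution (with invented numbers, not the study's measurements), the sketch below compares an observed ethane-to-methane enhancement ratio in air with the ratio in pipeline gas, assuming biogenic sources contribute essentially no ethane.

```python
# Hedged sketch with invented numbers (not the study's data). If biogenic
# sources emit essentially no ethane, the observed ethane/methane enhancement
# ratio, compared with the ratio in pipeline gas, estimates the fraction of
# the methane excess that comes from natural gas or other fossil sources.
def fossil_fraction(observed_ratio, pipeline_ratio):
    """observed_ratio: delta-ethane / delta-methane in the air samples.
    pipeline_ratio: ethane / methane in the delivered natural gas."""
    return observed_ratio / pipeline_ratio


# Illustrative values only:
print(fossil_fraction(observed_ratio=0.04, pipeline_ratio=0.05))  # -> 0.8
```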

Wennberg hesitates to pinpoint natural-gas leaks as the sole source of the L.A. methane, however. "Even though it looks like the methane/ethane could come from fugitive natural-gas emissions, it's certainly not all coming from this source," he says. "We're still drilling for oil in L.A., and that yields natural gas that includes ethane too."

The Southern California Gas Company reports very low losses in the delivery of natural gas (approximately 0.1 percent), and yet the atmospheric data suggest that emissions from the natural-gas infrastructure and petroleum production together amount to closer to 2 percent of the total gas delivered to the basin. One possible way to reconcile these vastly different estimates is that significant losses of natural gas may occur after consumer metering in the homes, offices, and industrial plants that purchase natural gas. This loss of fuel is small enough to have no immediate negative impact on household users, but cumulatively it could be a major contributor to the methane in the atmosphere.

The findings of Wennberg and his colleagues have led to a more comprehensive study of greenhouse gases in urban settings, the Megacities Carbon Project, based at JPL. The goal of the project, which is focusing initially on ground-based measurements in Los Angeles and Paris, is to quantify greenhouse gases in the megacities of the world. Such cities—places like Hong Kong, Berlin, Jakarta, Johannesburg, Seoul, São Paulo, and Tokyo—are responsible for up to 75 percent of global carbon emissions, despite representing only 3 percent of the world's landmass. Documenting the types and sources of greenhouse gases in megacities will provide valuable baseline measurements that can be used in efforts to reduce greenhouse gas emissions.

If the findings of the Megacities Carbon Project are consistent with Wennberg's study of methane in Los Angeles, natural gas may be less of a panacea in the search for a "green" fuel. Natural gas has a cleaner emissions profile and a higher efficiency than coal (that is, it produces more power per molecule of carbon dioxide), but, as far as climate change goes, methods of extraction and distribution are key. "You have to dig it up, put it in the pipe, and burn it without losing more than a few percent," Wennberg says. "Otherwise, it's not nearly as helpful as you would think."

Wennberg's research was published in an article titled "On the Sources of Methane to the Los Angeles Atmosphere" in Environmental Science & Technology. Data for this study were provided by the Southern California Gas Company, NASA, NOAA, and the California Air Resources Board. The research was funded by NASA, the California Energy Commission's Public Interest Environmental Research program, the California Air Resources Board, and the U.S. Department of Energy.

Writer: Cynthia Eller

Nanoscale Materials and Big Solar Energy: An Interview with Harry Atwater

As a high school student during the oil crisis of the 1970s, Harry Atwater saw firsthand the impact of energy supply issues. Inspired to contribute to renewable energy, he now works at Caltech to develop better thin-film photovoltaics—cheaper, lighter, more efficient alternatives to the bulky cells now used in solar panels.

In addition to his individual research interests in photovoltaic cell development, Atwater is also part of a collaborative effort to advance solar energy research at the Joint Center for Artificial Photosynthesis (JCAP)—a U.S. Department of Energy (DOE) Energy Innovation Hub. JCAP, which is led by a team of researchers from Caltech and partner Lawrence Berkeley National Laboratory, aims to develop cost-effective fuel production methods that require only sunlight, water, and carbon dioxide.

Atwater, who serves as the project leader for the Membrane and Mesoscale Assembly Project at JCAP, recently chatted with us about his research, his background, and why he came to Caltech.

What originally drew you to Caltech?

It was the opportunity to pursue my area of research. I felt that Caltech was the best research environment I could [be in] for mixing fundamental science and engineering technology. Caltech is very developed in its orientation toward engineering and technology, and its connection to technology in many areas like aerospace, photonics, communications, semiconductors and chemistry. It is a great combination—an institutional focus on fundamentals but also a focus on applying those fundamentals to engineer new technologies.

What are your research interests?

My research is at the intersection of solar energy and nanophotonic materials. Nanophotonic materials are materials and structures in which the characteristic length scale of the material is less than the wavelength of light—meaning that they're so small that they must be visualized with something that has a wavelength much smaller than that of visible light. Half of my research group is focused on the fundamentals of nanophotonic materials. These materials could form the building blocks of a chip-based optical device technology for improved imaging in computing and communications, and for the detection of chemical and biological molecules.

The other half of my group is focused on improving solar energy. We are investigating several approaches to creating very low-cost and ultrahigh-efficiency thin-film photovoltaics, which are an alternative to, and the future of, today's solar cell panels. In our design, we use thin layers of semiconductors for absorbing sunlight. The Joint Center for Artificial Photosynthesis (JCAP) fundamentally focuses on using semiconductor photonic materials and devices to create fuel from solar energy, so it's a really good match for our work.

How do these semiconductors you're working with make thin-film photovoltaics cheaper, thinner, and more efficient?

Most materials cost nearly the same amount when you just think about them on a price-per-atom basis. What makes materials expensive or cheap is the cost of the synthesis and processing methods used to make them with sufficient purity and perfection to enable high performance. Much of what we do is aimed at either designing new syntheses that can yield high-performance materials in a scalable low-cost fashion or designing new structures and devices whose performance is robust against use of impure or defective materials.

How did you first get interested in your field?

I would say that my interest in solar energy dates back to the first big energy crisis in the 1970s, when I was a high school student. I grew up in Pennsylvania, and I remember my school was shut down for a few weeks in the wintertime because there literally was no oil to heat the burner. I thought then that addressing supplies of energy was an important problem. It made a big impression on me. But at that point, I hadn't really thought about how I could contribute to a solution.

But then in graduate school, I got interested in things at the intersection of physics and electrical engineering, which is really where my work lies. As a graduate student at MIT, I began to focus on developing new technologies for thin-film solar cells. At MIT, I worked in one of the first nanostructure fabrication labs in the country, where it became apparent to me that we could make nanostructures and characterize their properties.

You were among the first scientists to study these nanostructures. What was that like?

Nowadays "nano" is sort of pervasive in the ether—nanomaterials are not unusual. At that time, it was as invisible to the general public as the Internet. It became obvious to me that there was a lot of opportunity to use nanofabrication principles and techniques to make new optical materials. Later, around 2001, we ended up playing a pretty significant role in starting another new field called plasmonics, which studies the behavior of the excitations created by light in metals. This new field led to the first serious and widespread efforts to make these kinds of optical devices and optical materials out of metals.

Do you have any hobbies or interests outside of your research?

I'm an avid soccer player, and I play weekly with the graduate students. Until my kids got to an age when I started embarrassing them, I was coaching them every week. That's what I like to do for fun.

Atwater joined the Caltech faculty as an assistant professor of applied physics in 1988, becoming an associate professor in 1994 and a professor in 1999. Now the Howard Hughes Professor of Applied Physics and Materials Science, Atwater has many roles on campus and beyond. These include serving as the director of the Resnick Sustainability Institute, the director of the Department of Energy's "Light-Material Interactions in Energy Conversion" Energy Frontier Research Center (LMI-EFRC), and most recently as the editor-in-chief of a new research journal, ACS Photonics.


Bacterial "Syringe" Necessary for Marine Animal Development

If you've ever slipped on a slimy wet rock at the beach, you have bacteria to thank. Those bacteria, nestled in a supportive extracellular matrix, form bacterial biofilms—often slimy substances that cling to wet surfaces. For some marine organisms—like corals, sea urchins, and tubeworms—these biofilms serve a vital purpose, flagging suitable homes for such organisms and actually aiding the transformation of larvae to adults.

A new study at the California Institute of Technology (Caltech) is the first to describe a mechanism for this phenomenon, providing one explanation for the relationship between bacterial biofilms and the metamorphosis of marine invertebrates. The results were published online in the January 9 issue of Science Express.

The study focused on a marine invertebrate that has become a nuisance to the shipping industry since its arrival in U.S. waters during the last half century: the tubeworm Hydroides elegans. The larvae of the invasive pest swim free in the ocean until they come into contact with a biofilm-covered surface, such as a rock or a buoy—or the hull of a ship. After the tubeworm larvae come in contact with the biofilm, they develop into adult worms that anchor to the surface, creating hard, mineralized "tubes" around their bodies. These tubes, which often cover the bottoms of ships, create extra drag in the water, dramatically increasing the ship's fuel consumption.

The tubeworms' unwanted and destructive presence on ships, called biofouling, is a "really bad problem," says Dianne Newman, a professor of biology and geobiology and Howard Hughes Medical Institute (HHMI) investigator at Caltech. "For example, biofouling costs the U.S. Navy millions of dollars every year in excess fuel costs," says Newman, who is also a coauthor of the study. And although researchers have known for decades that biofilms are necessary for tubeworm development, says Nicholas Shikuma, one of the two first authors on the study and a postdoctoral scholar in Newman's laboratory, "there was no mechanistic explanation for how bacteria can actually induce that process to happen. We wanted to provide that explanation."

Shikuma began by investigating Pseudoalteromonas luteoviolacea, a bacterial species known to induce metamorphosis in the tubeworm and other marine invertebrates. In earlier work, Michael G. Hadfield of the University of Hawai'i at Mānoa, a coauthor of the Science Express paper, had identified a group of P. luteoviolacea genes that were necessary for tubeworm metamorphosis. Near those genes, Shikuma found a set of genes that produced a structure similar to the tail of bacteriophage viruses.

The tails of these phages contain three main components: a projectile tube, a contractile sheath that deploys the tube, and an anchoring baseplate. A phage uses these tail components together as a syringe, injecting its genetic material into a host bacterial cell, infecting—and ultimately killing—it. To determine whether the phage tail-like structures in P. luteoviolacea played a role in tubeworm metamorphosis, the researchers systematically deleted the genes encoding each of these three components.

Electron microscope images of the bacteria confirmed that syringe-like structures were present in "normal" P. luteoviolacea cells but were absent in cells in which the genes encoding the three structural components had been deleted; these genes are known as metamorphosis-associated contractile structure (mac) genes. The researchers also discovered that the bacterial cells lacking mac genes were unable to induce metamorphosis in tubeworm larvae. Previously, the syringe-like structures had been found in other species of bacteria, but in these species, the tails were deployed to kill other bacteria or insects. The new study provides the first evidence of such structures benefitting another organism, Shikuma says.

In order to view the three-dimensional arrangement of these unique structures within intact bacteria, the researchers collaborated with the laboratory of Grant Jensen, professor of biology and HHMI investigator at Caltech. Utilizing a technique called electron cryotomography, the researchers flash-froze the bacterial cells at very low temperatures. This allowed them to view the cells and their internal structures in their natural, "near-native" states.

Using this visualization technique, Martin Pilhofer, a postdoctoral scholar in Jensen's lab and the paper's other first author, discovered something unique about the phage tail-like structures within P. luteoviolacea; instead of existing as individual appendages, the structures were linked together to create a spiny array. "In these arrays, about 100 tails are stuck together in a hexagonal lattice to form a complex with a porcupine-like appearance," Shikuma says. "They're all facing outward, poised to fire," he adds. "We believe this is the first observation of arrays of phage tail-like structures."

Initially, the array is compacted within each bacterium; however, the cells eventually burst—killing the microbes—and the array unfolds. The researchers hypothesize that, at this point, the individual spines of the array fire outward into the tubeworm larva. Following this assault, the larvae begin their developmental transition to adulthood.

"It was a tremendous surprise that the agent that drives metamorphosis is such an elaborate, well-organized injection machine," says coauthor Jensen. "Who would have guessed that the signal is delivered by an apparatus that is almost as large as the bacterial cell itself? It is simply a marvelous structure, synthesized in a 'loaded' but tightly collapsed state within the cell, which then expands like an umbrella, opening up into a much larger web of syringes that are ready to inject," he says.

Although the study confirms that the phage tail-like structures can cause tubeworm metamorphosis, the nature of the interaction between the tail and the tubeworm is still unknown, Shikuma says. "Our next step is to determine whether metamorphosis is caused by an injection into the tubeworm larva tissue and, if so, whether the mechanical action is the trigger or the bacterium is injecting a chemical morphogen," he says. He and his colleagues would also like to determine whether mac genes and the tail-like structures they encode might influence other marine invertebrates, such as corals and sea urchins, that also rely on P. luteoviolacea biofilms for metamorphosis.

Understanding this process might one day help reduce the financial losses from P. luteoviolacea biofilm fouling on ship hulls, for example. While applications are a long way off, Newman says, it is also interesting to speculate on the possibility of leveraging metamorphosis induction in beneficial marine invertebrates to improve yields in aquaculture and promote coral reef growth.

The study, the researchers emphasize, is an example of the collaborative research that is nurtured at Caltech. For his part, Shikuma was inspired to utilize electron cryotomography after hearing a talk by Martin Pilhofer at the Center for Environmental Microbiology Interactions (CEMI) at Caltech. "Martin gave a presentation on another type of phage tail-like structures in the monthly CEMI seminar. I saw his talk and I thought that the mac genes I was working with might somehow be related," Shikuma says. Their subsequent collaboration, Newman says, made the current study possible.

The paper is titled "Marine tubeworm metamorphosis induced by arrays of bacterial phage tail-like structures." Gregor L. Weiss, a Summer Undergraduate Research Fellowship student in Jensen's laboratory at Caltech, was an additional coauthor on the study. The published work was funded by a Caltech Division of Biology Postdoctoral Fellowship (to N. Shikuma), the Caltech CEMI, the Howard Hughes Medical Institute, the Office of Naval Research, the National Institutes of Health, and the Gordon and Betty Moore Foundation.


Caltech Students Arrive at Solar Decathlon 2013

DALE is nearly ready to face the judges. The Dynamic Augmented Living Environment, Caltech's collaboration with the Southern California Institute of Architecture (SCI-Arc), is now on-site at the Department of Energy's 2013 Solar Decathlon competition site in Irvine, California.

The SCI-Arc/Caltech team has been planning DALE, its unique and completely solar-powered home, since its competition proposal was accepted in January 2012, along with proposals from 19 other American and international teams. Nearly 40 Caltech students participated in the design process, most of which took place in an engineering project course, Introduction to Multidisciplinary Systems Engineering, offered during the 2012-2013 academic year. Of the students in this course, taught by Melany Hunt, Dotty and Dick Hayman Professor of Mechanical Engineering and a vice provost, seven stayed on, spending their summer actually building the sustainable house.

Once the majority of construction was complete in late September, the SCI-Arc/Caltech team had to pack up DALE and physically move the entire house from its construction site on the SCI-Arc campus in downtown Los Angeles more than 40 miles south to Orange County Great Park in Irvine, where this year's competition will be held starting on October 3.

While some of DALE's competitors had to use large cranes to transport their entries or coordinate weeks-long international shipping to the competition site, project manager Andrew Gong (BS '12) says that DALE spent only about three and a half hours in transit. "We picked up DALE with a heavy-duty forklift and placed it on long trucks," Gong says. "And there wasn't any damage other than expected small scrapes to the bottom from the forks."

DALE's design consists of two configurable, box-like modules—one kitchen and bathroom module, and one living and sleeping space module—that can move together or apart. When in the open configuration, DALE's design exploits the ambient outdoor temperature to heat or cool the house, helping to maintain a comfortable temperature within the house without using extra energy for heating and air-conditioning. This moving house was designed with sustainability in mind, but the modules also made it easier for team DALE to truck its house down the interstate to Irvine. "Since the house is composed of modules, it was actually fairly simple to pack up and ship. The main issue was just making sure everything got packed on time," Gong says.

The SCI-Arc/Caltech collaboration is one of 20 teams in the Department of Energy competition, each challenged to design and build affordable, attractive, energy-efficient houses that have the comforts of modern living but are powered only by the sun. As the name "Solar Decathlon" implies, teams will compete for the best total number of points in 10 contests. A panel of experts will use the contests to judge and score the entries based on features ranging from architecture and market appeal to affordability and each house's ability to host a movie night—called the Home Entertainment Contest.

The SCI-Arc/Caltech team wants DALE to score well in the overall competition, but the Caltech team members hope they score especially well in one particular aspect: the Engineering Contest. In this contest, a jury made up of professional engineers will judge each house based on the home's functionality, efficiency, innovation, reliability, and project documentation. "In 2011, with CHIP—our first Solar Decathlon entry—we came in second place. With DALE, we took that second place as a challenge," says Gong, "because now we have to get first, obviously!"

To optimize DALE's energy efficiency, the Caltech members of the team spent months calculating and modeling the home's likely energy usage during the competition. Thirty years of Orange County weather data were used to predict heating and air-conditioning needs for the October competition. "The contests are held on different days of the competition, and we based the energy budget on the contests we will have on a given day: the cooking contest, movie night, heating and air-conditioning test, etc.," Gong says. "With our competition energy budget, we then modeled out the performance required from the solar panels to meet that energy demand."
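The sketch below shows, in schematic form, what such a daily budget-versus-production comparison looks like. All loads, panel area, efficiency, and insolation values are invented placeholders, not the team's actual figures.

```python
# Schematic competition-day energy budget. Every number here is an invented
# placeholder, not a figure from the SCI-Arc/Caltech team.
daily_loads_kwh = {
    "hvac": 6.0,            # heating and air-conditioning test
    "hot_water": 2.5,
    "cooking_contest": 3.0,
    "movie_night": 1.0,     # home entertainment contest
    "misc_appliances": 2.0,
}

panel_area_m2 = 30.0        # assumed array size
panel_efficiency = 0.18     # assumed module efficiency
peak_sun_hours = 5.0        # assumed October insolation, kWh per m^2 per day

demand_kwh = sum(daily_loads_kwh.values())
production_kwh = panel_area_m2 * panel_efficiency * peak_sun_hours

print(f"demand: {demand_kwh:.1f} kWh, production: {production_kwh:.1f} kWh")
print("net-zero day" if production_kwh >= demand_kwh else "energy deficit")
```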

According to their calculations, DALE's oversized solar panels will allow the house to be net-zero during the decathlon—meaning it will produce as much energy as it consumes. And in the future, if the house were used during the longer daylight hours of summer, DALE could produce far more energy than it uses, Gong adds.

The SCI-Arc/Caltech team also designed an energy-saving mobile app for DALE that would allow its owner to monitor the home's real-time energy supply and consumption and take steps to use less energy. "As a team, we are aiming to create a house that is not only energy efficient by itself, but also encourages the inhabitants to live a greener lifestyle," Caltech electrical engineering student Do Hee Kim says on the DALE website. "We have made it simple for homeowners to execute these actions by having the ability to remotely turn on and off home appliances," Kim says.

Visitors will be able to interact with DALE and explore its innovative features during public viewings scheduled for October 3–6 and 10–13 from 11 a.m. to 7 p.m. In addition, visitors arriving at 2:30 will get to see the home reconfigured in real time, a feature that sets DALE apart from the other Solar Decathlon entries. The winners will be announced on October 12, and just a few days later, the team will pack up for the move back to Los Angeles. After the competition, DALE will be displayed at the SCI-Arc campus.

And for any house hunters visiting the competition, the SCI-Arc/Caltech team has good news: DALE is for sale and can be delivered to a new owner. Although the Department of Energy provides a limited amount of seed money for Solar Decathlon teams, fundraising is necessary to cover the actual costs of production; funds from the sale of the home will go to recoup some of this year's competition costs and could also help support an entry bid for the 2015 Solar Decathlon.


Meet DALE: Solar Decathlon 2013 Construction is Under Way

Trading in their textbooks for power tools this summer, a group of nine Caltech students and recent graduates have had a unique opportunity to apply their classroom knowledge to real-world challenges. Along with students in architectural design from the Southern California Institute of Architecture (SCI-Arc), the Caltech students have spent their summer building the Dynamic Augmented Living Environment (DALE), a joint SCI-Arc/Caltech entry in the 2013 Solar Decathlon competition. DALE marks Caltech's second collaboration with SCI-Arc, following their Compact Hyper-Insulated Prototype (CHIP), the partnership's first Solar Decathlon entry, in 2011.

Sponsored by the Department of Energy, the biennial Solar Decathlon competition challenges collegiate teams to "design, build, and operate solar-powered houses that are cost-effective, energy-efficient, and attractive." Contest rules state that each entry must be a net-zero home, meaning that its solar panels must produce at least as much energy as the home uses.

Construction on the SCI-Arc/Caltech collaboration began in March, when DALE's concrete foundation was poured. In April, the home's steel frames were dropped in, allowing the students (guided by a few construction professionals) to begin nailing the lumber into place.

As of August, the home is starting to take shape; the bathroom has been framed out, the kitchen cabinets are set for installation, and soon the house will be sporting a vinyl exterior and a set of moving canopies that will hold its solar panels.

Although construction work only began a few months ago, the Caltech students began planning for DALE last fall in an engineering project course called Introduction to Multidisciplinary Systems Engineering, taught by Melany Hunt, Dotty and Dick Hayman Professor of Mechanical Engineering and a vice provost.

"I really like this project because it's very hands-on," says DALE team member Zeke Millikan (BS '13, mechanical engineering). "A lot of classes at Caltech are very theoretical, and I'm more of a hands-on type of person. It's really satisfying to actually build something and see it come together."

"Prior to this summer," says DALE team member Sheila Lo ('16), "I didn't really have a lot of experience in construction, so I spent a lot of time learning the terminology and how to use which tools in certain situations. As one of the youngest members of the team, it's been a great privilege to work with upperclassmen and recent graduates because they've taught me a lot about dedication to a project and what it means to apply the skills you learn at Caltech."

And this dedication will be important in the coming weeks, as there is still plenty of work to be done for the early-October competition. Unlike the five previous Solar Decathlons, which were held in Washington, D.C., this year's event will take place in nearby Irvine, California. "Having the competition just right down the road from us inspired the design," says DALE team member Ella Seal (BS '13, mechanical engineering).

To capitalize on Southern California's mild climate, DALE is made up of two moving modules that can glide apart on warm sunny days, creating an open indoor courtyard that can triple the home's available living space. During inclement weather—and for enhanced safety and privacy—DALE's modules can also move together, creating an enclosed home of about 600 square feet.

The home's untraditional moving design—conceived by SCI-Arc team members—is more than just eye-catching. "It also will actually save energy and money over the course of the year," says Seal. By varying the configurations of DALE's modules and shade canopies—the same ones that will hold DALE's solar panels—the Caltech students were able to optimize energy efficiency during different times of the day without sacrificing comfort. "During the summer, the air-conditioning energy consumption drops by at least half when you are able to open up the house and adjust the shading depending on the weather outside," says Millikan.

But a moving house also presents several engineering challenges, says Seal. Wires for electricity and pipes for plumbing had to be specially designed for their moving platform. Seal and Millikan were also tasked with creating a foolproof safety mechanism for DALE's movement systems. Applying their backgrounds in mechanical engineering, they created a system of laser beams, light curtains, and pressure sensors that acts "basically like a garage door sensor on steroids," says Millikan. "We think we've addressed pretty much every scenario where someone could get seriously hurt."

In addition to the movement systems, students from Caltech are responsible for designing the home's heating, ventilation, and air-conditioning system; hot water system; photovoltaic arrays; and other engineering aspects of the solar-powered home. As well as their technical contributions, the Caltech students will collaborate with their SCI-Arc teammates on publicity and fund-raising efforts and the compilation of a final written report.

"I appreciate the fact that it's not just engineering," says Seal. "I really like the fact that we have to write an engineering narrative, describing all of the really cool innovations that we've built into the house. It's not necessarily something that I would get to do if I took a different project class at Caltech."

This type of multidisciplinary and collaborative experience is important for Caltech students, notes Hunt. "Engineering students need experiences in which they design, create, build, and test," she says. "They also should have opportunities in which they work as part of a team. Most engineering projects require multiple perspectives with input coming from a range of individuals with different expertise and vision."

In addition to Millikan, Seal, and Lo, the DALE team includes current Caltech students Brynan Qui ('15), Do Hee Kim ('15), and Sharon Wang ('16), as well as recent graduates Tony Wu (BS '13, mechanical engineering and business economics and management) and Christine Viveiros (BS '13, mechanical engineering), and project manager Andrew Gong (BS '12, chemical engineering [materials]). The SCI-Arc/Caltech project, along with other entries for this year's Solar Decathlon competition, will be open to the public October 3–6 and 10–13 at the Orange County Great Park in Irvine, California.


Caltech's Unique Wind Projects Move Forward

Caltech fluid-mechanics expert John Dabiri has some big plans for a high school in San Pedro, military bases in California, and a small village on Bristol Bay, Alaska—not to mention for the future of wind power generation, in general.

Back in 2009, Dabiri, a professor of aeronautics and bioengineering, was intrigued by the pattern of spinning vortices that trail fish as they swim. Curious, he assigned some graduate students to find out what would happen to a wind farm's power output if its turbines were spaced like those fishy vortices. In simulations, energy production jumped by a factor of 10. To prove that the same effect would occur under real-world conditions, Dabiri and his students established a field site in the California desert with 24 turbines. Data gathered from the site proved that placing turbines in a particular orientation in relation to one another profoundly improves their energy-generating efficiency.

The turbines Dabiri has been investigating aren't the giant pinwheels with blades like propellers—known as horizontal-axis wind turbines (HAWTs)—that most people envision when they think about wind power. Instead, Dabiri's group uses much shorter turbines that look something like egg beaters sticking out of the ground. Dabiri and his colleagues believe that with further development, these so-called vertical-axis wind turbines (VAWTs) could dramatically decrease the cost, footprint, and environmental impact of wind farms.

"We have been able to demonstrate that using wind turbines that are 30 feet tall, as opposed to 300 feet tall, could generate sufficient power for wind-farm applications," Dabiri says. "That's important for us because our approach to getting to lower-cost energy is through the use of smaller vertical-axis wind turbines that are simpler—for example, they have no gearbox and don't need to be pointed in the direction of the oncoming wind—and whose performance can be optimized by arranging them properly."

Even as Dabiri and his group continue to study the physics of the wind as it moves through their wind farm and to develop computer models that will help them to predict optimal configurations for turbines in different areas, they are now beginning several pilot projects to test their concept.

"One of the areas where these smaller turbines can have an immediate impact is in the military," says Dabiri. Indeed, the Department of Defense is one of the largest energy consumers in the country and is interested in using renewable methods to meet some of that need. However, one challenge with the use of wind energy is that large HAWTs can interfere with helicopter operations and radar signatures. Therefore, the Office of Naval Research is funding a three-year project by Dabiri's group to test the smaller VAWTs and to further develop software tools to determine the optimal placement of turbines. "We believe that these smaller turbines provide the opportunity to generate renewable power while being complementary to the ongoing activities at the base," Dabiri says.

A second pilot project, funded by the Los Angeles Unified School District, will create a small wind farm that will help to power a new school while teaching its students about wind power. San Pedro High School's John M. and Muriel Olguin Campus, which opened in August 2012, was designed to be one of the greenest schools ever built, with solar panels, artificial turf, and a solar-heated pool—and the plan has long included the use of wind turbines.

"Here, the challenge is that you have a community nearby, and so if you used the very large horizontal-axis wind turbines, you would have the potential issue of the visual signature, the noise, and so on," Dabiri says. "These smaller turbines will be a demonstration of an alternative that's still able to generate wind energy but in a way that might be more agreeable to these communities."

That is one of the major benefits of VAWTs: being smaller, they fit into a landscape far more seamlessly than would 100-meter-tall horizontal-axis wind turbines. Because VAWTs can also be placed much closer to one another, many more of them fit within a given area, allowing a farm to tap more of the wind energy available in that space than is typically possible. The upshot is a wind farm that can be both highly productive and lower in environmental impact than was previously possible.
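A back-of-the-envelope packing comparison illustrates the footprint argument, using rule-of-thumb spacings as assumptions rather than the layouts measured at Dabiri's field site:

```python
# Back-of-the-envelope footprint comparison using rule-of-thumb spacings
# (several rotor diameters between HAWTs; closer for VAWTs). The diameters
# and spacings are illustrative assumptions, not the field-site layout.
def turbines_per_hectare(rotor_diameter_m, spacing_in_diameters):
    spacing_m = rotor_diameter_m * spacing_in_diameters
    return 10_000.0 / (spacing_m ** 2)  # one turbine per spacing-by-spacing cell


hawt_density = turbines_per_hectare(rotor_diameter_m=100.0, spacing_in_diameters=7.0)
vawt_density = turbines_per_hectare(rotor_diameter_m=1.2, spacing_in_diameters=8.0)

print(f"HAWTs per hectare: {hawt_density:.3f}")  # a small fraction of one
print(f"VAWTs per hectare: {vawt_density:.0f}")  # roughly a hundred
```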

That is especially appealing in roadless areas such as Alaska's Bristol Bay, located at the eastern edge of the Bering Sea. The villages around the bay—a crucial ecosystem for sockeye salmon—face particular challenges when it comes to meeting their energy needs. The high cost of transporting diesel fuel to the region to generate power creates a significant barrier to sustainable economic development. However, the region also has strong wind resources, and that's where Dabiri comes in.

With funding from the Gordon and Betty Moore Foundation, Dabiri and his group, in collaboration with researchers at the University of Alaska Fairbanks, will be starting a three-year project this summer to assess the performance of a VAWT wind farm in a village called Igiugig. The team will start by testing a few different VAWT designs. Among them is a new polymer rotor, designed by Caltech spinoffs Materia and Scalable Wind Solutions, which may withstand icing better than standard aluminum rotors.

"Once we've figured out which components from which vendors are most effective in that environment, the idea is to expand the project next year, to have maybe a dozen turbines at the site," Dabiri says. "To power the entire village, we'd be talking somewhere in the 50- to 70-turbine range, and all of those could be on an acre or two of land. That's one of the benefits—we're trying to generate the power without changing the landscape. It's pristine, beautiful land. You wouldn't want to completely change the landscape for the sake of producing energy."

Video and images of Dabiri's field site in the California desert can be found at http://dabiri.caltech.edu/research/wind-energy.html.

 

The Gordon and Betty Moore Foundation, established in 2000, seeks to advance environmental conservation and scientific research around the world and improve the quality of life in the San Francisco Bay Area. The Foundation's Science Program aims to make a significant impact on the development of provocative, transformative scientific research, and increase knowledge in emerging fields.

Writer: Kimm Fesenmaier

Frances Arnold Wins Eni Award for Renewable-Energy Work

PASADENA, Calif.—For the second year in a row, a faculty member from the California Institute of Technology (Caltech) has been awarded the Eni Award in Renewable and Non-Conventional Energy. This year, chemical engineer Frances Arnold—who pioneered methods of "directed evolution" for the production and optimization of biological catalysts—has been chosen to receive the distinction, along with her colleague James Liao of UCLA.

Arnold, Caltech's Dick and Barbara Dickinson Professor of Chemical Engineering, Bioengineering and Biochemistry, has shown that mimicking Darwinian evolution in the laboratory is an efficient way to engineer the amino-acid sequence of a protein, endowing it with new capabilities or improving its performance. Arnold and her colleagues have used directed evolution to improve catalysts for making fuels and chemicals from renewable resources.

"There are a lot of creative people working on renewable and non-conventional energy, so it is a huge honor to be selected for this distinction," Arnold says. "This prize recognizes the basic technology we've developed over the years, but especially the application of directed evolution to making things that we currently get from non-renewable hydrocarbons."

The Eni Awards are international prizes that recognize outstanding research and development in the fields of energy and the environment. Eni is an integrated energy company based in Italy. According to the company's website, "The Eni Award was created to develop better use of renewable energy, promote environmental research and encourage new generations of researchers."

A 24-person scientific award committee selects the honorees each year in four categories: New Frontiers of Hydrocarbons, Renewable and Non-Conventional Energy, Protection of the Environment, and Debut in Research. Three additional prizes are awarded for innovative and applied research within Eni, in energy and the environment.

In 2012, Harry A. Atwater, Caltech's Howard Hughes Professor and professor of applied physics and materials science, and director of the Resnick Sustainability Institute, along with his colleague Albert Polman of the Dutch Research Institute AMOLF, was awarded the same Eni Award in Renewable and Non-Conventional Energy, for developing new ultrathin, high-efficiency solar cells.

Of Caltech's back-to-back Eni Awards, Arnold says, "It shows that the renewable-energy research going on at Caltech is world-class. Other places may have much bigger programs, but for impact and accomplishment, the research that the Resnick Institute supports is recognized throughout the world as being at the very top. These groups are making real progress on some of the most important problems we face today."

Arnold, Liao, and the other 2013 awardees will receive their prizes on June 27 at the Presidential Palace in Rome.

Writer: Kimm Fesenmaier

Fifty Years of Clearing the Skies

A Milestone in Environmental Science

Ringed by mountains and capped by a temperature inversion that traps bad air, Los Angeles has had bouts of smog since the turn of the 20th century. An outbreak in 1903 rendered the skies so dark that many people mistook it for a solar eclipse. Angelenos might now be living in a state of perpetual midnight—assuming we could live here at all—were it not for the work of Caltech Professor of Bio-organic Chemistry Arie Jan Haagen-Smit. How he did it is told here largely in his own words, excerpted from Caltech's Engineering & Science magazine between 1950 and 1962. (See "Related Links" for the original articles.)

Old timers, which in California means people who have lived here some 25 years, will remember the invigorating atmosphere of Los Angeles, the wonderful view of the mountains, and the towns surrounded by orange groves. Although there were some badly polluted industrial areas, it was possible to ignore them and live in more pleasant locations, especially the valleys . . . Just 20 years ago, the community was disagreeably surprised when the atmosphere was filled with a foreign substance that produced a strong irritation of the eyes. Fortunately, this was a passing interlude which ended with the closing up of a wartime synthetic rubber plant. (November 1962)

Alas, the "interlude" was an illusion. In the years following World War II, visibility often fell to a few blocks. The watery-eyed citizenry established the Los Angeles County Air Pollution Control District (LACAPCD) in 1947, the first such body in the nation. The obvious culprits—smoke-belching power plants, oil refineries, steel mills, and the like—were quickly regulated, yet the problem persisted. Worse, this smog was fundamentally different from air pollution elsewhere—the yellow, sulfur-dioxide-laced smog that killed 20 people in the Pennsylvania steel town of Donora in 1948, for example, or London's infamous pitch-black "pea-soupers," where the burning of low-grade, sulfur-rich coal added soot to the SO2. (The Great Smog of 1952 would carry off some 4,000 souls in four days.) By contrast, L.A.'s smog was brown and had an acrid odor all its own.

Haagen-Smit had honed his detective skills isolating and identifying the trace compounds responsible for the flavors of pineapples and fine wines, and in 1948 he began to turn his attention to smog.

Chemically, the most characteristic aspect of smog is its strong oxidizing action . . . The amount of oxidant can readily be determined through a quantitative measurement of iodine liberated from potassium iodide solution, or of the red color formed in the oxidation of phenolphthalin to the well-known acid-base indicator, phenolphthalein. To demonstrate these effects, it is only necessary to bubble a few liters of smog air through the colorless solutions. (December 1954)

His chief suspect was ozone, a highly reactive form of oxygen widely used as a bleach and a disinfectant. It's easy to make—a spark will suffice—and it's responsible for that crisp "blue" odor produced by an overloaded electric motor. But there was a problem:

During severe smog attacks, ozone concentrations of 0.5 ppm [parts per million], twenty times higher than in [clean] country air, have been measured. From such analyses the quantity of ozone present in the [Los Angeles] basin at that time is calculated to be about 500 tons.

Since ozone is subject to a continuous destruction in competition with its formation, we can estimate that several thousand tons of ozone are formed during a smog day. It is obvious that industrial sources or occasional electrical discharges do not release such tremendous quantities of ozone. (December 1954)
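That tonnage figure is a straightforward mass-balance estimate, and its order of magnitude is easy to reproduce. The short Python sketch below assumes an illustrative polluted area of about 1,500 square kilometers and a 300-meter mixing depth beneath the inversion; those geometric values are placeholders rather than numbers from Haagen-Smit's article, but they show how 0.5 ppm of ozone translates into a few hundred tons of gas hanging over the basin.

```python
# Back-of-envelope check of the ~500-ton ozone estimate quoted above.
# The basin geometry below is an illustrative assumption, not a figure
# from Haagen-Smit's article.

MOLAR_MASS_O3 = 48.0    # g/mol
MOLAR_MASS_AIR = 28.97  # g/mol
AIR_DENSITY = 1.2       # kg/m^3 near sea level

ozone_ppm = 0.5            # severe-smog concentration quoted above
basin_area_km2 = 1500.0    # assumed polluted area of the basin
mixing_height_m = 300.0    # assumed mixing depth below the inversion

volume_m3 = basin_area_km2 * 1e6 * mixing_height_m
air_mass_kg = AIR_DENSITY * volume_m3

# ppm is by volume (mole fraction), so convert to a mass fraction.
mass_fraction = ozone_ppm * 1e-6 * (MOLAR_MASS_O3 / MOLAR_MASS_AIR)
ozone_tonnes = air_mass_kg * mass_fraction / 1000.0

print(f"Ozone aloft: roughly {ozone_tonnes:.0f} metric tons")
# With these assumptions the answer lands in the few-hundred-ton range,
# the same order of magnitude as the 500 tons cited in 1954.
```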

If ozone really was to blame, where was it coming from? An extraordinary challenge lay ahead:

The analysis of air contaminants has some special features, due to the minute amounts present in a large volume of air. The state in which these pollutants are present—as gases, liquids and solid particles of greatly different sizes—presents additional difficulties. The small particles of less than one micron diameter do not settle out, but are in a stable suspension and form so-called aerosols.

The analytical chemist has devoted a great deal of effort to devising methods for the collection of this heterogeneous material. Most of these methods are based on the principle that the particles are given enough speed to collide with each other or with collecting surfaces . . . A sample of Los Angeles' air shows numerous oily droplets of a size smaller than 0.5 micron, as well as crystalline deposits of metals and salts . . . When air is passed through a filter paper, the paper takes on a grey appearance, and extraction with organic solvents gives an oily material. (December 1950)

Haagen-Smit suspected that this oily material, a complex brew of organic acids and other partially oxidized hydrocarbons, was smog's secret ingredient. In 1950, he took a one-year leave of absence from Caltech to prove it, working full-time in a specially equipped lab set up for him by the LACAPCD. By the end of the year, he had done so.

Through investigations initiated at Caltech, we know that the main source of this smog is due to the release of two types of material. One is organic material—mostly hydrocarbons from gasoline—and the other is a mixture of oxides of nitrogen. Each one of these emissions by itself would be hardly noticed. However, in the presence of sunlight, a reaction occurs, resulting in products which give rise to the typical smog symptoms. The photochemical oxidation is initiated by the dissociation of NO2 into NO and atomic oxygen. This reactive oxygen attacks organic material, resulting in the formation of ozone and various oxidation products . . . The oxidation reactions are generally accompanied by haze or aerosol formation, and this combination aggravates the nuisance effects of the individual components of the smog complex. (November 1962)
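The inorganic core of that cycle, in which sunlight splits NO2, the liberated oxygen atom forms ozone, and NO destroys ozone again, settles into a balance known as the photostationary state, where the ozone level is set by the ratio of NO2 to NO. Hydrocarbons matter because their oxidation products convert NO back to NO2 without consuming ozone, tilting that ratio upward. The Python sketch below illustrates the idea; the rate values and concentrations are illustrative, textbook-style assumptions, not figures from Haagen-Smit's work.

```python
# A minimal sketch of the NO-NO2-O3 photostationary state implied by the
# cycle described above. Rate values are illustrative, room-temperature
# textbook-style numbers, not measurements from Haagen-Smit's work.

J_NO2 = 8.0e-3               # assumed NO2 photolysis frequency, 1/s (midday sun)
K_NO_O3 = 1.8e-14            # NO + O3 -> NO2 rate constant, cm^3/(molecule*s)
AIR_NUMBER_DENSITY = 2.5e19  # molecules per cm^3 near the surface

def steady_state_ozone_ppm(no2_ppb: float, no_ppb: float) -> float:
    """Ozone set by the balance J*[NO2] = k*[NO]*[O3]."""
    no2 = no2_ppb * 1e-9 * AIR_NUMBER_DENSITY  # molecules/cm^3
    no = no_ppb * 1e-9 * AIR_NUMBER_DENSITY
    o3 = J_NO2 * no2 / (K_NO_O3 * no)
    return o3 / AIR_NUMBER_DENSITY * 1e6       # convert back to ppm

# Clean air: NO and NO2 roughly balanced, so little ozone accumulates.
print(f"{steady_state_ozone_ppm(no2_ppb=10, no_ppb=10):.2f} ppm")   # ~0.02
# Smoggy air: hydrocarbon oxidation converts NO to NO2, tilting the ratio.
print(f"{steady_state_ozone_ppm(no2_ppb=100, no_ppb=5):.2f} ppm")   # ~0.36
```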

Professor of Plant Physiology Frits Went was also on the case. Went ran Caltech's Earhart Plant Research Laboratory, which he proudly called the "phytotron," by analogy to the various "trons" operated by particle physicists. (Phyton is the Greek word for plant.) "Caltech's plant physiologists happen to believe that the phytotron is as marvellously complicated as any of the highly-touted 'atom-smashing' machines," Went wrote in E&S in 1949. "[It] is the first laboratory in the world in which plants can be grown under every possible climatic condition. Light, temperature, humidity, gas content of the air, wind, rain, and fog—all these factors can be simultaneously and independently controlled. The laboratory can create Sacramento Valley climate in one room and New England climate in another." Most of Los Angeles was still orchards and fields instead of tract houses, and the smog was hurting the produce. Went, the LACAPCD, and the UC Riverside agricultural station tested five particularly sensitive crops in the phytotron, Haagen-Smit wrote.

The smog indicator plants include spinach, sugar beet, endive, alfalfa and oats. The symptoms on the first three species are mainly silvering or bronzing of the underside of the leaf, whereas alfalfa and oats show bleaching effects. Some fifty compounds possibly present in the air were tested on their ability to cause smog damage—without success. However, when the reaction products of ozone with unsaturated hydrocarbons were tried, typical smog damage resulted. (December 1950)

And yet a third set of experiments was under way. Rubber tires were rotting from the smog at an alarming rate, cracking as they flexed while rolling along the road. Charles E. Bradley, a research associate in biology, turned this distressing development into a cheap and effective analytical tool by cutting rubber bands by the boxful into short segments. The segments—folded double, secured with a twist of wire, and set outside—would start to fall apart almost before one could close the window. "During severe smog initial cracking appears in about four minutes, as compared to an hour or more required on smog-free days, or at night," Haagen-Smit wrote in the December 1954 E&S.

The conclusion that airborne gasoline and nitrogen oxides (another chief constituent of automobile exhaust) were to blame for smog was not well received by the oil industry, which hired its own expert to prove Haagen-Smit wrong. Abe Zarem (MS '40, PhD '44), the manager and chairman of physics research for the Stanford Research Institute, opined that stratospheric ozone seeping down through the inversion layer was to blame. But seeing (or smelling) is believing, so Haagen-Smit fought back by giving public lectures in which he whipped up flasks of artificial smog before the audience's eyes, which soon began to water, especially in the first few rows. By the end of his talk, the smog would fill the hall, and he became known throughout the Southland as Arie Haagen-Smog.

By 1954, he and Frits Went had carried the day.

[Plant] fumigations with the photochemical oxidation products of gasoline and nitrogen dioxide (NO2) was the basis of one of the most convincing arguments for the control of hydrocarbons by the oil industry. (December 1954)

It probably didn't hurt that an outbreak that October closed schools and shuttered factories for most of the month, and that angry voters were turning up at protest meetings wearing gas masks. By then, there were some two million cars on the road in the metropolitan area, spewing a thousand tons of hydrocarbons daily.

Incomplete combustion of gasoline allows unburned and partially burned fuel to escape from the tailpipe. Seepage of gasoline, even in new cars, past piston rings into the crankcase, is responsible for 'blowby' or crankcase vent losses. Evaporation from carburetor and fuel tank are substantial contributions, especially on hot days. (November 1962)

Haagen-Smit was a founding member of California's Motor Vehicle Pollution Control Board, established in 1960. One of the board's first projects was testing positive crankcase ventilation (PCV) systems, which sucked the blown-by hydrocarbons out of the crankcase and recirculated them through the engine to be burned on the second pass. PCV systems were mandated on all new cars sold in California as of 1963. The blowby problem was thus easily solved—but, as Haagen-Smit noted in that same article, it was only the second-largest source, representing about 30 percent of the escaping hydrocarbons.

The preferred method of control of the tailpipe hydrocarbon emission is a better combustion in the engine itself. (The automobile industry has predicted the appearance of more efficiently burning engines in 1965. It is not known how efficient these will be, nor has it been revealed whether there will be an increase or decrease of oxides of nitrogen.) Other approaches to the control of the tailpipe gases involve completing the combustion in muffler-type afterburners. One type relies on the ignition of gases with a sparkplug or pilot-burner; the second type passes the gases through a catalyst bed which burns the gases at a lower temperature than is possible with the direct-flame burners. (November 1962)

Installing an afterburner in the muffler has some drawbacks, not the least of which is that the notion of tooling around town with an open flame under the floorboards might give some people the willies. Instead, catalytic converters became required equipment on California cars in 1975.

In 1968, the Motor Vehicle Pollution Control Board became the California Air Resources Board, with Haagen-Smit as its chair. He was a member of the 1969 President's Task Force on Air Pollution, and the standards he helped those two bodies develop would eventually be adopted by the Environmental Protection Agency, established in 1970—the year that also saw the first celebration of Earth Day. It was also the year when ozone levels in the Los Angeles basin peaked at 0.58 parts per million, nearly five times the 0.12 parts per million that the EPA would later declare safe for human health. That reading even exceeded the 0.5 ppm Haagen-Smit had measured back in 1954, but it was a triumph nonetheless—the number of cars in L.A. had doubled, yet the smog was little worse than it had always been. That was the year we turned the corner, in fact, and our ozone levels have been dropping ever since—despite the continued influx of cars and people to the region.

Haagen-Smit retired from Caltech in 1971 as the skies began to clear, but continued to lead the fight for clean air until his death in 1977—of lung cancer, ironically, after a lifetime of cigarettes. Today, his intellectual heirs, including professors Richard Flagan, Mitchio Okumura, John Seinfeld, and Paul Wennberg, use analytical instruments descended from ones Haagen-Smit would have recognized and computer models sophisticated beyond his wildest dreams to carry the torch—a clean-burning one, of course—forward.

Writer: Douglas Smith
