Two Caltech Faculty Inducted into the AAAS

Erik Winfree (PhD '98) and Jay R. Winkler (PhD '84) have been elected as Fellows of the American Association for the Advancement of Science (AAAS). Winfree, a professor of computer science, computation and neural systems, and bioengineering, was recognized by the AAAS for his "foundational contributions to biomolecular computing and molecular programming." Winkler is a faculty associate and lecturer in chemistry in the Division of Chemistry and Chemical Engineering, as well as the director of the Beckman Institute Laser Resource Center. He was elected for "distinguished contributions to the field of electron transfer chemistry and the development of its applications in biology, materials science, and solar energy."

Winfree's research in biomolecular computing aims to "coax DNA into performing algorithmic tricks," he says. An algorithm is a collection of mechanistic rules—information—that directs the creation and organization of structure and behavior. In biology, information in DNA can be likened to an algorithm: it encodes instructions for biochemical networks, body plans, and brain architectures, and thus produces complex life. The Winfree group is developing molecular engineering methods that exploit the same principles as those used by biology: they study theoretical models of computation based on realistic molecular biochemistry, write software for molecular system design and analysis, and experimentally synthesize promising systems in the laboratory using DNA nanotechnology.

"We are seeking to create a kind of molecular programming language: a set of elementary components and methods for combining them into complex systems that involve self-assembled structures and dynamical behaviors," Winfree says. "DNA is capable of and can be rationally designed to perform a wide variety of tasks. We want to know if DNA is a sufficient building block for constructing arbitrarily complex and sophisticated molecular machines."

Winfree became an assistant professor at Caltech in 2000, an associate professor in 2006, and was named full professor in 2010. He was also named a MacArthur Fellow in 2000.

Winkler works on developing new methods for using laser spectroscopy to study chemical kinetics and the intermediate molecules that form during chemical reactions. In particular, his work involves experimental investigations of the factors that affect the rates of long-range electron-tunneling processes—the processes by which electrons are transported between atoms and molecules.

"Electron transfer reactions are fundamental processes in many chemical transformations, including electrochemical catalysis, solar energy conversion, and biological energy transduction," Winkler says. "In the Beckman Institute Laser Center, we have spent the past 25 years studying electron transfer reactions in small inorganic molecules and in metalloproteins"—proteins that contain metal atoms. "Our studies are aimed at experimentally elucidating the molecular factors that regulate the speed and efficiency of electron flow.

"I have been fortunate to work on these projects with many dedicated and talented students and postdoctoral scholars at Caltech. It is extremely gratifying to have this work recognized by the AAAS," he adds.

Following postdoctoral work at the Brookhaven National Laboratory, Winkler returned to Caltech as a Member of the Beckman Institute in 1990. He was first appointed as a lecturer in chemistry in 2002 and became a faculty associate in chemistry in 2008.

In addition to Winkler and Winfree, eight other Caltech alumni were named as AAAS Fellows: Edmund W. Bertschinger (BS '79), J. Edward Russo (BS '63), Mitchell Kronenberg (PhD '83), Donald P. Gaver III (BS '82), James W. Demmel II (BS '75), Jacqueline E. Dixon (PhD '92), Brian K. Lamb (PhD '78), and Shelly Sakiyama-Elbert (MS '98, PhD '00).

The AAAS is the world's largest general scientific society. This year, the AAAS awarded the distinction of Fellow to 347 of its members. New Fellows will be honored during the 2016 AAAS Annual Meeting in February.

Writer: Lori Dajose

In Memoriam: Xuji Fan (MS ’37, MS ’39)

Chinese aeronautical engineer and academic leader Xuji Fan (MS '37, MS '39) passed away on November 21. He was 102. The former president of Shanghai Jiao Tong University, one of the oldest and most prestigious centers of higher education in China, Fan is widely regarded as an early pioneer in aerodynamics and one of the architects of China's aeronautical education system.

Read more at alumni.caltech.edu


Schools Help Researchers Understand Quakes

In the event of a major earthquake in Los Angeles, first responders ideally would immediately have a map of the most intense shaking around the city—allowing them to send help to the hardest-hit areas first.

A new collaboration between Caltech researchers and schools of the Los Angeles Unified School District (LAUSD) provides a crucial step in the creation of such damage maps by vastly broadening the scope of a dense network of seismic sensors in the Los Angeles Basin.

To create an accurate shaking-intensity map, seismologists need to measure ground motion—which can vary from kilometer to kilometer because of differences in soil and earth structure—at many locations across the region. In 2011, Professor of Geophysics Rob Clayton and his colleagues, Professor of Engineering Seismology Tom Heaton and Simon Ramo Professor of Computer Science, Emeritus, K. Mani Chandy, began creating a web of such sensors via the Community Seismic Network (CSN), a program funded by the Gordon and Betty Moore Foundation.

The CSN consists of hundreds of small, inexpensive accelerometers—instruments that detect ground movements before, during, and after a seismic event—installed initially in the homes of volunteers in the greater Pasadena area. Since 2011, each device has been actively collecting and feeding seismic information to the CSN via its host home's Internet connection, allowing Clayton and his colleagues to create high-resolution maps of seismic activity in the western San Gabriel Valley. But the Caltech team wanted to find a way to expand the reach of the network throughout the earthquake-prone broader Los Angeles Basin. Their inventive solution? Integrate accelerometers into the infrastructure of L.A.'s public schools.
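In spirit, each CSN station is a small accelerometer attached to a low-cost computer that samples ground motion continuously and reports it over the host's Internet connection. The sketch below illustrates only that pattern; the endpoint URL, field names, and reading function are hypothetical stand-ins, not the actual CSN client software or API.

```python
# Illustrative host-side sensor client in the spirit of the CSN setup described
# above. The endpoint URL, field names, and reading function are hypothetical
# stand-ins, not the actual CSN software or API.
import json
import time
import urllib.request

UPLOAD_URL = "https://example.org/csn/upload"  # hypothetical collection server

def read_acceleration():
    """Placeholder for reading (x, y, z) acceleration, in g, from a MEMS sensor."""
    return (0.001, -0.002, 1.000)

def report_sample(sensor_id):
    x, y, z = read_acceleration()
    payload = json.dumps({
        "sensor": sensor_id,
        "timestamp": time.time(),
        "accel_g": [x, y, z],
    }).encode()
    req = urllib.request.Request(UPLOAD_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)  # sent over the host's Internet connection

if __name__ == "__main__":
    report_sample("school-042")
```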

Through the efforts of Richard Guy, CSN project manager, sensors already have been installed in 100 LAUSD schools, covering an area ranging from northeast Los Angeles to downtown. CSN is now working to expand the project to include all of the district's more than 1,000 schools.

The new collaboration has the potential to help millions of people in Southern California when a big quake strikes. For example, data from the new network could be incorporated into the ShakeAlert early-warning system that is currently under development. Although no sensor can predict an earthquake, the accelerometers can detect an earthquake in one area of the L.A. Basin so quickly that a warning could be sent to people in adjacent areas of the basin before strong shaking arrives—potentially giving them enough time to find a safe spot.
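The warning time available depends mostly on how fast seismic waves travel relative to the distance between the rupture and the people being warned. The back-of-the-envelope sketch below uses a typical crustal shear-wave speed and an assumed detection-and-alerting latency for illustration; these numbers are not parameters of ShakeAlert itself.

```python
# Back-of-the-envelope early-warning estimate. The wave speed is a typical
# crustal value; the processing latency is an assumed figure for illustration,
# not a ShakeAlert specification.
S_WAVE_KM_S = 3.5         # slower waves that carry most of the strong shaking
PROCESSING_DELAY_S = 4.0  # assumed time to detect, locate, and issue the alert

def warning_time_s(distance_km):
    """Approximate seconds of warning at a site this far from the epicenter."""
    s_arrival = distance_km / S_WAVE_KM_S
    return max(0.0, s_arrival - PROCESSING_DELAY_S)

for d in (10, 30, 60):
    print(f"{d} km from the epicenter: ~{warning_time_s(d):.0f} s of warning")
```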

The new, denser network of sensors will also provide an improved map of shaking intensity for the whole region. The U.S. Geological Survey already provides a similar service, called ShakeMap, which relies on sensors located several miles apart and hence cannot resolve shaking and possible damage block by block. The CSN data have the potential to give ShakeMap a more accurate assessment of damage for response and recovery efforts.

"You can imagine a fire chief stepping out and saying, 'Wow. That was a big one. Now where do I go to help the community?' Obviously they want their focus to be where the maximum damage and danger is. They have other things to worry about too, but the best proxy for damage that we have is the level of shaking—and our dense network of sensors can provide that information," Clayton says.

But the new sensors do more than feed information into the network—they also provide valuable information to individual schools. "Principals have a particularly difficult problem in the event of an earthquake," Clayton says. "The first thing during a quake, of course, is to tell everyone to get under their desk. When the shaking stops, all of the kids are evacuated out of the school and into the schoolyard. And then what do principals do? At that point, they have to decide if it's safe enough to go back into the school, or if they should just send the kids home. But they do not know how badly the school is damaged."

The new school sensors could help inform this judgment call, Clayton says. Although they work in much the same way as those that were previously placed in volunteers' homes—recording ground accelerations and transmitting those data back to the researchers via an Internet connection—the sensors also contain an onboard computer that compares the event to a so-called fragility curve. Fragility curves predict the damage that a particular building would sustain given the measured shaking.
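Lognormal fragility curves are a standard form for this kind of check in earthquake engineering. The sketch below shows what such an onboard comparison might look like; the median and dispersion values are invented for illustration and are not data for any actual LAUSD building.

```python
# Sketch of a fragility-curve check of the kind described above. Lognormal
# fragility curves are standard in earthquake engineering, but the parameters
# below are invented for illustration only.
from math import log
from statistics import NormalDist

def damage_probability(pga_g, median_g, beta):
    """P(damage state exceeded | peak ground acceleration), lognormal fragility curve."""
    return NormalDist().cdf(log(pga_g / median_g) / beta)

measured_pga = 0.25  # g, as recorded by the school's sensor (hypothetical value)
p = damage_probability(measured_pga, median_g=0.4, beta=0.5)
print(f"Estimated probability of at least moderate damage: {p:.0%}")
```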

"Coupled with the fragility curve, the sensors could allow a school official to decide whether or not it is safe to reenter the school," Clayton says.

The Community Seismic Network's LAUSD collaboration was funded by the Gordon and Betty Moore Foundation. The network is a collaboration between Caltech's seismology, earthquake-engineering, and computer-science departments.

Cancer Treatment in a Painless Patch

Chemotherapy is a life-saving medical intervention for millions of cancer patients, but the treatment is often not a pleasant experience. To kill off cancer cells, chemotherapy drugs must enter the patient's bloodstream directly, so they are administered intravenously. But are large, often painful needles the only reliable way to deliver the drugs?

Caltech senior Teo Wilkening, a mechanical engineering major in the Division of Engineering and Applied Science, spent this past summer testing the preliminary design of an alternative—and possibly much less painful—method: drug delivery through a patch.

Caltech's Mory Gharib, the Hans W. Liepmann Professor of Aeronautics and Bioinspired Engineering, first came up with the idea for the patch several years ago. Gharib's interest in painless drug delivery patches was renewed after a discussion with M. Houman Fekrazad, a cancer specialist at the City of Hope in Duarte, California. When Wilkening joined the Gharib lab in June as part of the Summer Undergraduate Research Fellowships (SURF) program, Gharib encouraged him to come up with a way to design and test the feasibility of such a patch.

"When we started thinking about designing a chemotherapy patch, we split the project into two main parts," Wilkening says. One part is to create a compartment that holds the fluid or medicine; the second is the design of a needle-like device to physically deliver the medicine into the patient's bloodstream. "Over the summer, I started working on the needles," he says.

Any chemotherapy delivery device must provide a way for the drug to get through the skin and into the blood. To avoid the pain caused by the large needle traditionally used for such an intravenous injection, Gharib envisioned a patch containing hundreds of micrometer-scale needles, too small in diameter to be sensed by the nerves in the skin. Wilkening wanted to test how efficiently the tiny needles could actually deliver a drug.

Skin is made of three layers—the epidermal, dermal, and subdermal layers. For a drug to enter the bloodstream, it must be delivered into the bottom, or subdermal, layer. From there, Wilkening explains, "it can be distributed throughout the body, instead of pooling up and killing the cells around the injection site. We wanted to develop a way for the micrometer-scale needles to routinely deliver medicine to this bottom layer."

Wilkening hoped to exploit the fact that each of the three skin layers has a different resistance level. The outer skin layer, the epidermis, is the stiffest of the three; the middle layer, the dermis, is of intermediate stiffness; and the subdermal layer is the easiest to penetrate.

To test how this resistance would affect the flow of a fluid—like a solution carrying a cancer-killing drug—Wilkening created a large-scale model of the microneedles using a pair of microliter glass pipettes. In the model, liquid flows from a common reservoir into both pipettes at the same rate. To simulate the resistance to flow that would be present in needles in a patch, Wilkening added viscous materials, such as gelatin, to the ends of both pipettes and then inserted them into separate gels representing the different layers of skin. By varying the stiffness of the gels, he could determine how the flow from the patch would likely behave when one needle penetrates deep enough to reach the subdermal layer and the other does not. "The liquid flow penetrated through one needle or the other depending on the difference in the stiffness of the skin-like gels, generally through the less stiff one," he says.

Although he spent the majority of his summer perfecting the setup of this experiment and only a little over a week in the actual testing phase, Wilkening's preliminary results suggest that the concept behind the patch is sound. That is, once the fluid meets resistance in one needle, it will follow the path of less resistance and flow through the other needle. That means that in a patch composed of many hundreds of needles, a drug should be deliverable directly into the subdermal layer—and thus able to reach the patient's bloodstream—precisely because it does not flow as easily into the two stiffer layers above the subdermis.
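The behavior described here is essentially that of hydraulic resistances in parallel: fluid from a shared reservoir divides among branches in inverse proportion to each branch's total resistance. The toy calculation below makes that point with invented numbers; it illustrates the principle and is not a model fitted to Wilkening's experiment.

```python
# Toy model of the two-needle experiment: two hydraulic resistances in parallel
# fed by one reservoir. In steady laminar flow, flow divides in inverse
# proportion to total branch resistance, so the needle ending in the softer
# (lower-resistance) gel carries most of the fluid. All numbers are invented.
def flow_split(resistances):
    """Fraction of total flow carried by each parallel branch."""
    conductances = [1.0 / r for r in resistances]
    total = sum(conductances)
    return [g / total for g in conductances]

# Arbitrary relative resistances: needle ending in a stiff, epidermis-like gel
# versus needle ending in a soft, subdermal-like gel.
fractions = flow_split([50.0, 2.0])
print(f"Stiff-gel needle: {fractions[0]:.0%} of flow; soft-gel needle: {fractions[1]:.0%}")
```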

While his SURF project is now technically over, Wilkening—who is also a teaching assistant in the mechanical engineering shop and the captain of the Caltech soccer team—says he will be continuing his work with Gharib during the school year.

"I hope to see this project through a little bit more," he says. "In my two previous SURF projects I worked on existing systems. This year was very different because nobody has done this before. It is kind of cool having a chance to own my own project and to use my own inspiration and ideas to really build it up from the bottom."


Elachi to Retire as JPL Director

Charles Elachi (MS '69, PhD '71) has announced his intention to retire as director of the Jet Propulsion Laboratory on June 30, 2016, and move to campus as professor emeritus. A national search is underway to identify his successor.

"A frequently consulted national and international expert on space science, Charles is known for his broad expertise, boundless energy, conceptual acuity, and deep devotion to JPL, campus, and NASA," said Caltech president Thomas F. Rosenbaum in a statement to the Caltech community. "Over the course of his 45-year career at JPL, Charles has tirelessly pursued new opportunities, enhanced the Laboratory, and demonstrated expert and nimble leadership. Under Charles' leadership over the last 15 years, JPL has become a prized performer in the NASA system and is widely regarded as a model for conceiving and implementing robotic space science missions."

With Elachi at JPL's helm, an array of missions has provided new understanding of our planet, our moon, our sun, our solar system, and the larger universe. The GRAIL mission mapped the moon's gravity; the Genesis space probe returned to Earth samples of the solar wind; Deep Impact intentionally collided with a comet; Dawn pioneered the use of ion propulsion to visit the asteroids Ceres and Vesta; and Voyager became the first human-made object to reach interstellar space. A suite of missions to Mars, from orbiters to the rovers Spirit, Opportunity, and Curiosity, has provided exquisite detail of the red planet; Cassini continues its exploration of Saturn and its moons; and the Juno spacecraft, en route to a July 2016 rendezvous, promises to provide new insights about Jupiter. Missions such as the Galaxy Evolution Explorer, the Spitzer Space Telescope, Kepler, WISE, and NuSTAR have revolutionized our understanding of our place in the universe.

Future JPL missions developed under Elachi's guidance include Mars 2020, Europa Clipper, the Asteroid Redirect Mission, Jason 3, Aquarius, OCO-2, SWOT, and NISAR.

Elachi joined JPL in 1970 as a student intern and was appointed director and Caltech vice president in 2001. During his more than four decades at JPL, he led a team that pioneered the use of space-based radar imaging of the Earth and the planets, served as principal investigator on a number of NASA-sponsored studies and flight projects, authored more than 230 publications in the fields of active microwave remote sensing and electromagnetic theory, received several patents, and became the director for space and earth science missions and instruments. At Caltech, he taught a course on the physics of remote sensing for nearly 20 years.

Born in Lebanon, Elachi received his B.Sc. ('68) in physics from the University of Grenoble, France, and the Dipl. Ing. ('68) in engineering from the Polytechnic Institute, Grenoble. In addition to his MS and PhD degrees in electrical science from Caltech, he also holds an MBA from the University of Southern California and a master's degree in geology from UCLA.

Elachi was elected to the National Academy of Engineering in 1989 and is the recipient of numerous other awards including an honorary doctorate from the American University of Beirut (2013), the National Academy of Engineering Arthur M. Bueche Award (2011), the Chevalier de la Légion d'Honneur from the French Republic (2011), the American Institute of Aeronautics and Astronautics Carl Sagan Award (2011), the Royal Society of London Massey Award (2006), the Lebanon Order of Cedars (2006 and 2012), the International von Kármán Wings Award (2007), the American Astronautical Society Space Flight Award (2005), the NASA Outstanding Leadership Medal (2004, 2002, 1994), and the NASA Distinguished Service Medal (1999).


Building a Microscope to Search for Signs of Life on Other Worlds

In March of this year, a team of bioengineers from Caltech, JPL, and the University of Washington spent a week in Greenland, using snowmobiles to haul their scientific equipment, waiting out windstorms, and spending hours working on the ice. Now the same researchers are planning a trip to California's Mojave Desert, where they will study Searles Lake, a dry, extremely salty basin that is naturally full of harsh chemicals like arsenic and boron. The researchers are testing a holographic microscope that they have designed and built for the purpose of observing microbes that thrive in such extreme environments. The ultimate goal? To send the microscope on a spacecraft to search for biosignatures—signs of life—on other worlds such as Mars or Saturn's icy moon Enceladus.

"Our big overarching hypothesis is that motility is a good biosignature," explains Jay Nadeau, a scientific researcher at Caltech and one of the investigators on the holographic microscope project, dubbed SHAMU (Submersible Holographic Astrobiology Microscope with Ultraresolution). "We suspect that if we send back videos of bacteria swimming, that is going to be a better proof of life than pretty much anything else."

Think, she says, of Antonie van Leeuwenhoek, the father of microbiology, who used simple microscopes in the 17th and 18th centuries to observe protozoa and bacteria. "He immediately recognized that they were living things based on the way they moved," Nadeau says. Indeed, when Leeuwenhoek wrote about observing samples of the plaque between his teeth, he described seeing "many very little animalcules, very prettily a-moving." And Nadeau adds, "No one doubted Leeuwenhoek once they saw them moving for themselves."

In order to capture images of microbes "a-moving" on another world, Nadeau and her colleagues, including Mory Gharib, the Hans W. Liepmann Professor of Aeronautics and Bioinspired Engineering and a vice provost at Caltech, had the idea to use digital holography rather than conventional microscopy.

Holography is a method for recording holistic information about the light bouncing off a sample so that a 3-D image can be reconstructed later. Compared to conventional microscopy, which typically uses multiple lenses to focus on a shallow depth within a sample (on a slide, for example), holography offers the advantages of focusing over a relatively large volume and of capturing high-resolution images, without the trouble of moving parts that could break in extreme environments or during a launch or landing, if the instrument were sent into space.

Standard photography records only the intensity of the light (related to its amplitude) that reaches a camera lens after scattering off an object. But as a wave, light has both an amplitude and a phase, a separate property that can be used to tell how far the light travels once it is scattered. Holography is a technique that captures both—something that makes it possible to re-create a three-dimensional image of a sample.

To understand the technique, first imagine dropping a pebble in a pond and watching ripples emanate from that spot. Now imagine dropping a second pebble in a new spot, producing a second set of ripples. If the ripples interact with an object on the surface, such as a rock, they are diffracted or scattered by it, changing the pattern of the waves—an effect that can be detected. Holography is akin to dropping two pebbles in a pond simultaneously, with the pebbles being two laser beams—one a reference beam that shines unaffected by the sample, and an object beam that runs into the sample and gets diffracted or scattered. A detector measures the combination, or superposition, of the ripples from the two beams, which is known as the interference pattern. By knowing how the waves propagate and by analyzing the interference pattern, a computer can reconstruct what the object beam encountered as it traveled.
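Numerically, "knowing how the waves propagate" usually means applying a free-space propagation transfer function to the recorded interference pattern to refocus it onto any plane of interest. The sketch below uses the angular-spectrum method, a common way to do this; the wavelength, pixel size, and refocusing distance are invented example values, and this is a generic illustration rather than the SHAMU team's actual reconstruction pipeline.

```python
# Minimal numerical refocusing with the angular-spectrum method, a common way
# to reconstruct planes from a recorded hologram. Generic illustration with
# invented parameters, not the SHAMU team's actual pipeline.
import numpy as np

def refocus(hologram, wavelength_m, pixel_m, z_m):
    """Propagate a recorded hologram (2-D array) to a plane a distance z_m away."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel_m)
    fy = np.fft.fftfreq(ny, d=pixel_m)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space propagation transfer function; evanescent components are dropped.
    arg = np.clip(1.0 - (wavelength_m * FX) ** 2 - (wavelength_m * FY) ** 2, 0.0, None)
    H = np.exp(2j * np.pi * (z_m / wavelength_m) * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(hologram) * H)

# Example: refocus a synthetic 512 x 512 hologram to a plane 200 micrometers away.
holo = np.random.rand(512, 512)  # stand-in for a recorded interference pattern
image = np.abs(refocus(holo, wavelength_m=405e-9, pixel_m=2e-6, z_m=200e-6))
```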

"We can take an interference pattern and use that to reconstruct all of the images in different planes in a volume," explains Chris Lindensmith, a systems engineer at JPL and an investigator on the project. "So we can just go and reconstruct whatever plane we are interested in after the fact and look and see if there's anything in there."

That means that a single image captures all the microbes in a sample—whether there is one bacterium or a thousand. And by taking a series of such images over time, the researchers can reconstruct the path that each bacterium took as it swam in the sample.
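Once each frame has been reconstructed and the microbes in it located, turning those detections into swimming paths is a track-linking problem. The sketch below uses the simplest possible rule, nearest-neighbor linking with a maximum step size; it is a generic illustration, not the team's actual tracking software.

```python
# Simple nearest-neighbor linking of detections across frames into tracks.
# Generic illustration of track building, not the SHAMU team's software.
import numpy as np

def link_tracks(frames, max_step):
    """frames: list of (N_i, 3) arrays of (x, y, z) positions detected per frame."""
    tracks = [[tuple(p)] for p in frames[0]]
    for detections in frames[1:]:
        unused = [tuple(p) for p in detections]
        for track in tracks:
            if not unused:
                break
            last = np.array(track[-1])
            dists = [float(np.linalg.norm(np.array(p) - last)) for p in unused]
            i = int(np.argmin(dists))
            if dists[i] <= max_step:
                track.append(unused.pop(i))
        tracks.extend([p] for p in unused)  # unmatched detections start new tracks
    return tracks
```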

That would be virtually impossible with conventional microscopy, says Lindensmith. With microscopy, you need to focus in real time, meaning that someone would have to turn a dial to move the sample closer or farther from the microscope's lenses in order to keep a particular microbe in focus. During that time, they would miss out on the movements of any other microbes in the sample because the focus is so small.

All of the advantages that the holographic microscope offers over conventional microscopy make it appealing for studies elsewhere in the solar system. And there are a number of worlds that scientists are eager to study in close-up detail to search for signs of life. In 2008, using data from the Phoenix Mars lander, scientists determined that there is water ice just below the surface in the northern plains of the Red Planet, making the locale a candidate for follow-up sampling studies. In addition, both the jovian moon Europa and the saturnian moon Enceladus are thought to harbor liquid oceans beneath their icy surfaces. Therefore, the SHAMU group says, a compact, robust microscope like the one the Caltech team is developing could be a highly desirable component of an instrument suite on a lander to any one of those locations.

Nadeau says the group's prototype performed well during the team's field-testing trip to Greenland. At each testing site, the researchers drilled a hole into the sea ice, submerged the microscope to a depth where some of the salty liquid water trapped inside the ice, called brine, was able to seep into the device's sample area, and collected holographic images. "We know that things live in the water and we know what they do and how they swim," says Nadeau. "But believe it or not, nobody knew what kinds of microorganisms live in sea-ice brine or if they can swim."

That is because typical techniques for counting, labeling, and observing microbes rely on fragile instrumentation and often require large amounts of power, making them unusable in extreme environments like the Arctic. As a result, "nobody had ever looked at sea-ice organisms immediately after collection like we did," says Stephanie Rider, a staff scientist at Caltech who went on the Greenland trip as part of the project. Previously, other teams have collected samples and taken them back to a lab where the samples have been stored in a freezer, sometimes for weeks at a time. "Who knows how much the samples have been warmed up and cooled down by the time someone studies them?" Rider says. "The samples could be totally different at that point."


[Video caption: When samples are returned to the laboratory, fed rich medium, and warmed to +4 degrees Celsius, swimming speeds are greatly increased. Credit: Jay Nadeau/Caltech]

During the Greenland trip, the SHAMU group successfully collected images that have been used to construct videos of bacteria and algae that live in the sea-ice brine. They also brought samples back to a lab in Nuuk, Greenland, warmed them overnight, and fed them bacterial growth medium—duplicating the standard conditions under which microorganisms from sea ice have been studied in the past. The researchers found that under those conditions, "everything starts zipping around like crazy," says Nadeau, indicating that in order to be accurate, observations do need to be made in place on the ice rather than back in a lab.

The team is particularly excited about what the successful measurements from Greenland could mean in the context of Mars. "We know from this that we can tell that things are alive when you take them straight out of ice," says Nadeau. "If we can see life in there on Earth, then it's possible there might be life in pockets of ice on Mars as well. Perhaps you don't have to have a big liquid ocean to find living organisms; there's a possibility that things can live just in pockets of ice."

The three-year SHAMU project began in January 2014 with funding from the Gordon and Betty Moore Foundation. In the coming months, the engineers hope to improve the microscope's sample chamber and to scale down the entire device. They believe they will have a launch-ready instrument by the end of the funding period.

As a first test in space, they would like to send the instrument to the International Space Station not only to see how it behaves in space but also to observe microbial samples under zero-gravity conditions. Beyond that, they hope to include SHAMU on a Mars lander as part of a NASA Discovery mission aimed at searching for biosignatures in the frozen northern plains of Mars. The Caltech team is partnering with Honeybee Robotics, a company that has built drills and sampling systems for numerous NASA missions (including the Phoenix Mars lander), to integrate the holographic microscope on a drill that would bore down about three feet into the martian ground ice.

In addition to Nadeau, Gharib, and Lindensmith, Jody Deming of the University of Washington's School of Oceanography is also an investigator on the SHAMU project.

Writer: Kimm Fesenmaier

Peters Named New Director of Resnick Sustainability Institute

Jonas C. Peters, the Bren Professor of Chemistry, has been appointed director of the Resnick Sustainability Institute. Launched in 2009 with an investment from philanthropists Stewart and Lynda Resnick and located in the Jorgensen Laboratory on the Caltech campus, the Resnick Institute concentrates on transformational breakthroughs that will contribute to the planet's sustainability over the long term.

The Resnick Sustainability Institute, which involves both the Chemistry and Chemical Engineering and Engineering and Applied Science divisions, serves as a prime example of the multidisciplinary approach prized by Caltech.

"Some of the most important challenges in sustainability are also among the most complex," says Peters, who has been a member of the Caltech faculty since 1999. "We are committed to working on problems that are uniquely suited to the Caltech environment. This means starting with fundamentals and leveraging the cross-catalysis of ideas and creativity of this campus to come up with ways to have substantial impact."

Because the world's natural resources are dwindling, Peters wants to continue focusing the Resnick Institute's efforts on efficient energy generation, storage, and use. Current projects include the development of advanced photovoltaics, photoelectrochemical solar fuels, and cellulosic biofuels; energy-conversion work on batteries and fuel cells; efficiency improvements in industrial catalysis; and advanced research on electrical grid control and distribution.

In addition, the Resnick Institute is exploring new opportunities in the area of water sustainability. In September, the institute hosted a workshop entitled "Water Resilience and Sustainability: Can We Make LA Water Self-Sufficient?" The workshop examined the long-term potential for sustainable water use in urban environments, using the Los Angeles area as a case study.

"The Resnick Sustainability Institute is continuing to build one of the great centers for sustainability research," says Peters. "We are doing this by supporting the most talented young scientists and engineers committed to tackling the fascinating, critical, and yet very difficult challenges of this field."


Toward a Smarter Grid

Steven Low, professor of computer science and electrical engineering at Caltech, says we are on the cusp of a historic transformation: a restructuring of the energy system akin to the reimagining and revamping that communication and computer networks underwent over the last two decades, which left them layered, with distributed and interconnected intelligence everywhere.

The power network of the future—aka the smart grid—will have to be much more dynamic and responsive than the current electric grid, handling tremendous loads while incorporating intermittent energy production from renewable resources such as wind and solar, all while ensuring that when you or I flip a switch at home or work, the power still comes on without fail.

The smart grid will also be much more distributed than the current network, which controls a relatively small number of generators to provide power to millions of passive endpoints—the computers, machines, buildings, and more that simply consume energy. In the future, thanks to inexpensive sensors and computers, many of those endpoints will become active and intelligent loads like smart devices, or distributed generators such as solar panels and wind turbines. These endpoints will be able to generate, sense, communicate, compute, and respond.

Given these trends, Low says, it is only reasonable to conclude that in the coming decades, the electrical system is likely to become "the largest and most complex cyberphysical system ever seen." And that presents both a risk and an opportunity. On the one hand, if the larger, more active system is not controlled correctly, blackouts could be much more frequent. On the other hand, if properly managed, it could greatly improve efficiency, security, robustness, and sustainability.

At Caltech, Low and an interdisciplinary group of engineers, economists, mathematicians, and computer scientists pulled together by the Resnick Sustainability Institute, along with partners like Southern California Edison and the Department of Energy, are working to develop the devices, systems, theories, and algorithms to help guide this historic transformation and make sure that it is properly managed.

In 2012, the Resnick Sustainability Institute issued a report titled Grid 2020: Towards a Policy of Renewable and Distributed Energy Resources, which focused on some of the major engineering, economic, and policy issues of the smart grid. That report led to a discussion series and working sessions that in turn led to the publication in 2014 of another report called More Than Smart: A Framework to Make the Distribution Grid More Open, Efficient and Resilient.

"One thing that makes the smart grid problem particularly appealing for us is that you can't solve it just as an engineer, just as a computer scientist, just as a control theorist, or just as an economist," says Adam Wierman, professor of computer science and Executive Officer for the Computing and Mathematical Sciences Department. "You actually have to bring to bear tools from all of these areas to solve the problem."

For example, he says, consider the problem of determining how much power various parts of the grid should generate at a particular time. This requires generating an amount of power that matches or closely approximates the amount of electricity demanded by customers. Currently this involves predicting electricity demand a day in advance, updating that prediction several hours before the power is needed, and then figuring out how much power will be produced from nuclear, natural gas, or coal plants to meet the demand. That determination is made through markets. In California, the California Independent System Operator runs a day-ahead electricity market in which utility companies and power plants buy and sell power generation for the following day. Then any small errors in the prediction are fixed at the last minute by engineers in a control office, with markets completely out of the picture.
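A toy "merit-order" dispatch captures the flavor of that scheduling step: commit the cheapest available generation first until the predicted demand is met. The costs and capacities below are invented, and the real day-ahead market is an auction among many participants rather than this simple greedy rule.

```python
# Toy merit-order dispatch: meet predicted demand with the cheapest generation
# first. Costs and capacities are invented; the actual CAISO day-ahead market
# is an auction, not this greedy rule.
def dispatch(demand_mw, generators):
    """generators: list of (name, capacity_mw, marginal_cost_per_mwh) tuples."""
    schedule, remaining = {}, demand_mw
    for name, capacity, _cost in sorted(generators, key=lambda g: g[2]):
        take = min(capacity, remaining)
        if take > 0:
            schedule[name] = take
            remaining -= take
    return schedule, remaining  # remaining > 0 means demand is unmet

gens = [("nuclear", 2000, 10.0), ("coal", 1500, 25.0), ("gas", 3000, 35.0)]
plan, shortfall = dispatch(5200, gens)
print(plan, "shortfall:", shortfall)
```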

"So you have a balance between the robustness and certainty provided by engineered control and the efficiency provided by markets and economic control," says Wierman. "But when renewable energy comes onto the table, all of a sudden the predictions of energy production are much less accurate, so the interaction between the markets and the engineering is up in the air, and no one knows how to handle this well." This, he says, is the type of problem the Caltech team, with its interdisciplinary approach, is uniquely equipped to address.

Indeed, the Caltech smart grid team is working on projects on the engineering side, projects on the markets side, and projects at the interface.

On the engineering side, a major project has revolved around a complex mathematical problem called optimal power flow that underlies many questions dealing with power system operations and planning. "Optimal power flow can tell you when things should be on or conserving energy, how to stabilize the voltage in the network as solar or wind generation fluctuates, or how to set your thermostat so that you maintain comfort in your building while stabilizing the voltage on the grid," explains Mani Chandy, the Simon Ramo Professor of Computer Science, Emeritus. "The problem has been around for 50 years but is extremely difficult to solve."

Chandy worked with Low; John Doyle, the Jean-Lou Chameau Professor of Control and Dynamical Systems, Electrical Engineering, and Bioengineering; and a number of Caltech students to devise a clever way to solve the problem, allowing them, for the first time, to compute a solution and then check whether that solution is globally optimal.

"We said, let's relax the constraints and optimize the cost over a bigger set that we can design to be solvable," explains Low. For example, if a customer is consuming electricity at a single location, the problem might ask how much electricity that individual is actually consuming; a relaxation would say that that person is consuming no more than a certain amount—it is a way of adding flexibility to a problem with tight constraints. "Almost magically, it turns out that if I design my physical set in a clever way, the solution for this larger simple set turns out to be the same as it would be for the original set."

The new approach produces a feasible solution for almost all distribution systems—the low-voltage networks that take power from larger substations and ultimately deliver it to the houses, buildings, street lights, and so on in a region. "That's important because many of the innovations in the energy sector in the coming decade will happen on distribution systems," says Low.

Another Caltech project attempts to predict how many home and business owners are likely to adopt rooftop solar panels over the next 5, 10, 20, or 30 years. In Southern California, the number of solar installations has increased steadily for several years. For planning purposes, utility companies need to anticipate whether that growth will continue and at what pace. For example, Low says, if the network is eventually going to comprise 15 or 20 percent renewables, then the current grid is robust enough. "But if we are going to have 50 or 80 percent renewables," he says, "then the grid will need huge changes in terms of both engineering and market design."

Working with Chandy, graduate students Desmond Cai and Anish Agarwal (BS '13, MS '15) developed a new model for predicting how many homes and businesses will install rooftop solar panels. The model has proven highly accurate. Researchers believe that whether or not people "go solar" depends largely on two factors: how much money they will save and their confidence in the new technology. The Caltech model, completed in 2012, indicates that the amount of money that people can save by installing rooftop solar has a huge influence on whether they will adopt the technology. Based on their research, the team has also developed a web-based tool that predicts how many people will install solar panels using a utility company's data. Southern California Edison's planning department is actively using the tool.

On the markets side, Caltech researchers are doing theoretical work looking at the smart grid and the network of markets it will produce. Electricity markets can be both complicated and interesting to study because unlike a traditional market—a single place where people go to buy and sell something—the electricity "market" actually consists of many networked marketplaces interacting in complicated ways.

One potential problem with this system and the introduction of more renewables, Wierman says, is that it opens the door for firms to manipulate prices by turning off generators. Whereas the operational status of a normal generator can be monitored, with solar and wind power, it is nearly impossible to verify how much power should have been produced because it is difficult to know whether it was windy or sunny at a certain time. "For example, you can significantly impact prices by pushing—or not pushing—solar energy from your solar farm," Wierman says. "There are huge opportunities for strongly manipulating market structure and prices in these environments. We are beginning to look at how to redesign markets so that this isn't as powerful or as dangerous."

An area of smart grid research where the Caltech team takes full advantage of its multidisciplinary nature is at the interface of engineering and markets. One example is a concept known as demand response, in which a mismatch between energy supply and demand can be addressed from the demand side (that is, by involving consumers), rather than from the power-generation side.

As an example of demand response, some utilities have started programs where participants, who have smart thermostats installed in their homes in exchange for some monetary reward, allow the company to turn off their air conditioners for a short period of time when it is necessary to reduce the demand on the grid. In that way, household air conditioners become "shock absorbers" for the system.

"But the economist says wait a minute, that's really inefficient. You might be turning the AC off for people who desperately want it on and leaving it on for people who couldn't care less," says John Ledyard, the Allen and Lenabelle Davis Professor of Economics and Social Sciences. A counter proposal is called Prices to Devices, where the utility sends price signals to devices, like thermostats, in homes and offices, and customers decide if they want to pay for power at those prices. Ledyard says while that is efficient rationing in equilibrium, it introduces a delay between the consumer and the utility, creating an instability in the dynamics of the system.

The Caltech team has devised an intermediate proposal that removes the delay in the system. Rather than sending a price and having consumers react to it, their program has consumers enter their sensitivity to various prices ahead of time, right on their smart devices; this can be done with a single number. Those devices then deliver that information to the algorithm that operates the network. For example, a consumer might program his or her smart thermostat to say, in effect, "If a kilowatt of power costs $1 and the temperature outside is 90 degrees, I want you to keep the air conditioner on; if the price is $5 and the temperature outside is 80 degrees, go ahead and turn it off."
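The sketch below shows one way such a pre-declared sensitivity could be applied on the device itself, so the "consumer response" happens inside the scheduling algorithm with no round-trip delay. The single-number "comfort value" encoding and the thresholds are illustrative assumptions, not the Caltech team's actual protocol.

```python
# Device-side rule applying a pre-declared price sensitivity, in the spirit of
# the proposal described above. The single-number "comfort value" encoding and
# the thresholds are illustrative assumptions, not the team's actual protocol.
def keep_ac_on(price_per_kwh, outdoor_temp_f, comfort_value_per_kwh=2.0):
    """Run the air conditioner only when cooling is worth more than it costs."""
    # Willingness to pay rises with outdoor temperature above a 75 F comfort point.
    willingness_to_pay = comfort_value_per_kwh * max(0.0, (outdoor_temp_f - 75.0) / 10.0)
    return willingness_to_pay >= price_per_kwh

print(keep_ac_on(price_per_kwh=1.0, outdoor_temp_f=90.0))  # True: keep cooling
print(keep_ac_on(price_per_kwh=5.0, outdoor_temp_f=80.0))  # False: let it switch off
```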

"The consumer's response is handled by the algorithm, so there's no lag," says Ledyard.

Currently, the Caltech smart grid team is working closely with Southern California Edison to set up a pilot test in Orange County involving several thousand households. The homes will be equipped with various distributed energy resources including rooftop solar panels, electric vehicles, smart thermostats for air conditioners, and pool pumps. The team's new approach to the optimal power flow problem and demand response will be tested to see whether it can keep stable a miniature version of the future smart grid.

Such experiments are crucial for preparing for the major changes to the electrical system that are certainly coming down the road, Low says. "The stakes are high. In the face of this historic transformation, we need to do all that we can to minimize the risk and make sure that we realize the full potential."

Writer: Kimm Fesenmaier
