Long-Term Contraception in a Single Shot

Caltech biologists have developed a nonsurgical method to deliver long-term contraception to both male and female animals with a single shot. The technique—so far used only in mice—holds promise as an alternative to spaying and neutering feral animals.

The approach was developed in the lab of Bruce Hay, professor of biology and biological engineering at Caltech, and is described in the October 5 issue of Current Biology. The lead author on the paper is postdoctoral scholar Juan Li.

Hay's team was inspired by work conducted in recent years by David Baltimore and others showing that an adeno-associated virus (AAV)—a small, harmless virus that cannot replicate on its own and that has proven useful in gene-therapy trials—can be used to deliver DNA sequences to muscle cells, causing them to produce specific antibodies known to fight infectious diseases such as HIV, malaria, and hepatitis C.

Li and her colleagues thought the same approach could be used to produce infertility. They used an AAV to deliver a gene that directs muscle cells to produce an antibody that neutralizes gonadotropin-releasing hormone (GnRH) in mice. GnRH is what the researchers refer to as a "master regulator of reproduction" in vertebrates—it stimulates the release of two hormones from the pituitary that promote the formation of eggs, sperm, and sex steroids. Without it, an animal is rendered infertile.

In the past, other teams have tried neutralizing GnRH through vaccination, but the resulting loss of fertility was often temporary. In the new study, Hay and his colleagues found that the mice—both male and female—became unable to conceive about two months after the injection, and that the majority remained infertile for the rest of their lives.

"Inhibiting GnRH is an ideal way to inhibit fertility and behaviors caused by sex steroids, such as aggression and territoriality," says Hay. He notes that in the study, his team also shows that female mice can be rendered infertile using a different antibody that targets a binding site for sperm on the egg. "This target is ideal when you want to inhibit fertility but want to leave the individual otherwise completely normal in terms of reproductive behaviors and hormonal cycling."

Hay's team has dubbed the new approach "vectored contraception" and notes that many other proteins thought to be important for reproduction might also be targeted by the technique.

The researchers are particularly excited about the possibility of replacing spay–neuter programs with single injections. "Spaying and neutering of animals to control fertility, unwanted behavior, and population numbers of feral animals is costly and time consuming, and therefore often doesn't happen," says Hay. "There is a strong desire in many parts of the world for quick, nonsurgical approaches to inhibiting fertility. We think vectored contraception provides such an approach."

As a next step, Hay's team is working with Bill Swanson, director of animal research at the Cincinnati Zoo's Center for Conservation and Research of Endangered Wildlife, to try this approach in female domestic cats. Swanson's team spends much of its time working to promote fertility in endangered cat species, but it is also interested in developing humane ways of managing populations of feral domestic cats through inhibition of fertility, as these animals are often otherwise trapped and euthanized.

Additional Caltech authors on the paper, "Vectored antibody gene delivery mediates long-term contraception," are Alejandra I. Olvera, Annie Moradian, Michael J. Sweredoski, and Sonja Hess. Omar S. Akbari is also a coauthor on the paper and is now at UC Riverside. Some of the work was completed in the Proteome Exploration Laboratory at Caltech, which is supported by the Gordon and Betty Moore Foundation, the Beckman Institute, and the National Institutes of Health. Olvera was supported by a Gates Millennium Scholar Award.

Kimm Fesenmaier

New Polymer Creates Safer Fuels

Before embarking on a transcontinental journey, jet airplanes fill up with tens of thousands of gallons of fuel. In the event of a crash, such large quantities of fuel increase the severity of an explosion upon impact. Researchers at Caltech and JPL have discovered a polymeric fuel additive that can reduce the intensity of postimpact explosions that occur during accidents and terrorist acts. Furthermore, preliminary results show that the additive can provide this benefit without adversely affecting fuel performance.

The work is published in the October 2 issue of the journal Science.

Jet engines compress air and combine it with a fine spray of jet fuel. Ignition of the mixture of air and jet fuel by an electric spark triggers a controlled explosion that thrusts the plane forward. Jet airplanes are powered by thousands of these tiny explosions. However, the process that distributes the spray of fuel for ignition—known as misting—also causes fuel to rapidly disperse and easily catch fire in the event of an impact.

The additive, created in the laboratory of Julia Kornfield (BS '83), professor of chemical engineering, is a type of polymer—a long molecule made up of many repeating subunits—capped at each end by units that act like Velcro. The individual polymers spontaneously link into ultralong chains called "megasupramolecules."

Megasupramolecules, Kornfield says, have an unprecedented combination of properties that allows them to control fuel misting, improve the flow of fuel through pipelines, and reduce soot formation. Megasupramolecules inhibit misting under crash conditions and permit misting during fuel injection in the engine.

Other polymers have shown these benefits but have deficiencies that limit their usefulness. For example, ultralong polymers tend to break irreversibly when passing through pumps, pipelines, and filters, and as a result they lose their useful properties. This is not an issue with megasupramolecules. Although they, too, come apart into smaller pieces as they pass through a pump, the process is reversible: the Velcro-like units at the ends of the individual chains simply reconnect when they meet, effectively "healing" the megasupramolecules.

High-speed video shows untreated jet fuel (upper half) and jet fuel treated with 0.3 percent Caltech polymer (lower half) after a 140-mph projectile impact disperses the fuel mist over continuously burning propane torches. The fireball formed by the untreated jet fuel is absent for the fuel treated with the Caltech polymer.
Credit: Caltech/JPL

When added to fuel, megasupramolecules dramatically affect its flow behavior even when the polymer concentration is too low to influence other properties of the liquid. For example, the additive does not change the energy content, surface tension, or density of the fuel. In addition, the power and efficiency of engines that use fuel with the additive are unchanged—at least in the diesel engines that have been tested so far.

The supramolecules spend most of their time coiled up in a compact conformation. When an impact suddenly elongates the fluid, however, the polymer molecules spring into action: they stretch out and resist further elongation. This stretching allows them to inhibit the breakup of droplets under impact conditions—thus reducing the size of explosions—as well as to reduce turbulence in pipelines.

"The idea of megasupramolecules grew out of ultralong polymers," says research scientist and co–first author Ming-Hsin "Jeremy" Wei (PhD '14). "In the late 1970s and early 1980s, polymer scientists were very enthusiastic about adding ultralong polymers to fuel in order to make postimpact explosions of aircrafts less intense." The concept was tested in a full-scale crash test of an airplane in 1984. The plane was briefly engulfed in a fireball, generating negative headlines and causing ultralong polymers to quickly fall out of favor, Wei says.

In 2002, Virendra Sarohia (PhD '75) at JPL sought to revive research on mist control in hopes of preventing another attack like that of 9/11. "He reached out to me and convinced me to design a new polymer for mist control of jet fuel," says Kornfield, the corresponding author on the new paper. The first breakthrough came in 2006 with the theoretical prediction of megasupramolecules by Ameri David (PhD '08), then a graduate student in her lab. David designed individual chains that are small enough to eliminate prior problems and that dynamically associate together into megasupramolecules, even at low concentrations. He suggested that these assemblies might provide the benefits of ultralong polymers, with the new feature that they could pass through pumps and filters unharmed.

When Wei joined the project in 2007, he set out to create these theoretical molecules. Producing polymers of the desired length with sufficiently strong "molecular Velcro" on both ends proved to be a challenge. With the help of a catalyst developed by Robert Grubbs, the Victor and Elizabeth Atkins Professor of Chemistry and winner of the 2005 Nobel Prize in Chemistry, Wei developed a method to precisely control the structure of the molecular Velcro and put it in the right place on the polymer chains.

Integration of science and engineering was the key to success. Simon Jones, an industrial chemist now at JPL, helped Wei develop practical methods to produce longer and longer chains with the Velcro-like end groups. Co–first author and Caltech graduate student Boyu Li helped Wei explore the physics behind the exciting behavior of these new polymers. Joel Schmitigal, a scientist at the U.S. Army Tank Automotive Research Development and Engineering Center (TARDEC) in Warren, Michigan, performed essential tests that put the polymer on the path toward approval as a new fuel additive.

"Looking to the future, if you want to use this additive in thousands of gallons of jet fuel, diesel, or oil, you need a process to mass-produce it," Wei says. "That is why my goal is to develop a reactor that will continuously produce the polymer—and I plan to achieve it less than a year from now."

"Above all," Kornfield says, "we hope these new polymers will save lives and minimize burns that result from postimpact fuel fires."

The findings are published in a paper titled "Megasupramolecules for safer, cleaner fuel by end association of long telechelic polymers." The work was funded by TARDEC, the Federal Aviation Administration, the Schlumberger Foundation, and the Gates Grubstake Fund.

Flowing Electrons Help Ocean Microbes Gulp Methane

Good communication is crucial to any relationship, especially when partners are separated by distance. This also holds true for microbes in the deep sea that need to work together to consume large amounts of methane released from vents on the ocean floor. Recent work at Caltech has shown that these microbial partners can still accomplish this task, even when not in direct contact with one another, by using electrons to share energy over long distances.

This is the first time that direct interspecies electron transport—the movement of electrons from a cell, through the external environment, to another cell type—has been documented in microorganisms in nature.

The results were published in the September 16 issue of the journal Nature.

"Our lab is interested in microbial communities in the environment and, specifically, the symbiosis—or mutually beneficial relationship—between microorganisms that allows them to catalyze reactions they wouldn't be able to do on their own," says Professor of Geobiology Victoria Orphan, who led the recent study. For the last two decades, Orphan's lab has focused on the relationship between a species of bacteria and a species of archaea that live in symbiotic aggregates, or consortia, within deep-sea methane seeps. The organisms work together in syntrophy (which means "feeding together") to consume up to 80 percent of methane emitted from the ocean floor—methane that might otherwise end up contributing to climate change as a greenhouse gas in our atmosphere.

Previously, Orphan and her colleagues contributed to the discovery of this microbial symbiosis, a cooperative partnership between methane-oxidizing archaea called anaerobic methanotrophs (or "methane eaters") and sulfate-reducing bacteria (organisms that can "breathe" sulfate instead of oxygen) that allows these organisms to consume methane using sulfate from seawater. However, it was unclear how the cells share energy and interact within the symbiosis to perform this task.

Because these microorganisms grow slowly (reproducing only four times per year) and live in close contact with each other, it has been difficult for researchers to isolate them from the environment and grow them in the lab. So the Caltech team used a research submersible called Alvin to collect samples containing the methane-oxidizing microbial consortia from deep-ocean methane seep sediments and then brought them back to the laboratory for analysis.

The researchers used different fluorescent DNA stains to mark the two types of microbes and view their spatial orientation in consortia. In some consortia, Orphan and her colleagues found the bacterial and archaeal cells were well mixed, while in other consortia, cells of the same type were clustered into separate areas.

Orphan and her team wondered whether the variation in the spatial organization of the bacteria and archaea within these consortia influenced their cellular activity and their ability to cooperatively consume methane. To find out, they applied a stable isotope "tracer" to evaluate metabolic activity. The amount of the isotope taken up by individual archaeal and bacterial cells within their microbial "neighborhoods" in each consortium was then measured at Caltech with a high-resolution instrument called a nanoscale secondary ion mass spectrometer (nanoSIMS). This allowed the researchers to determine how active the archaeal and bacterial partners were relative to their distance from one another.

To their surprise, the researchers found that the spatial arrangement of the cells in consortia had no influence on their activity. "Since this is a syntrophic relationship, we would have thought the cells at the interface—where the bacteria are directly contacting the archaea—would be more active, but we don't really see an obvious trend. What is really notable is that there are cells that are many cell lengths away from their nearest partner that are still active," Orphan says.

To find out how the bacteria and archaea were partnering, co-first authors Grayson Chadwick (BS '11), a graduate student in geobiology at Caltech and a former undergraduate researcher in Orphan's lab, and Shawn McGlynn, a former postdoctoral scholar, employed spatial statistics to look for patterns in cellular activity for multiple consortia with different cell arrangements. They found that populations of syntrophic archaea and bacteria in consortia had similar levels of metabolic activity; when one population had high activity, the associated partner microorganisms were also equally active—consistent with a beneficial symbiosis. However, a close look at the spatial organization of the cells revealed that no particular arrangement of the two types of organisms—whether evenly dispersed or in separate groups—was correlated with a cell's activity.

To determine how these metabolic interactions were taking place even over relatively long distances, postdoctoral scholar and coauthor Chris Kempes, a visitor in computing and mathematical sciences, modeled the relationship predicted between cellular activity and the distance between syntrophic partners when the partnership depends on the molecular diffusion of a substrate. He found that conventional metabolites, such as hydrogen (molecules previously predicted to be involved in this syntrophic consumption of methane), were inconsistent with the spatial activity patterns observed in the data. Revised models indicated, however, that electrons could likely make the trip from cell to cell across greater distances.
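The intuition can be illustrated with a toy calculation (made-up numbers, and not the model from the paper): at steady state, the concentration of a metabolite diffusing away from a small spherical source cell falls off roughly as 1/r, so a partner that depends on that metabolite should become markedly less active with distance, whereas a conductive electrical connection imposes no comparable falloff.

    # Toy illustration only (assumed numbers; not the model from the paper).
    # Steady-state diffusion from a small spherical source cell gives a
    # concentration profile C(r) ~ C0 * a / r, so diffusion-coupled activity
    # should fall off with partner separation r, while a conduction-coupled
    # partner sees no comparable distance penalty.
    a = 1.0                      # source-cell radius in micrometers (assumed)
    for r in [1, 2, 5, 10, 20]:  # partner separations in micrometers
        diffusive = a / r        # relative substrate supply under pure diffusion
        print(f"r = {r:2d} um   diffusion ~ {diffusive:.2f}   conduction ~ 1.00")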

"Chris came up with a generalized model for the methane-oxidizing syntrophy based on direct electron transfer, and these model results were a better match to our empirical data," Orphan says. "This pointed to the possibility that these archaea were directly transferring electrons derived from methane to the outside of the cell, and those electrons were being passed to the bacteria directly."

Guided by this information, Chadwick and McGlynn looked for independent evidence to support the possibility of direct interspecies electron transfer. Cultured bacteria, such as those from the genus Geobacter, are model organisms for the direct electron transfer process. These bacteria use large proteins, called multi-heme cytochromes, on their outer surface that act as conductive "wires" for the transport of electrons.

Using genome analysis—along with transmission electron microscopy and a stain that reacts with these multi-heme cytochromes—the researchers showed that these conductive proteins were also present on the outer surface of the archaea they were studying. And that finding, Orphan says, can explain why the spatial arrangement of the syntrophic partners does not seem to affect their relationship or activity.

"It's really one of the first examples of direct interspecies electron transfer occurring between uncultured microorganisms in the environment. Our hunch is that this is going to be more common than is currently recognized," she says.

Orphan notes that the information they have learned about this relationship will help to expand how researchers think about interspecies microbial interactions in nature. In addition, the microscale stable isotope approach used in the current study can be used to evaluate interspecies electron transport and other forms of microbial symbiosis occurring in the environment.

These results were published in a paper titled, "Single cell activity reveals direct electron transfer in methanotrophic consortia." The work was funded by the Department of Energy Division of Biological and Environmental Research and the Gordon and Betty Moore Foundation Marine Microbiology Initiative.

Advanced LIGO to Begin Operations

Advanced LIGO begins operations this week, after seven years of enhancement.

The Advanced LIGO Project, a major upgrade of the Laser Interferometer Gravitational-Wave Observatory, is completing its final preparations before scientific observations begin in mid-September. Designed to observe gravitational waves—ripples in the fabric of space and time—LIGO, which was designed and is operated by Caltech and MIT with funding from the National Science Foundation (NSF), consists of identical detectors in Livingston, Louisiana, and Hanford, Washington.

"The LIGO scientific and engineering team at Caltech and MIT has been leading the effort over the past seven years to build Advanced LIGO, the world's most sensitive gravitational-wave detector," says David Reitze, the executive director of the LIGO program at Caltech. Groups from the international LIGO Scientific Collaboration also contributed to the design and construction of the Advanced LIGO detector.

Gravitational waves were predicted by Albert Einstein in 1916 as a consequence of his general theory of relativity, and are emitted by violent events in the universe such as exploding stars and colliding black holes. These waves carry information not only about the objects that produce them, but also about the nature of gravity in extreme conditions that cannot be obtained by other astronomical tools.

"Experimental attempts to find gravitational waves have been on going for over 50 years, and they haven't yet been found. They're both very rare and possess signal amplitudes that are exquisitely tiny," Reitze says.

Although earlier LIGO runs revealed no detections, Advanced LIGO, also funded by the NSF, increases the sensitivity of the observatories by a factor of 10; because the volume of space surveyed grows as the cube of the detection range, that improvement yields a thousandfold increase in the number of observable candidate objects. "The first Advanced LIGO science run will take place with interferometers that can 'see' events more than three times further than the initial LIGO detector," adds David Shoemaker, the MIT Advanced LIGO project leader, "so we'll be probing a much larger volume of space."

Each of the 4-kilometer-long L-shaped LIGO interferometers uses a laser beam split into two beams that travel back and forth through the long arms, within tubes from which the air has been evacuated. The beams are used to monitor the distance between precisely configured mirrors. According to Einstein's theory, the relative distance between the mirrors will change very slightly when a gravitational wave passes by.

The original configuration of LIGO was sensitive enough to detect a change in the lengths of the 4-kilometer arms by a distance one-thousandth the diameter of a proton; this is like accurately measuring the distance from Earth to the nearest star—over four light-years—to within the width of a human hair. Advanced LIGO, which will utilize the infrastructure of LIGO, is much more powerful.
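The comparison can be checked with rough numbers (the values below are approximations assumed for this sketch, not LIGO specifications):

    # Rough sanity check of the analogy, using approximate values.
    proton_diameter = 1.7e-15          # meters (approximate)
    arm_length = 4.0e3                 # meters (4-kilometer arm)
    ligo_fraction = (proton_diameter / 1000) / arm_length

    light_year = 9.46e15               # meters
    star_distance = 4.24 * light_year  # Earth to Proxima Centauri, roughly
    hair_width = 1.0e-4                # meters (~100 micrometers)
    star_fraction = hair_width / star_distance

    print(f"LIGO fractional length change: {ligo_fraction:.1e}")  # ~4.2e-22
    print(f"hair width / distance to star: {star_fraction:.1e}")  # ~2.5e-21

The two fractions agree to within an order of magnitude, which is the spirit of the analogy.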

While earlier LIGO observing runs did not confirm the existence of gravitational waves, the influence of such waves has been measured indirectly via observations of a binary system called PSR B1913+16. The system consists of two neutron stars—the compact cores of dead stars—that orbit a common center of mass. The orbits of these two stellar bodies have been observed to be slowly contracting due to the energy that is lost to gravitational radiation. Binary star systems such as these that are in the very last stages of evolution—just before and during the inevitable collision of the two objects—are key targets of the planned observing schedule for Advanced LIGO.

"Ultimately, Advanced LIGO will be able to see 10 times as far as initial LIGO and, based on theoretical predictions, should detect many binary neutron star mergers per year," Reitze says.

The improved instruments will be able to look at the last minutes of the life of pairs of massive black holes as they spiral closer together, coalesce into one larger black hole, and then vibrate much like two soap bubbles becoming one. Advanced LIGO also will be able to pinpoint periodic signals from the many known pulsars that radiate in the range of 10 to 1,000 hertz (frequencies that roughly correspond to low and high notes on an organ). In addition, Advanced LIGO will be used to search for the gravitational cosmic background, allowing tests of theories about the development of the universe only 10⁻³⁵ seconds after the Big Bang.

"We expect it will take five years to fully optimize the detector performance and achieve our design sensitivity," Reitze says. "It has been a long road, and we're very excited to resume the hunt for gravitational waves."

Rod Pyle

Bar-Coding Technique Opens Up Studies Within Single Cells

The cells in a particular tissue sample are not necessarily all the same—they can vary widely in genetic content, composition, and function. Yet many studies and analytical techniques aimed at understanding how biological systems work at the cellular level treat all of the cells in a tissue sample as identical, averaging measurements over the entire cellular population. It is easy to see why this happens. With the cell's complex matrix of organelles, signaling chemicals, and genetic material—not to mention its minuscule scale—zooming in to differentiate what is happening within each individual cell is no trivial task.

"But being able to do single-cell analysis is crucial to understanding a lot of biological systems," says Long Cai, assistant professor of chemistry at Caltech. "This is true in brains, in biofilms, in embryos . . . you name it."

Now Cai's lab has developed a method for simultaneously imaging and identifying dozens of molecules within individual cells. This technique could offer new insight into how cells are organized and interact with each other and could eventually improve our understanding of many diseases.

The imaging technique that Cai and his colleagues have developed allows researchers not only to resolve a large number of molecules—such as messenger RNA species (mRNAs)—within a single cell, but also to systematically label each type of molecule with its own unique fluorescent "bar code" so it can be readily identified and measured without damaging the cell.

"Using this technique, there is essentially no limit on how many different types of molecules you can detect within a single cell," explains Cai.

The new method uses an innovative sequential bar-coding scheme that takes fluorescence in situ hybridization (FISH), a well-known procedure for detecting specific sequences of DNA or RNA in a sample, to the next level. Cai and his colleagues have dubbed their technique FISH Sequential Coding anALYSis (FISH SCALYS). 

FISH makes use of molecular probes—short fragments of DNA bound to fluorescent dyes, or fluorophores. These probes bind, or hybridize, to DNA or RNA with complementary sequences. When a hybridized sample is imaged with microscopy, the fluorophore lights up, pinpointing the target molecule's location.

There are a handful of fluorophores that can be used in these probes, and researchers typically use them to identify only a few different genes. For example, they will use a red dye to label all of the probes that target a specific type of mRNA. And when they image the sample, they will see a bunch of red dots in the cell. Then they will take another set of probes that target a different type of mRNA, label them with a blue fluorophore, and see glowing blue spots. And so on.

But what if a researcher wants to image more types of molecules than there are fluorophores? In the past, researchers have tried mixing dyes together—making both red and blue probes for a particular gene, so that when both probes bind to the gene, the resulting dot looks purple. It was an imperfect solution, and it could still only label about 30 different types of molecules.

Cai's team realized that the same handful of fluorophores could be used in sequential rounds of hybridization to create thousands of unique fluorescent bar codes that could clearly identify many types of molecules.

"With our technique, each tagged molecule remains just one single color in each round but we build up a bar code through multiple rounds, so the colors remain distinguishable. Using additional colors and extra rounds of hybridization, you can scale up easily to identify tens of thousands of different molecules," says Cai.

The number of bar codes available is potentially immense: F^N, where F is the number of fluorophores and N is the number of rounds of hybridization. So with four dyes and eight rounds of hybridization, scientists would have more than enough bar codes (4^8 = 65,536) to cover all of the approximately 20,000 RNA species in a cell.
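The combinatorics are easy to sketch in a few lines of code (the dye names below are hypothetical placeholders; the actual probe chemistry is far more involved):

    # Enumerate every possible color sequence ("bar code") across rounds.
    from itertools import product

    fluorophores = ["red", "green", "blue", "yellow"]  # hypothetical dye set
    rounds = 8

    barcodes = list(product(fluorophores, repeat=rounds))
    print(len(barcodes))  # 4**8 = 65536 distinct bar codes
    print(barcodes[0])    # one color per round, e.g. ('red', 'red', ..., 'red')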

Cai says FISH SCALYS could be used to determine molecular identities of various types of cells, including embryonic stem cells. "One subset of genes will be turned on for one type of cell and off for another," he explains. It could also provide insight into the way that diseases alter cells, allowing researchers to compare the expression differences for a large number of genes in normal tissue versus diseased tissue.

Cai has recently been funded by the McKnight Endowment Fund for Neuroscience to adapt the technique to identify different types of neurons in samples from the hippocampus, a part of the brain associated with memory and learning.

Cai is also leading a program through Caltech's Beckman Institute that is helping other researchers on campus apply the imaging method to diverse biological questions.

Cai and his team describe the technique in a Nature Methods paper titled "Single-cell in situ RNA profiling by sequential hybridization." Caltech graduate student Eric Lubeck and postdoctoral scholar Ahmet Coskun are lead authors on the paper. Additional coauthors include Timur Zhiyentayev, a former Caltech graduate student, and Mubhij Ahmad, a former research technician in the Cai lab. The work has been funded by the National Institutes of Health's Single Cell Analysis Program.

Kimm Fesenmaier

An Antibody That Can Attack HIV in New Ways

Proteins called broadly neutralizing antibodies (bNAbs) are a promising key to the prevention of infection by HIV, the virus that causes AIDS. bNAbs have been found in blood samples from some HIV patients whose immune systems can naturally control the infection. These antibodies may protect a patient's healthy cells by recognizing a protein called the envelope spike (present on the surface of all HIV strains) and inhibiting, or neutralizing, the effects of the virus. Now Caltech researchers have discovered that one particular bNAb may be able to recognize this signature protein even as it takes on different conformations during infection—making it easier for the antibody to detect and neutralize the virus in an infected patient.

The work, from the laboratory of Pamela Bjorkman, Centennial Professor of Biology, was published in the September 10 issue of the journal Cell.

The process of HIV infection begins when the virus comes into contact with human immune cells called T cells that carry a particular protein, CD4, on their surface. Three-part (or "trimer") proteins called envelope spikes on the surface of the virus recognize and bind to the CD4 proteins. The spikes can be in either a closed or an open conformation, going from closed to open when the spike binds to CD4. The open conformation then triggers fusion of the virus with the target cell, allowing the virus to deposit its genetic material inside the host cell and forcing the host to become a factory for making new viruses that can go on to infect other cells.

The bNAbs recognize the envelope spike on the surface of HIV, and most known bNAbs recognize the spike only in its closed conformation. Although the envelope spike is the only target of neutralizing antibodies, each bNAb actually functions by recognizing just one specific site, or epitope, on this protein. Some epitopes allow more effective neutralization of the virus, and, therefore, some bNAbs are more effective against HIV than others. In 2014, Bjorkman and her collaborators at Rockefeller University reported the initial characterization of a potent bNAb called 8ANC195 in the blood of HIV patients whose immune systems could naturally control their infections. They also discovered that this antibody neutralizes HIV by targeting a different epitope than any other previously identified bNAb.

In the work described in the recent Cell paper, they investigated how 8ANC195 functions—and how its unique properties could be beneficial for HIV therapies.

"In Pamela's lab we use X-ray crystallography and electron microscopy to study protein–protein interactions on a molecular level," says Louise Scharf, a postdoctoral scholar in Bjorkman's laboratory and the first author on the paper. "We previously were able to define the binding site of this antibody on a subunit of the HIV envelope spike, so in this study we solved the three-dimensional structure of this antibody in complex with the entire spike, and showed in detail exactly how the antibody recognizes the virus."

What they found was that although most bNAbs recognize the envelope spike in its closed conformation, 8ANC195 could recognize the viral protein in both the closed conformation and a partially open conformation. "We think it's actually an advantage if the antibody can recognize these different forms," Scharf says.

The most common form of HIV infection is when a virus in the bloodstream attaches to a T cell and infects the cell. In this instance, the spikes on the free-floating virus would be predominantly in the closed conformation until they made contact with the host cell. Most bNAbs could neutralize this virus. However, HIV also can spread directly from one cell to another. In this case, because the virus is already attached to the host cell, the spike is in an open conformation. But 8ANC195 could still recognize and attach to it.

A potential medical application of this antibody is in so-called combination therapies, in which a patient is given a cocktail of several antibodies that work in different ways to fight off the virus as it rapidly changes and evolves. "Our collaborators at Rockefeller have studied this extensively in animal models, showing that if you administer a combination of these antibodies, you greatly reduce how much of the virus can escape and infect the host," Scharf says. "So 8ANC195 is one more antibody that we can use therapeutically; it targets a different epitope than other potent antibodies, and it has the advantage of being able to recognize these multiple conformations."

The idea of bNAb therapeutics might not be far from a clinical reality. Scharf says that the same collaborators at Rockefeller University are already testing bNAbs in a human treatment in a clinical trial. Although the initial trial will not include 8ANC195, the antibody may be included in a combination therapy trial in the near future, Scharf says.

Furthermore, the availability of complete information about how 8ANC195 binds to the viral spike will allow Scharf, Bjorkman, and their colleagues to begin engineering the antibody to be more potent and able to recognize more strains of HIV.

"In addition to supporting the use of 8ANC195 for therapeutic applications, our structural studies of 8ANC195 have revealed an unanticipated new conformation of the HIV envelope spike that is relevant to understanding the mechanism by which HIV enters host cells and bNAbs inhibit this process," Bjorkman says.

These results were published in a journal article titled "Broadly Neutralizing Antibody 8ANC195 Recognizes Closed and Open States of HIV-1 Env." In addition to Scharf and Bjorkman, other Caltech coauthors include graduate student Haoqing Wang, research technician Han Gao, research scientist Songye Chen, and Beckman Institute resource director Alasdair W. McDowall. Funding for the work was provided by the National Institute of Allergy and Infectious Diseases of the National Institutes of Health; the Bill and Melinda Gates Foundation; and the American Cancer Society. Crystallography and electron microscopy were done at the Molecular Observatory at Caltech, supported by the Gordon and Betty Moore Foundation.

Where to Land Mars 2020: A Conversation with Ken Farley

In August 2015, more than 150 scientists interested in the exploration of Mars attended a conference at a hotel in Arcadia, California, to evaluate 21 potential landing sites for NASA's next Mars rover, a mission called Mars 2020. The design of that mission will be based on that of the Mars Science Laboratory (MSL), including the sky-crane landing system that helped put the rover, Curiosity, safely on martian soil.

Over the course of three days, the scientists heard presentations about the proposed sites and voted on the scientific merit of the locations. In the end, they arrived at a prioritized list of sites that offer the best opportunity for the mission to meet its objectives—including the search for signs of ancient life on the Red Planet and collecting and storing (or "caching") scientifically interesting samples for possible return to Earth.

We recently spoke with Ken Farley, the mission's project scientist and the W.M. Keck Foundation Professor of Geochemistry at Caltech, to talk about the workshop and how the Mars 2020 landing site selection process is shaping up.


Can you tell us a little bit about how these workshops help the project select a landing site?

We are using the same basic site selection process that has been used for previous Mars rovers. It involves heavy engagement from the scientific community because there are individual experts on specific sites who are not necessarily on the mission's science team. 

We put out a call for proposals to suggest specific sites, and respondents presented at the workshop. We provided presenters with a one-page template on which to indicate the characteristics of their landing site—basic facts, like what minerals are present. This became a way to distill a presentation into something that you could evaluate objectively and relatively quickly. When people flashed these rubrics up at the end of their presentations, there was some interesting peer review going on in real time.

We went through all 21 sites, talking about what was at each location. In the end, we needed to boil down the input and get a sense of which sites the community was most interested in. So we used a scorecard that tied directly to the mission objectives; there were five criteria, and attendees were able to indicate how well they felt each site met each requirement by voting "low," "medium," or "high." Then we tallied up the votes.


You mentioned that the criteria on the scorecard were related to the objectives of the mission. What are those objectives?

We have four mission objectives. One is to prepare the way for human exploration of Mars. The rover will have a weather station and an instrument that converts atmospheric carbon dioxide into oxygen—it's called the in situ resource utilization (ISRU) payload. This is a way to make oxygen both for human consumption and, even more importantly, for propellant. In terms of the landing site process, this objective was not a driving factor because the ISRU payload and the weather station don't really care where they go.


And the other three objectives?

We call the three remaining objectives the "ABC" goals. A is to explore the landing site. That's a basic part of a geologic study—you look around and see what's there and try to understand the geologic processes that made it.

The B goal is to explore an "astrobiologically relevant environment," to look for rocks in habitable environments that have the ability to preserve biosignatures—evidence of past or present life—and then to look for biosignatures in those rocks. The phrase that NASA attaches to our mission is "Seeking the Signs of Life." We have a bunch of science instruments on the rover that will help us meet those objectives.

Then the C goal is to prepare a returnable cache of samples. The word "returnable" has a technical definition—the cache has to meet a bunch of criteria, and one is that it has to have enough scientific merit to return. Previous studies of what constitutes returnability have suggested we need a number of samples in the mid 30s—we use the number 37.


Why 37?

It may seem like an odd number, but there is a reason for it. Thirty-seven is the maximum number of samples that can be packed into a circular honeycomb inside one possible design of the sample return assembly.
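That count matches a hexagonal packing of circular tubes: a central tube surrounded by successive rings of 6, 12, and 18 gives 1 + 6 + 12 + 18 = 37. (The packing detail is an inference from the "circular honeycomb" description, not a stated design fact.)

    # Circles packed in a hexagonal ("honeycomb") arrangement:
    # one center circle plus successive rings of 6, 12, 18, ... circles.
    def hex_packed(rings):
        return 1 + sum(6 * k for k in range(1, rings + 1))

    print([hex_packed(n) for n in range(5)])  # [1, 7, 19, 37, 61]: 3 rings give 37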

The huge task for us is to be able to drill that many samples. We've learned from MSL that everything takes a long time. Driving takes a long time, drilling takes a long time. We have a very specific mandate that we have to be capable of collecting 20 samples in the prime mission. Collecting at least 20 samples will motivate what we do in designing the rover.

It also has motivated a lot of the discussion of landing sites. You've got to have targets you wish to drill that are close together, and they can't be a long drive from where you land. There also has to be diversity because you don't want 15 copies of the same sample.


After all of those factors were considered, what was the outcome of the voting?

What came out of it was an ordered list of eight sites. One interesting thing about that list was that the sites were divided roughly equally into two kinds—those that were crater lakes with deltas and those that we would broadly call hydrothermal sites. These are locations that the community believes are most likely to have ancient life in them and preserve the evidence of it.

It's easy to understand the deltas because if you look in the terrestrial environment, a delta is an excellent place to look for organic matter. The things that are living in the water above the delta and upstream are washed into the delta when they die. Then mud packs in on top and preserves that material.


What is interesting about hydrothermal systems?

A hydrothermal system is in some ways very appealing but in some ways risky. These are places where rocks are hot enough to heat water to extremely high temperatures. At hydrothermal vents on Earth's sea floor, you have these strange creatures that are essentially living off chemical energy from inside the planet. And, in fact, the oldest evidence for life on Earth may have been found in hydrothermal settings. The problem is these settings are precarious; when the water gets a little too hot, everything dies.


What is the heat source for the hydrothermal sites on Mars?

There are two important heat sources—one is impact and the other is volcanic. A whole collection of our top sites are in a region next to a giant impact crater, and when you look at those rocks, they have chemical and mineralogical characteristics that look like hydrothermal alteration.

A leading candidate of the volcanic type is a site in Gusev Crater called the Columbia Hills site, which the Spirit rover studied. The rover came across a silica deposit. At the time, scientists didn't really know what it was, but it is now thought that the silica is actually a product of volcanic activity called sinter. The presenter for the site showed pictures from Spirit of these little bits of sinter and then showed pictures of something that looks almost exactly the same from a geothermal field in Chile. It was a pretty compelling comparison. Then he went on to show that these environments on Earth are very conducive to life and that the little silica blobs preserve biosignatures well.

So although it would be an interesting decision to invest another mission in the same location, that site was favored because it's the only place where a mineral that might contain signs of ancient life is known to exist with certainty.


Do these two types of sites differ just in terms of their ancient environments?

No. It turns out that you can see most of the deltas from orbit because they are pretty much the last gasp of processing of the martian surface. They date to a period about 3.6 billion years ago, when the planet transitioned from a warm, wet period to basically being desiccated. Some of the hydrothermal sites may have rocks that are in the 4-billion-year-old range. That age difference may not sound like much, but in terms of an evolving planet that is dying, it raises interesting questions. If you want to allow the maximum amount of time for life to have evolved, maybe you choose a delta site. On the other hand, you might say, "Mars is dying at that point," and you want to try to get samples that include a record from an earlier, more equable period.

Since the community is divided roughly evenly between these two types of sites, one of the important questions we will have to wrestle with until the next workshop (in early 2017) is, "Which of those kinds of sites is more promising?" We need to engage a bigger community to address this question.


What happened to the list generated from this workshop?

This workshop was almost exclusively about science. The mission's leadership and members of the Mars 2020 Landing Site Steering Committee, appointed by NASA, then took the information from the workshop, rolled it up with information that the project had generated on things like whether the sites could be landed on, and came up with a list of eight sites in alphabetic order:

  • Columbia Hills/Gusev
  • Eberswalde
  • Holden
  • Jezero
  • Mawrth Vallis
  • NE Syrtis Major
  • Nili Fossae
  • SW Melas Chasma

What comes next?

Over the course of the coming year, the Mars 2020 engineering team will continue its study of the feasibility of the highly ranked landing sites. At the same time, the science team will dig deeply into what is known about each site, seeking to identify the sites that are best suited to meet the mission's science goals. I expect that advocates for specific sites will also continue doing their homework to make the strongest possible case for their preferred site. And in 2017, we'll do the workshop all over again!

Seeing Quantum Motion

Consider the pendulum of a grandfather clock. If you forget to wind it, you will eventually find the pendulum at rest, unmoving. This simple observation, however, is valid only at the level of classical physics—the laws and principles that appear to explain the physics of relatively large objects at human scale. Quantum mechanics, the underlying physical rules that govern the fundamental behavior of matter and light at the atomic scale, states that nothing can ever be completely at rest.

For the first time, a team of Caltech researchers and collaborators has found a way to observe—and control—this quantum motion of an object that is large enough to see. Their results are published in the August 27 online issue of the journal Science.

Researchers have known for years that in classical physics, physical objects indeed can be motionless. Drop a ball into a bowl, and it will roll back and forth a few times. Eventually, however, this motion will be overcome by other forces (such as gravity and friction), and the ball will come to a stop at the bottom of the bowl.

"In the past couple of years, my group and a couple of other groups around the world have learned how to cool the motion of a small micrometer-scale object to produce this state at the bottom, or the quantum ground state," says Keith Schwab, a Caltech professor of applied physics, who led the study. "But we know that even at the quantum ground state, at zero-temperature, very small amplitude fluctuations—or noise—remain."

Because this quantum motion, or noise, is theoretically an intrinsic part of the motion of all objects, Schwab and his colleagues designed a device that would allow them to observe this noise and then manipulate it.

The micrometer-scale device consists of a flexible aluminum plate that sits atop a silicon substrate. The plate is coupled to a superconducting electrical circuit and vibrates at a rate of 3.5 million times per second. According to the laws of classical mechanics, the vibrating structure eventually should come to a complete rest if cooled to the ground state.

But that is not what Schwab and his colleagues observed when they actually cooled the spring to the ground state in their experiments. Instead, the residual energy—quantum noise—remained.

"This energy is part of the quantum description of nature—you just can't get it out," says Schwab. "We all know quantum mechanics explains precisely why electrons behave weirdly. Here, we're applying quantum physics to something that is relatively big, a device that you can see under an optical microscope, and we're seeing the quantum effects in a trillion atoms instead of just one."

Because this noisy quantum motion is always present and cannot be removed, it places a fundamental limit on how precisely one can measure the position of an object.

But that limit, Schwab and his colleagues discovered, is not insurmountable. Coauthors Aashish Clerk from McGill University and Florian Marquardt from the Max Planck Institute for the Science of Light proposed a novel method to manipulate the inherent quantum noise, predicting that it could be reduced periodically. The technique was then implemented on a micron-scale mechanical device in Schwab's low-temperature laboratory at Caltech, where the researchers found that the noise could indeed be reduced periodically.

"There are two main variables that describe the noise or movement," Schwab explains. "We showed that we can actually make the fluctuations of one of the variables smaller—at the expense of making the quantum fluctuations of the other variable larger. That is what's called a quantum squeezed state; we squeezed the noise down in one place, but because of the squeezing, the noise has to squirt out in other places. But as long as those more noisy places aren't where you're obtaining a measurement, it doesn't matter."

The ability to control quantum noise could one day be used to improve the precision of very sensitive measurements, such as those obtained by LIGO, the Laser Interferometer Gravitational-wave Observatory, a Caltech- and MIT-led project searching for signs of gravitational waves, ripples in the fabric of space-time.

"We've been thinking a lot about using these methods to detect gravitational waves from pulsars—incredibly dense stars that are the mass of our sun compressed into a 10 km radius and spin at 10 to 100 times a second," Schwab says. "In the 1970s, Kip Thorne [Caltech's Richard P. Feynman Professor of Theoretical Physics, Emeritus] and others wrote papers saying that these pulsars should be emitting gravity waves that are nearly perfectly periodic, so we're thinking hard about how to use these techniques on a gram-scale object to reduce quantum noise in detectors, thus increasing the sensitivity to pick up on those gravity waves," Schwab says.

In order to do that, the current device would have to be scaled up. "Our work aims to detect quantum mechanics at bigger and bigger scales, and one day, our hope is that this will eventually start touching on something as big as gravitational waves," he says.

These results were published in an article titled, "Quantum squeezing of motion in a mechanical resonator." In addition to Schwab, Clerk, and Marquardt, other coauthors include former graduate student Emma E. Wollman (PhD '15); graduate students Chan U. Lei and Ari J. Weinstein; former postdoctoral scholar Junho Suh; and Andreas Kronwald of Friedrich-Alexander-Universität in Erlangen, Germany. The work was funded by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency, and the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center that also has support from the Gordon and Betty Moore Foundation.

Why Did Western Europe Dominate the Globe?

Although Europe represents only about 8 percent of the planet's landmass, from 1492 to 1914, Europeans conquered or colonized more than 80 percent of the entire world. Being dominated for centuries has led to lingering inequality and long-lasting effects in many formerly colonized countries, including poverty and slow economic growth. There are many possible explanations for why history played out this way, but few can explain why the West was so powerful for so long.

Caltech's Philip Hoffman, the Rea A. and Lela G. Axline Professor of Business Economics and professor of history, has a new explanation: the advancement of gunpowder technology. The Chinese invented gunpowder, but Hoffman, whose work applies economic theory to historical contexts, argues that certain political and economic circumstances allowed the Europeans to advance gunpowder technology at an unprecedented rate—allowing a relatively small number of people to quickly take over much of the rest of the globe.

Hoffman's work is published in a new book titled Why Did Europe Conquer the World? We spoke with him recently about his research interests and what led him to study this particular topic.

You have been on the Caltech faculty for more than 30 years. Are there any overarching themes to your work?

Over the years I've been interested in a number of different things, and this new work puts together a lot of bits of my research. I've looked at changes in technology that influence agriculture, and I've studied the development of financial markets, and in between those two, I was also studying why financial crises occur. I've also been interested in the development of tax systems. For example, how did states get the ability to impose heavy taxes? What were the politics and the political context of the economy that resulted in this ability to tax?

What led you to investigate the global conquests of western Europe?

It's just fascinating. In 1914, really only China, Japan, and the Ottoman Empire had escaped becoming European colonies. A thousand years ago, no one would have ever expected that result, for at that point western Europe was hopelessly backward. It was politically weak, it was poor, and the major long-distance commerce was a slave trade led by Vikings. The political dominance of western Europe was an unexpected outcome and had really big consequences, so I thought: let's explain it.


Many theories purport to explain how the West became dominant. For example, that Europe became industrialized more quickly and therefore became wealthier than the rest of the world. Or, that when Europeans began to travel the world, people in other countries did not have the immunity to fight off the diseases they brought with them. How is your theory different?

Yes, there are lots of conventional explanations—industrialization, for example—but on closer inspection they all fall apart. Before 1800, Europe had already taken over at least 35 percent of the world, but Britain was just beginning to industrialize. The rest of Europe at that time was really no wealthier than China, the Middle East, or South Asia. So as an explanation, industrialization doesn't work.

Another explanation, described in Jared Diamond's famous book [Guns, Germs, and Steel: The Fates of Human Societies], is disease. But something like the smallpox epidemic that ravaged Mexico when the Spanish conquistador Hernán Cortés overthrew the Aztec Empire just isn't the whole story of Cortés's victory or of Europe's successful colonization of other parts of the world. Disease can't explain, for example, the colonization of India, because people in southeast Asia had the same immunity to disease that the Europeans did. So that's not the answer—it's something else.


What made you turn to the idea of gunpowder technology as an explanation?

It started after I gave an undergraduate here a book to read about gunpowder technology, how it was invented in China and used in Japan and Southeast Asia, and how the Europeans got very good at using it, which fed into their successful conquests. I'd given it to him because the use of this technology is related to politics and fiscal systems and taxes, and as he was reading it, he noted that the book did not give the ultimate cause of why Europe in particular was so successful. That was a really great question and it got me interested.


What was so special about gunpowder?

Gunpowder was really important for conquering territory; it allows a small number of people to exercise a lot of influence. The technology grew to include more than just guns: armed ships, fortifications that could resist artillery, and more. The Europeans became the best at using these things.

So, I put together an economic model of how this technology has advanced to come up with what I think is the real reason why the West conquered almost everyone else. My idea incorporates the model of a contest or a tournament where your odds of winning are higher if you spend more resources on fighting. You can think of that as being much like a baseball team that hires better players to win more games, but in this case, instead of coaches, it's political leaders and instead of games there are wars. And the more that the political leaders spend, the better their chances of defeating other leaders and, in the long run, of dominating the other cultures.
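A standard way to formalize such a tournament (a minimal sketch using a conventional contest success function; not necessarily the exact specification in Hoffman's book) is to make each leader's probability of winning equal to his share of total fighting expenditure:

    # Minimal tournament sketch: win probability proportional to spending,
    # a standard Tullock contest success function (an assumption here, not
    # necessarily the functional form used in Hoffman's model).
    def win_probability(own_spending, rival_spending):
        total = own_spending + rival_spending
        return own_spending / total if total > 0 else 0.5

    print(win_probability(100, 50))   # ~0.67: the heavier spender wins more often
    print(win_probability(100, 100))  # 0.50: evenly matched rivals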


What kinds of factors are included in this model?

One big factor that's important to the advancement of any defense technology is how much money a political leader can spend. That comes down to the political costs of raising revenue and a leader's ability to tax. In the very successful countries, the leaders could impose very heavy taxes and spend huge sums on war.

The economic model then connected that spending to changes in military technology. The spending on war gave leaders a chance to try out new weapons, new armed ships, and new tactics, and to learn from mistakes on the battlefield. The more they spent, the more chances they had to improve their military technology through trial and error while fighting wars. So more spending would not only mean greater odds of victory over an enemy, but more rapid change in military technology.

If you think about it, you realize that advancements in gunpowder technology—which are important for conquest—arise where political leaders fight using that technology, where they spend huge sums on it, and where they're able to share the resulting advances in that technology. For example, if I am fighting you and you figure out a better way to build an armed ship, I can imitate you. For that to happen, the countries have to be small and close to one another. And all of this describes Europe.
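To make those moving parts concrete, here is a minimal simulation sketch of the dynamic just described: two rival leaders, a contest decided by technology-weighted spending, learning by doing that scales with spending, and imitation between close neighbors. The functional forms and parameter values are illustrative assumptions, not equations taken from the book.

```python
import random

def simulate(periods=500, spend=(10.0, 10.0), learn=0.01, imitate=0.5, seed=0):
    """Toy tournament between two rival leaders (illustrative only)."""
    rng = random.Random(seed)
    tech = [1.0, 1.0]   # military technology level of each leader
    wins = [0, 0]
    for _ in range(periods):
        # Effective force = technology x resources spent on fighting.
        force = [tech[i] * spend[i] for i in (0, 1)]
        winner = 0 if rng.random() < force[0] / (force[0] + force[1]) else 1
        wins[winner] += 1
        # Learning by doing: more spending means more trials on the
        # battlefield, hence faster improvement in military technology.
        for i in (0, 1):
            tech[i] += learn * spend[i]
        # Small, close-together rivals can copy each other's advances,
        # so the laggard closes part of the technology gap each period.
        gap = max(tech) - min(tech)
        tech[tech.index(min(tech))] += imitate * gap
    return wins, tech

if __name__ == "__main__":
    for spend in [(10.0, 10.0), (20.0, 10.0)]:
        wins, tech = simulate(spend=spend)
        print(f"spending {spend}: wins {wins}, "
              f"final technology {[round(t, 1) for t in tech]}")
```

Run this way, the bigger spender both wins more wars and pushes technology upward faster, while imitation keeps the rivals' technology levels close together, the combination the interview identifies with early-modern Europe.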


What does this mean in a modern context?

One lesson the book teaches is that actions involving war, foreign policy, and military spending can have big, long-lasting consequences, something policy makers should never forget. The book also reminds us that in a world where there are hostile powers, we really don't want to get rid of spending on improving military technology. Those improvements can help at times when wars are necessary—for instance, when we are fighting against enemies with whom we cannot negotiate. Such enemies existed in the past—they were fighting for glory on the battlefield or victory over an enemy of the faith—and one could argue that they pose a threat today as well.

Things are much better if the conflict concerns something that can be split up—such as money or land. Then you can bargain with your enemies to divvy up whatever you disagree about and you can have something like peace. You'll still need to back up the peace with armed forces, but you won't actually fight all that much, and that's a much better outcome.

In either case, you'll still be spending money on the military and on military research. Personally, I would much rather see expenditures devoted to infrastructure, or scientific research, or free preschool for everybody—things that would carry big economic benefits—but in this world, I don't think you can stop doing military research or spending money on the military. I wish we did live in that world, but unfortunately it's not realistic.


Artificial Leaf Harnesses Sunlight for Efficient Fuel Production

The difficulty of generating and storing renewable energy, such as solar or wind power, is a key barrier to a clean-energy economy. When the Joint Center for Artificial Photosynthesis (JCAP), a U.S. Department of Energy (DOE) Energy Innovation Hub, was established at Caltech and its partner institutions in 2010, it had one main goal: a cost-effective method of producing fuels using only sunlight, water, and carbon dioxide, mimicking the natural process of photosynthesis in plants and storing energy in the form of chemical fuels for use on demand. Over the past five years, researchers at JCAP have made major advances toward this goal, and they now report the development of the first complete, efficient, safe, integrated solar-driven system for splitting water to create hydrogen fuel.

"This result was a stretch project milestone for the entire five years of JCAP as a whole, and not only have we achieved this goal, we also achieved it on time and on budget," says Caltech's Nate Lewis, George L. Argyros Professor and professor of chemistry, and the JCAP scientific director.

The new solar fuel generation system, or artificial leaf, is described in the August 27 online issue of the journal Energy & Environmental Science. The work was done by researchers in the laboratories of Lewis and Harry Atwater, director of JCAP and Howard Hughes Professor of Applied Physics and Materials Science.

"This accomplishment drew on the knowledge, insights and capabilities of JCAP, which illustrates what can be achieved in a Hub-scale effort by an integrated team," Atwater says. "The device reported here grew out of a multi-year, large-scale effort to define the design and materials components needed for an integrated solar fuels generator."

[Image: Solar fuels prototype in operation. A fully integrated photoelectrochemical device performing unassisted solar water splitting for the production of hydrogen fuel. Credit: Erik Verlage and Chengxiang Xiang/Caltech]

The new system consists of three main components: two electrodes—one photoanode and one photocathode—and a membrane. The photoanode uses sunlight to oxidize water molecules, generating protons and electrons as well as oxygen gas. The photocathode recombines the protons and electrons to form hydrogen gas. A key part of the JCAP design is the plastic membrane, which keeps the oxygen and hydrogen gases separate. If the two gases are allowed to mix and are accidentally ignited, an explosion can occur; the membrane lets the hydrogen fuel be separately collected under pressure and safely pushed into a pipeline.
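Written in the proton-transfer form the article uses, the chemistry at the two electrodes is the standard pair of water-splitting half-reactions:

```latex
% Water-splitting half-reactions: oxidation at the photoanode,
% reduction at the photocathode (proton-transfer form, as in the text).
\text{photoanode:}\quad   2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-
\text{photocathode:}\quad 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2}
```

The membrane must therefore pass ions between the electrodes to complete the circuit while keeping the product gases apart.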

Semiconductors such as silicon or gallium arsenide absorb light efficiently and are therefore used in solar panels. However, these materials also oxidize (or rust) at the surface when exposed to water, and so cannot be used to generate fuel directly. A major advance that allowed the integrated system to be developed was previous work in Lewis's laboratory showing that adding a nanometers-thick layer of titanium dioxide (TiO2)—a material found in white paint and many toothpastes and sunscreens—onto the electrodes could prevent them from corroding while still allowing light and electrons to pass through. The new complete solar fuel generation system developed by Lewis and colleagues uses such a 62.5-nanometer-thick TiO2 layer to effectively prevent corrosion and improve the stability of a gallium arsenide–based photoelectrode.

Another key advance is the use of active, inexpensive catalysts for fuel production. The photoanode requires a catalyst to drive the essential water-splitting reaction. Rare and expensive metals such as platinum can serve as effective catalysts, but the team discovered that it could create a much cheaper yet still highly active catalyst by adding a 2-nanometer-thick layer of nickel to the surface of the TiO2. This catalyst is among the most active known for splitting water molecules into oxygen, protons, and electrons, and it is a key to the high efficiency displayed by the device.

The photoanode was grown onto a photocathode, which also contains a highly active, inexpensive nickel-molybdenum catalyst, to create a fully integrated single material that serves as a complete solar-driven water-splitting system.

A critical component that contributes to the efficiency and safety of the new system is the special plastic membrane that separates the gases and prevents the possibility of an explosion, while still allowing the ions to flow seamlessly to complete the electrical circuit in the cell. All of the components are stable under the same conditions and work together to produce a high-performance, fully integrated system. The demonstration system is approximately one square centimeter in area, converts 10 percent of the energy in sunlight into stored energy in the chemical fuel, and can operate for more than 40 hours continuously.
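As a quick plausibility check on the 10 percent figure, the standard solar-to-hydrogen (STH) efficiency definition ties it to the device's operating current density. The arithmetic below assumes the usual 1-sun input and the thermodynamic water-splitting potential; the implied current density is inferred here, not quoted from the paper.

```python
# Back-of-the-envelope check of a 10% solar-to-hydrogen (STH) efficiency,
# using the standard definition STH = j_op * E_thermo / P_in.
P_in = 100.0     # incident solar power under 1-sun illumination, mW/cm^2
E_thermo = 1.23  # V, thermodynamic potential for splitting water
STH = 0.10       # reported solar-to-hydrogen efficiency

j_op = STH * P_in / E_thermo  # mA/cm^2 (since mA * V = mW)
print(f"implied operating current density: {j_op:.1f} mA/cm^2")  # ~8.1
```

In other words, if the efficiency holds throughout, the device sustains roughly 8 mA/cm² across the entire 40-plus hours of continuous operation reported above.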

"This new system shatters all of the combined safety, performance, and stability records for artificial leaf technology by factors of 5 to 10 or more ," Lewis says.

"Our work shows that it is indeed possible to produce fuels from sunlight safely and efficiently in an integrated system with inexpensive components," Lewis adds, "Of course, we still have work to do to extend the lifetime of the system and to develop methods for cost-effectively manufacturing full systems, both of which are in progress."

Because the work assembled various components that were developed by multiple teams within JCAP, coauthor Chengxiang Xiang, who is co-leader of the JCAP prototyping and scale-up project, says that the successful end result was a collaborative effort. "JCAP's research and development in device design, simulation, and materials discovery and integration all funneled into the demonstration of this new device," Xiang says.

These results are published in a paper titled "A monolithically integrated, intrinsically safe, 10% efficient, solar-driven water-splitting system based on active, stable earth-abundant electrocatalysts in conjunction with tandem III-V light absorbers protected by amorphous TiO2 films." In addition to Lewis, Atwater, and Xiang, other Caltech coauthors include graduate student Erik Verlage, postdoctoral scholars Shu Hu and Ke Sun, material processing and integration research engineer Rui Liu, and JCAP mechanical engineer Ryan Jones. Funding was provided by the Office of Science of the U.S. Department of Energy and the Gordon and Betty Moore Foundation.
