An Antibody That Can Attack HIV in New Ways

Proteins called broadly neutralizing antibodies (bNAbs) are a promising key to the prevention of infection by HIV, the virus that causes AIDS. bNAbs have been found in blood samples from some HIV patients whose immune systems can naturally control the infection. These antibodies may protect a patient's healthy cells by recognizing a protein called the envelope spike, which is present on the surface of all HIV strains, and inhibiting, or neutralizing, the effects of the virus. Now Caltech researchers have discovered that one particular bNAb may be able to recognize this signature protein even as it takes on different conformations during infection—making it easier to detect and neutralize the virus in an infected patient.

The work, from the laboratory of Pamela Bjorkman, Centennial Professor of Biology, was published in the September 10 issue of the journal Cell.

The process of HIV infection begins when the virus comes into contact with human immune cells called T cells that carry a particular protein, CD4, on their surface. Three-part (or "trimer") proteins called envelope spikes on the surface of the virus recognize and bind to the CD4 proteins. The spikes can be in either a closed or an open conformation, going from closed to open when the spike binds to CD4. The open conformation then triggers fusion of the virus with the target cell, allowing HIV to deposit its genetic material inside the host cell and forcing the host to become a factory for making new viruses that can go on to infect other cells.

The bNAbs recognize the envelope spike on the surface of HIV, and most known bNAbs only recognize the spike in the closed conformation. Although the only target of neutralizing antibodies is the envelope spike, each bNAb actually functions by recognizing just one specific target, or epitope, on this protein. Some targets allow more effective neutralization of the virus, and, therefore, some bNAbs are more effective against HIV than others. In 2014, Bjorkman and her collaborators at Rockefeller University reported the initial characterization of a potent bNAb called 8ANC195 in the blood of HIV patients whose immune systems could naturally control their infections. They also discovered that this antibody could neutralize HIV by targeting a different epitope than any other previously identified bNAb.

In the work described in the recent Cell paper, they investigated how 8ANC195 functions—and how its unique properties could be beneficial for HIV therapies.

"In Pamela's lab we use X-ray crystallography and electron microscopy to study protein–protein interactions on a molecular level," says Louise Scharf, a postdoctoral scholar in Bjorkman's laboratory and the first author on the paper. "We previously were able to define the binding site of this antibody on a subunit of the HIV envelope spike, so in this study we solved the three-dimensional structure of this antibody in complex with the entire spike, and showed in detail exactly how the antibody recognizes the virus."

What they found was that although most bNAbs recognize the envelope spike in its closed conformation, 8ANC195 could recognize the viral protein in both the closed conformation and a partially open conformation. "We think it's actually an advantage if the antibody can recognize these different forms," Scharf says.

The most common form of HIV infection occurs when a virus in the bloodstream attaches to a T cell and infects it. In this instance, the spikes on the free-floating virus would be predominantly in the closed conformation until they made contact with the host cell, and most bNAbs could neutralize the virus at this stage. However, HIV also can spread directly from one cell to another. In this case, because the virus is already attached to the host cell, the spike is in an open conformation—but 8ANC195 could still recognize and attach to it.

A potential medical application of this antibody is in so-called combination therapies, in which a patient is given a cocktail of several antibodies that work in different ways to fight off the virus as it rapidly changes and evolves. "Our collaborators at Rockefeller have studied this extensively in animal models, showing that if you administer a combination of these antibodies, you greatly reduce how much of the virus can escape and infect the host," Scharf says. "So 8ANC195 is one more antibody that we can use therapeutically; it targets a different epitope than other potent antibodies, and it has the advantage of being able to recognize these multiple conformations."

The idea of bNAb therapeutics might not be far from clinical reality. Scharf says that the same collaborators at Rockefeller University are already testing bNAbs in a human clinical trial. Although the initial trial will not include 8ANC195, the antibody may be included in a combination therapy trial in the near future, she says.

Furthermore, the availability of complete information about how 8ANC195 binds to the viral spike will allow Scharf, Bjorkman, and their colleagues to begin engineering the antibody to be more potent and able to recognize more strains of HIV.

"In addition to supporting the use of 8ANC195 for therapeutic applications, our structural studies of 8ANC195 have revealed an unanticipated new conformation of the HIV envelope spike that is relevant to understanding the mechanism by which HIV enters host cells and bNAbs inhibit this process," Bjorkman says.

These results were published in a journal article titled "Broadly Neutralizing Antibody 8ANC195 Recognizes Closed and Open States of HIV-1 Env." In addition to Scharf and Bjorkman, other Caltech coauthors include graduate student Haoqing Wang, research technician Han Gao, research scientist Songye Chen, and Beckman Institute resource director Alasdair W. McDowall. Funding for the work was provided by the National Institute of Allergy and Infectious Diseases of the National Institutes of Health; the Bill and Melinda Gates Foundation; and the American Cancer Society. Crystallography and electron microscopy were done at the Molecular Observatory at Caltech, supported by the Gordon and Betty Moore Foundation.


Where to Land Mars 2020: A Conversation with Ken Farley

In August 2015, more than 150 scientists interested in the exploration of Mars attended a conference at a hotel in Arcadia, California, to evaluate 21 potential landing sites for NASA's next Mars rover, a mission called Mars 2020. The design of that mission will be based on that of the Mars Science Laboratory (MSL), including the sky-crane landing system that helped put the rover, Curiosity, safely on martian soil.

Over the course of three days, the scientists heard presentations about the proposed sites and voted on the scientific merit of the locations. In the end, they arrived at a prioritized list of sites that offer the best opportunity for the mission to meet its objectives—including the search for signs of ancient life on the Red Planet and collecting and storing (or "caching") scientifically interesting samples for possible return to Earth.

We recently spoke with Ken Farley, the mission's project scientist and the W.M. Keck Foundation Professor of Geochemistry at Caltech, about the workshop and how the Mars 2020 landing site selection process is shaping up.

 

Can you tell us a little bit about how these workshops help the project select a landing site?

We are using the same basic site selection process that has been used for previous Mars rovers. It involves heavy engagement from the scientific community because there are individual experts on specific sites who are not necessarily on the mission's science team. 

We put out a call for proposals to suggest specific sites, and respondents presented at the workshop. We provided presenters with a one-page template on which to indicate the characteristics of their landing site—basic facts, like what minerals are present. This became a way to distill a presentation into something that you could evaluate objectively and relatively quickly. When people flashed these rubrics up at the end of their presentations, there was some interesting peer review going on in real time.

We went through all 21 sites, talking about what was at each location. In the end, we needed to boil down the input and get a sense of which sites the community was most interested in. So we used a scorecard that tied directly to the mission objectives; there were five criteria, and attendees were able to indicate how well they felt each site met each requirement by voting "low," "medium," or "high." Then we tallied up the votes.
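As a rough illustration of that tally, here is a minimal sketch in Python; the site names, ballots, and point values are invented stand-ins, not actual workshop data:

    # Hypothetical tally of "low"/"medium"/"high" scorecard votes.
    # All names and numbers below are invented for illustration.
    WEIGHTS = {"low": 0, "medium": 1, "high": 2}  # assumed point values

    votes = {  # site -> all ballots cast across the five criteria
        "Site A": ["high", "high", "medium", "low", "high"],
        "Site B": ["medium", "medium", "high", "low", "low"],
    }

    def score(ballots):
        # Total weighted support for one site.
        return sum(WEIGHTS[v] for v in ballots)

    for site in sorted(votes, key=lambda s: score(votes[s]), reverse=True):
        print(site, score(votes[site]))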

 

You mentioned that the criteria on the scorecard were related to the objectives of the mission. What are those objectives?

We have four mission objectives. One is to prepare the way for human exploration of Mars. The rover will have a weather station and an instrument that converts atmospheric carbon dioxide into oxygen—it's called the in situ resource utilization (ISRU) payload. This is a way to make oxygen both for human consumption and, even more importantly, for propellant. In terms of the landing site process, this objective was not a driving factor because the ISRU and the weather station don't really care where they go.
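For reference, the net chemistry of such an oxygen generator is presumably a carbon dioxide–splitting reaction of the form below (an assumption based on standard CO2 electrolysis; the interview does not give the reaction):

    2\,\mathrm{CO_2} \rightarrow 2\,\mathrm{CO} + \mathrm{O_2}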

 

And the other three objectives?

We call the three remaining objectives the "ABC" goals. A is to explore the landing site. That's a basic part of a geologic study—you look around and see what's there and try to understand the geologic processes that made it.

The B goal is to explore an "astrobiologically relevant environment," to look for rocks in habitable environments that have the ability to preserve biosignatures—evidence of past or present life—and then to look for biosignatures in those rocks. The phrase that NASA attaches to our mission is "Seeking the Signs of Life." We have a bunch of science instruments on the rover that will help us meet those objectives.

Then the C goal is to prepare a returnable cache of samples. The word "returnable" has a technical definition—the cache has to meet a bunch of criteria, and one is that it has to have enough scientific merit to return. Previous studies of what constitutes returnability have suggested we need a number of samples in the mid-30s—we use the number 37.

 

Why 37?

It may seem like a strange number, but there is a reason for it. Thirty-seven is the maximum number of samples that can be packed into a circular honeycomb inside one possible design of the sample return assembly.
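One way to see how a circular honeycomb arrives at exactly 37 (an illustration; the interview does not spell out the packing): a central tube surrounded by concentric hexagonal rings of 6, 12, and 18 tubes gives

    N = 1 + \sum_{k=1}^{3} 6k = 1 + 6 + 12 + 18 = 37,

the third "centered hexagonal" number.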

The huge task for us is to be able to drill that many samples. We've learned from MSL that everything takes a long time. Driving takes a long time, drilling takes a long time. We have a very specific mandate that we have to be capable of collecting 20 samples in the prime mission. Collecting at least 20 samples will motivate what we do in designing the rover.

It also has motivated a lot of the discussion of landing sites. You've got to have targets you wish to drill that are close together, and they can't be a long drive from where you land. There also has to be diversity because you don't want 15 copies of the same sample.

 

After all of those factors were considered, what was the outcome of the voting?

What came out of it was an ordered list of eight sites. One interesting thing about that list was that the sites were divided roughly equally into two kinds—those that were crater lakes with deltas and those that we would broadly call hydrothermal sites. These are locations that the community believes are most likely to have ancient life in them and preserve the evidence of it.

It's easy to understand the deltas because if you look in the terrestrial environment, a delta is an excellent place to look for organic matter. The things that are living in the water above the delta and upstream are washed into the delta when they die. Then mud packs in on top and preserves that material.

 

What is interesting about hydrothermal systems?

A hydrothermal system is in some ways very appealing but in some ways risky. These are places where rocks are hot enough to heat water to extremely high temperatures. At hydrothermal vents on Earth's sea floor, you have these strange creatures that are essentially living off chemical energy from inside the planet. And, in fact, the oldest evidence for life on Earth may have been found in hydrothermal settings. The problem is these settings are precarious; when the water gets a little too hot, everything dies.

 

What is the heat source for the hydrothermal sites on Mars?

There are two important heat sources—one is impact and the other is volcanic. A whole collection of our top sites are in a region next to a giant impact crater, and when you look at those rocks, they have chemical and mineralogical characteristics that look like hydrothermal alteration.

A leading candidate of the volcanic type is a site in Gusev Crater called the Columbia Hills site, which the Spirit rover studied. The rover came across a silica deposit. At the time, scientists didn't really know what it was, but it is now thought that the silica is actually a product of volcanic activity called sinter. The presenter for the site showed pictures from Spirit of these little bits of sinter and then showed pictures of something that looks almost exactly the same from a geothermal field in Chile. It was a pretty compelling comparison. Then he went on to show that these environments on Earth are very conducive to life and that the little silica blobs preserve biosignatures well.

So although it would be an interesting decision to invest another mission in the same location, that site was favored because it's the only place where a mineral that might contain signs of ancient life is known to exist with certainty.

 

Do these two types of sites differ just in terms of their ancient environments?

No. It turns out that you can see most of the deltas from orbit because they are pretty much the last gasp of processing of the martian surface. They date to a period about 3.6 billion years ago when the planet transitioned from a warm, wet period to basically being desiccated. Some of the hydrothermal sites may have rocks that are in the 4-billion-year-old range. That age difference may not sound like much, but in terms of an evolving planet that is dying, it raises interesting questions. If you want to allow the maximum amount of time for life to have evolved, maybe you choose a delta site. On the other hand, you might say, "Mars is dying at that point," and you want to try to get samples that include a record from an earlier, more equable period.

Since the community is divided roughly evenly between these two types of sites, one of the important questions we will have to wrestle with until the next workshop (in early 2017) is, "Which of those kinds of sites is more promising?" We need to engage a bigger community to address this question.

 

What happened to the list generated from this workshop?

This workshop was almost exclusively about science. The mission's leadership and members of the Mars 2020 Landing Site Steering Committee, appointed by NASA, then took the information from the workshop, rolled it up with information that the project had generated on things like whether the sites could be landed on, and came up with a list of eight sites in alphabetical order:

  • Columbia Hills/Gusev
  • Eberswalde
  • Holden
  • Jezero
  • Mawrth Vallis
  • NE Syrtis Major
  • Nili Fossae
  • SW Melas Chasma
     

What comes next?

Over the course of the coming year, the Mars 2020 engineering team will continue its study of the feasibility of the highly ranked landing sites. At the same time, the science team will dig deeply into what is known about each site, seeking to identify the sites that are best suited to meet the mission's science goals. I expect that advocates for specific sites will also continue doing their homework to make the strongest possible case for their preferred site. And in 2017, we'll do the workshop all over again!


Seeing Quantum Motion

Consider the pendulum of a grandfather clock. If you forget to wind it, you will eventually find the pendulum at rest, unmoving. However, this simple observation is only valid at the level of classical physics—the laws and principles that appear to explain the physics of relatively large objects at human scale. Quantum mechanics, the underlying physical rules that govern the fundamental behavior of matter and light at the atomic scale, states that nothing can ever be completely at rest.

For the first time, a team of Caltech researchers and collaborators has found a way to observe—and control—this quantum motion of an object that is large enough to see. Their results are published in the August 27 online issue of the journal Science.

Researchers have known for years that in classical physics, physical objects indeed can be motionless. Drop a ball into a bowl, and it will roll back and forth a few times. Eventually, however, this motion will be overcome by other forces (such as gravity and friction), and the ball will come to a stop at the bottom of the bowl.

"In the past couple of years, my group and a couple of other groups around the world have learned how to cool the motion of a small micrometer-scale object to produce this state at the bottom, or the quantum ground state," says Keith Schwab, a Caltech professor of applied physics, who led the study. "But we know that even at the quantum ground state, at zero-temperature, very small amplitude fluctuations—or noise—remain."

Because this quantum motion, or noise, is theoretically an intrinsic part of the motion of all objects, Schwab and his colleagues designed a device that would allow them to observe this noise and then manipulate it.

The micrometer-scale device consists of a flexible aluminum plate that sits atop a silicon substrate. The plate is coupled to a superconducting electrical circuit and vibrates at a rate of 3.5 million times per second. According to the laws of classical mechanics, the vibrating structure eventually will come to a complete rest if cooled to the ground state.

But that is not what Schwab and his colleagues observed when they actually cooled the device to the ground state in their experiments. Instead, the residual energy—quantum noise—remained.

"This energy is part of the quantum description of nature—you just can't get it out," says Schwab. "We all know quantum mechanics explains precisely why electrons behave weirdly. Here, we're applying quantum physics to something that is relatively big, a device that you can see under an optical microscope, and we're seeing the quantum effects in a trillion atoms instead of just one."

Because this noisy quantum motion is always present and cannot be removed, it places a fundamental limit on how precisely one can measure the position of an object.
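For context, the textbook result behind that limit: a harmonic oscillator in its quantum ground state retains a zero-point energy and a zero-point position fluctuation

    E_0 = \tfrac{1}{2}\hbar\omega, \qquad x_{\mathrm{zp}} = \sqrt{\frac{\hbar}{2m\omega}},

where m is the oscillator's mass and, for the device above, \omega = 2\pi \times 3.5\ \mathrm{MHz}. These are standard formulas rather than values reported in the paper; they show why the fluctuations of a comparatively massive object are tiny, yet never zero.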

But that limit, Schwab and his colleagues discovered, is not insurmountable. Coauthors Aashish Clerk from McGill University and Florian Marquardt from the Max Planck Institute for the Science of Light proposed a novel method to manipulate the inherent quantum noise, one expected to reduce it periodically. The technique was then implemented on the micrometer-scale mechanical device in Schwab's low-temperature laboratory at Caltech, where the researchers confirmed that the noise could indeed be reduced periodically.

"There are two main variables that describe the noise or movement," Schwab explains. "We showed that we can actually make the fluctuations of one of the variables smaller—at the expense of making the quantum fluctuations of the other variable larger. That is what's called a quantum squeezed state; we squeezed the noise down in one place, but because of the squeezing, the noise has to squirt out in other places. But as long as those more noisy places aren't where you're obtaining a measurement, it doesn't matter."

The ability to control quantum noise could one day be used to improve the precision of very sensitive measurements, such as those obtained by LIGO, the Laser Interferometer Gravitational-wave Observatory, a Caltech-and-MIT-led project searching for signs of gravitational waves, ripples in the fabric of space-time.

"We've been thinking a lot about using these methods to detect gravitational waves from pulsars—incredibly dense stars that are the mass of our sun compressed into a 10 km radius and spin at 10 to 100 times a second," Schwab says. "In the 1970s, Kip Thorne [Caltech's Richard P. Feynman Professor of Theoretical Physics, Emeritus] and others wrote papers saying that these pulsars should be emitting gravity waves that are nearly perfectly periodic, so we're thinking hard about how to use these techniques on a gram-scale object to reduce quantum noise in detectors, thus increasing the sensitivity to pick up on those gravity waves," Schwab says.

In order to do that, the current device would have to be scaled up. "Our work aims to detect quantum mechanics at bigger and bigger scales, and one day, our hope is that this will eventually start touching on something as big as gravitational waves," he says.

These results were published in an article titled, "Quantum squeezing of motion in a mechanical resonator." In addition to Schwab, Clerk, and Marquardt, other coauthors include former graduate student Emma E. Wollman (PhD '15); graduate students Chan U. Lei and Ari J. Weinstein; former postdoctoral scholar Junho Suh; and Andreas Kronwald of Friedrich-Alexander-Universität in Erlangen, Germany. The work was funded by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency, and the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center that also has support from the Gordon and Betty Moore Foundation.


Why Did Western Europe Dominate the Globe?

Although Europe represents only about 8 percent of the planet's landmass, from 1492 to 1914, Europeans conquered or colonized more than 80 percent of the entire world. Being dominated for centuries has led to lingering inequality and long-lasting effects in many formerly colonized countries, including poverty and slow economic growth. There are many possible explanations for why history played out this way, but few can explain why the West was so powerful for so long.

Caltech's Philip Hoffman, the Rea A. and Lela G. Axline Professor of Business Economics and professor of history, has a new explanation: the advancement of gunpowder technology. The Chinese invented gunpowder, but Hoffman, whose work applies economic theory to historical contexts, argues that certain political and economic circumstances allowed the Europeans to advance gunpowder technology at an unprecedented rate—allowing a relatively small number of people to quickly take over much of the rest of the globe.

Hoffman's work is published in a new book titled Why Did Europe Conquer the World? We spoke with him recently about his research interests and what led him to study this particular topic.
 

You have been on the Caltech faculty for more than 30 years. Are there any overarching themes to your work?

Over the years I've been interested in a number of different things, and this new work puts together a lot of bits of my research. I've looked at changes in technology that influence agriculture, and I've studied the development of financial markets, and in between those two, I was also studying why financial crises occur. I've also been interested in the development of tax systems. For example, how did states get the ability to impose heavy taxes? What were the politics and the political context of the economy that resulted in this ability to tax?
 

What led you to investigate the global conquests of western Europe?

It's just fascinating. In 1914, really only China, Japan, and the Ottoman Empire had escaped becoming European colonies. A thousand years ago, no one would have ever expected that result, for at that point western Europe was hopelessly backward. It was politically weak, it was poor, and the major long-distance commerce was a slave trade led by Vikings. The political dominance of western Europe was an unexpected outcome and had really big consequences, so I thought: let's explain it.

 

Many theories purport to explain how the West became dominant. For example, that Europe became industrialized more quickly and therefore became wealthier than the rest of the world. Or, that when Europeans began to travel the world, people in other countries did not have the immunity to fight off the diseases they brought with them. How is your theory different?

Yes, there are lots of conventional explanations—industrialization, for example—but on closer inspection they all fall apart. Before 1800, Europe had already taken over at least 35 percent of the world, but Britain was just beginning to industrialize. The rest of Europe at that time was really no wealthier than China, the Middle East, or South Asia. So as an explanation, industrialization doesn't work.

Another explanation, described in Jared Diamond's famous book [Guns, Germs, and Steel: The Fates of Human Societies], is disease. But something like the smallpox epidemic that ravaged Mexico when the Spanish conquistador Hernán Cortés overthrew the Aztec Empire just isn't the whole story of Cortés's victory or of Europe's successful colonization of other parts of the world. Disease can't explain, for example, the colonization of India, because people in South Asia had the same immunity to disease that the Europeans did. So that's not the answer—it's something else.

 

What made you turn to the idea of gunpowder technology as an explanation?

It started after I gave an undergraduate here a book to read about gunpowder technology, how it was invented in China and used in Japan and Southeast Asia, and how the Europeans got very good at using it, which fed into their successful conquests. I'd given it to him because the use of this technology is related to politics and fiscal systems and taxes, and as he was reading it, he noted that the book did not give the ultimate cause of why Europe in particular was so successful. That was a really great question and it got me interested.

 

What was so special about gunpowder?

Gunpowder was really important for conquering territory; it allows a small number of people to exercise a lot of influence. The technology grew to include more than just guns: armed ships, fortifications that can resist artillery, and more, and the Europeans became the best at using these things.

So, I put together an economic model of how this technology advanced to come up with what I think is the real reason why the West conquered almost everyone else. My idea incorporates the model of a contest or a tournament in which your odds of winning are higher if you spend more resources on fighting. You can think of that as being much like a baseball team that hires better players to win more games, but in this case, instead of coaches, it's political leaders, and instead of games, there are wars. And the more that the political leaders spend, the better their chances of defeating other leaders and, in the long run, of dominating the other cultures.
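A minimal way to write down such a tournament (an illustrative contest-success function, not the exact model in Hoffman's book) makes leader i's probability of victory

    P_i = \frac{x_i}{x_1 + x_2},

where x_1 and x_2 are the resources the two rival leaders spend on fighting; spending more raises the odds of winning, exactly as the baseball analogy suggests.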

 

What kinds of factors are included in this model?

One big factor that's important to the advancement of any defense technology is how much money a political leader can spend. That comes down to the political costs of raising revenue and a leader's ability to tax. In the very successful countries, the leaders could impose very heavy taxes and spend huge sums on war.

The economic model then connected that spending to changes in military technology. The spending on war gave leaders a chance to try out new weapons, new armed ships, and new tactics, and to learn from mistakes on the battlefield. The more they spent, the more chances they had to improve their military technology through trial and error while fighting wars. So more spending would not only mean greater odds of victory over an enemy, but more rapid change in military technology.
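That trial-and-error channel could be sketched as a learning-by-doing rule (again purely illustrative, not the book's equations):

    T_{t+1} = T_t + \theta\,x_t \ \text{if a war is fought in period } t, \qquad T_{t+1} = T_t \ \text{otherwise},

where T_t is the level of military technology, x_t is war spending, and \theta is the rate of learning: technology advances only through fighting, and faster when more is spent.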

If you think about it, you realize that advancements in gunpowder technology—which are important for conquest—arise where political leaders fight using that technology, where they spend huge sums on it, and where they're able to share the resulting advances in that technology. For example, if I am fighting you and you figure out a better way to build an armed ship, I can imitate you. For that to happen, the countries have to be small and close to one another. And all of this describes Europe.

 

What does this mean in a modern context?

One lesson the book teaches is that actions involving war, foreign policy, and military spending can have big, long-lasting consequences: this is a lesson that policy makers should never forget. The book also reminds us that in a world where there are hostile powers, we really don't want to get rid of spending on improving military technology. Those improvements can help at times when wars are necessary—for instance, when we are fighting against enemies with whom we cannot negotiate. Such enemies existed in the past—they were fighting for glory on the battlefield or victory over an enemy of the faith—and one could argue that they pose a threat today as well.

Things are much better if the conflict concerns something that can be split up—such as money or land. Then you can bargain with your enemies to divvy up whatever you disagree about and you can have something like peace. You'll still need to back up the peace with armed forces, but you won't actually fight all that much, and that's a much better outcome.

In either case, you'll still be spending money on the military and on military research. Personally, I would much rather see expenditures devoted to infrastructure, or scientific research, or free preschool for everybody—things that would carry big economic benefits—but in this world, I don't think you can stop doing military research or spending money on the military. I wish we did live in that world, but unfortunately it's not realistic.


Artificial Leaf Harnesses Sunlight for Efficient Fuel Production

The difficulty of generating and storing renewable energy, such as solar or wind power, is a key barrier to a clean-energy economy. When the Joint Center for Artificial Photosynthesis (JCAP) was established at Caltech and its partnering institutions in 2010, the U.S. Department of Energy (DOE) Energy Innovation Hub had one main goal: a cost-effective method of producing fuels using only sunlight, water, and carbon dioxide, mimicking the natural process of photosynthesis in plants and storing energy in the form of chemical fuels for use on demand. Over the past five years, researchers at JCAP have made major advances toward this goal, and they now report the development of the first complete, efficient, safe, integrated solar-driven system for splitting water to create hydrogen fuels.

"This result was a stretch project milestone for the entire five years of JCAP as a whole, and not only have we achieved this goal, we also achieved it on time and on budget," says Caltech's Nate Lewis, George L. Argyros Professor and professor of chemistry, and the JCAP scientific director.

The new solar fuel generation system, or artificial leaf, is described in the August 27 online issue of the journal Energy & Environmental Science. The work was done by researchers in the laboratories of Lewis and Harry Atwater, director of JCAP and Howard Hughes Professor of Applied Physics and Materials Science.

"This accomplishment drew on the knowledge, insights and capabilities of JCAP, which illustrates what can be achieved in a Hub-scale effort by an integrated team," Atwater says. "The device reported here grew out of a multi-year, large-scale effort to define the design and materials components needed for an integrated solar fuels generator."


Solar Fuels Prototype in Operation
A fully integrated photoelectrochemical device performing unassisted solar water splitting for the production of hydrogen fuel. Credit: Erik Verlage and Chengxiang Xiang/Caltech

The new system consists of three main components: two electrodes—one photoanode and one photocathode—and a membrane. The photoanode uses sunlight to oxidize water molecules, generating protons and electrons as well as oxygen gas. The photocathode recombines the protons and electrons to form hydrogen gas. A key part of the JCAP design is the plastic membrane, which keeps the oxygen and hydrogen gases separate. If the two gases are allowed to mix and are accidentally ignited, an explosion can occur; the membrane lets the hydrogen fuel be separately collected under pressure and safely pushed into a pipeline.
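The two electrodes thus carry out the standard water-splitting half-reactions, which the paragraph above describes in words:

    \text{photoanode:}\quad 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-
    \text{photocathode:}\quad 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2}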

Semiconductors such as silicon or gallium arsenide absorb light efficiently and are therefore used in solar panels. However, these materials also oxidize (or rust) on the surface when exposed to water, so they cannot be used to generate fuel directly. A major advance that allowed the integrated system to be developed was previous work in Lewis's laboratory, which showed that adding a nanometers-thick layer of titanium dioxide (TiO2)—a material found in white paint and many toothpastes and sunscreens—onto the electrodes could prevent them from corroding while still allowing light and electrons to pass through. The new complete solar fuel generation system developed by Lewis and colleagues uses such a 62.5-nanometer-thick TiO2 layer to effectively prevent corrosion and improve the stability of a gallium arsenide–based photoelectrode.

Another key advance is the use of active, inexpensive catalysts for fuel production. The photoanode requires a catalyst to drive the essential water-splitting reaction. Rare and expensive metals such as platinum can serve as effective catalysts, but in its work the team discovered that it could create a much cheaper, active catalyst by adding a 2-nanometer-thick layer of nickel to the surface of the TiO2. This catalyst is among the most active known catalysts for splitting water molecules into oxygen, protons, and electrons and is a key to the high efficiency displayed by the device.

The photoanode was grown onto a photocathode, which also contains a highly active, inexpensive, nickel-molybdenum catalyst, to create a fully integrated single material that serves as a complete solar-driven water-splitting system.

A critical component that contributes to the efficiency and safety of the new system is the special plastic membrane that separates the gases and prevents the possibility of an explosion, while still allowing the ions to flow seamlessly to complete the electrical circuit in the cell. All of the components are stable under the same conditions and work together to produce a high-performance, fully integrated system. The demonstration system is approximately one square centimeter in area, converts 10 percent of the energy in sunlight into stored energy in the chemical fuel, and can operate for more than 40 hours continuously.
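For reference, the 10 percent figure corresponds to the conventional solar-to-hydrogen efficiency (a standard definition, not spelled out in the article):

    \eta_{\mathrm{STH}} = \frac{j_{\mathrm{op}} \times 1.23\ \mathrm{V}}{P_{\mathrm{in}}},

where j_{\mathrm{op}} is the operating current density, 1.23 V is the thermodynamic potential for water splitting, and P_{\mathrm{in}} is the incident solar power, about 100 mW/cm^2 in standard sunlight; 10 percent efficiency thus implies an operating current density of roughly 8 mA/cm^2.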

"This new system shatters all of the combined safety, performance, and stability records for artificial leaf technology by factors of 5 to 10 or more ," Lewis says.

"Our work shows that it is indeed possible to produce fuels from sunlight safely and efficiently in an integrated system with inexpensive components," Lewis adds, "Of course, we still have work to do to extend the lifetime of the system and to develop methods for cost-effectively manufacturing full systems, both of which are in progress."

Because the work assembled various components that were developed by multiple teams within JCAP, coauthor Chengxiang Xiang, who is co-leader of the JCAP prototyping and scale-up project, says that the successful end result was a collaborative effort. "JCAP's research and development in device design, simulation, and materials discovery and integration all funneled into the demonstration of this new device," Xiang says.

These results are published in a paper titled "A monolithically integrated, intrinsically safe, 10% efficient, solar-driven water-splitting system based on active, stable earth-abundant electrocatalysts in conjunction with tandem III-V light absorbers protected by amorphous TiO2 films." In addition to Lewis, Atwater, and Xiang, other Caltech coauthors include graduate student Erik Verlage, postdoctoral scholars Shu Hu and Ke Sun, material processing and integration research engineer Rui Liu, and JCAP mechanical engineer Ryan Jones. Funding was provided by the Office of Science at the U.S. Department of Energy, and the Gordon and Betty Moore Foundation.


Caltech Chemists Solve Major Piece of Cellular Mystery

Team determines the architecture of a second subcomplex of the nuclear pore complex

Not just anything is allowed to enter the nucleus, the heart of eukaryotic cells where, among other things, genetic information is stored. A double membrane, called the nuclear envelope, serves as a wall, protecting the contents of the nucleus. Any molecules trying to enter or exit the nucleus must do so via a cellular gatekeeper known as the nuclear pore complex (NPC), or pore, that exists within the envelope.

How can the NPC be such an effective gatekeeper—preventing much from entering the nucleus while helping to shuttle certain molecules across the nuclear envelope? Scientists have been trying to figure that out for decades, at least in part because the NPC is targeted by a number of diseases, including some aggressive forms of leukemia and nervous system disorders such as a hereditary form of Lou Gehrig's disease. Now a team led by André Hoelz, assistant professor of biochemistry at Caltech, has solved a crucial piece of the puzzle.

In February of this year, Hoelz and his colleagues published a paper describing the atomic structure of the NPC's coat nucleoporin complex, a subcomplex that forms what they now call the outer rings (see illustration). Building on that work, the team has now solved the architecture of the pore's inner ring, a subcomplex that is central to the NPC's ability to serve as a barrier and transport facilitator. To determine that architecture—that is, how the ring's proteins interact with one another—the biochemists built up the complex in a test tube and then systematically dissected it to understand the individual interactions between components. Then they validated that this is actually how it works in vivo, in a species of fungus.

For more than a decade, other researchers have suggested that the inner ring is highly flexible and expands to allow large macromolecules to pass through. "People have proposed some complicated models to explain how this might happen," says Hoelz. But now he and his colleagues have shown that these models are incorrect and that these dilations simply do not occur.

"Using an interdisciplinary approach, we solved the architecture of this subcomplex and showed that it cannot change shape significantly," says Hoelz. "It is a relatively rigid scaffold that is incorporated into the pore and basically just sits as a decoration, like pom-poms on a bicycle. It cannot dilate."

The new paper appears online ahead of print on August 27 in Science Express. The four co-lead authors on the paper are Caltech postdoctoral scholars Tobias Stuwe, Christopher J. Bley, and Karsten Thierbach, and graduate student Stefan Petrovic.


Crystal Structure of Fungal Channel Nucleoporin Complex
This video features a rotating three-dimensional crystal structure of the fungal channel nucleoporin complex bound to the adaptor nucleoporin Nic96. This interaction is the complex's sole site of attachment to the rest of the inner ring of the NPC. The channel nucleoporin complex borders the central transport channel and fills it with filamentous structures (phenylalanine-glycine repeats) that form a diffusion barrier and provide docking sites for proteins that ferry molecules across the nuclear envelope. Credit: Andre Hoelz/Caltech and Science

Together, the inner and outer rings make up the symmetric core of the NPC, a structure that includes 21 different proteins. The symmetric core is so named because of its radial symmetry (the two remaining subcomplexes of the NPC are specific to either the side that faces the cell's cytoplasm or the side that faces the nucleus and are therefore not symmetric). Having previously solved the structure of the coat nucleoporin complex and located it in the outer rings, the researchers knew that the remaining components that are not membrane anchored must make up the inner ring.

They started solving the architecture by focusing on the channel nucleoporin complex, or channel, which lines the central transport channel and is made up of three proteins, accounting for about half of the inner ring. This complex produces filamentous structures that serve as docking sites for specific proteins that ferry molecules across the nuclear envelope.

The biochemists employed bacteria to make the proteins associated with the inner ring in a test tube and mixed various combinations until they built the entire subcomplex. Once they had reconstituted the inner ring subcomplex, they were able to modify it to investigate how it is held together and which of its components are critical, and to determine how the channel is attached to the rest of the pore.

Hoelz and his team found that the channel is attached at only one site. This means that it cannot stretch significantly because such shape changes require multiple attachment points. Hoelz notes that an electron microscopy study of the NPC published in 2013 by Martin Beck's group at the European Molecular Biology Laboratory (EMBL) in Heidelberg, Germany, indicated that the central channel is bigger than previously thought and wide enough to accommodate even the largest cargoes known to pass through the pore.

When the researchers introduced mutations that effectively eliminated the channel's single attachment, the complex could no longer be incorporated into the inner ring. After proving this in the test tube, they also showed this to be true in living cells.

"This whole complex is a very complicated machine to assemble. The cool thing here is that nature has found an elegant way to wait until the very end of the assembly of the nuclear pore to incorporate the channel," says Hoelz. "By incorporating the channel, you establish two things at once: you immediately form a barrier and you generate the ability for regulated transport to occur through the pore." Prior to the channel's incorporation, there is simply a hole through which macromolecules can freely pass.

Next, Hoelz and his colleagues used X-ray crystallography to determine the structure of the channel nucleoporin subcomplex bound to the adaptor nucleoporin Nic96, which is its only nuclear pore attachment site. X-ray crystallography involves shining X-rays on a crystallized sample and analyzing the pattern of rays reflected off the atoms in the crystal. Because the NPC is a large and complex molecular machine that also has many moving parts, they used an engineered antibody to essentially "superglue" many copies of the complex into place to form a nicely ordered crystalline sample. Then they analyzed hundreds of samples using Caltech's Molecular Observatory—a facility developed with support from the Gordon and Betty Moore Foundation that includes an automated X-ray beam line at the Stanford Synchrotron Radiation Laboratory that can be controlled remotely from Caltech—and the GM/CA beam line at the Advanced Photon Source at the Argonne National Laboratory. Eventually, they were able to determine the size, shape, and position of all the atoms of the channel nucleoporin subcomplex and its location within the full NPC.

"The crystal structure nailed it," Hoelz says. "There is no way that the channel is changing shape. All of that other work that, for more than 10 years, suggested it was dilating was wrong."

The researchers also solved a number of crystal structures from other parts of the NPC and determined how they interact with components of the inner ring. In doing so they demonstrated that one such interaction is critical for positioning the channel in the center of the inner ring. They found that exact positioning is needed for the proper export from the nucleus of mRNA and components of ribosomes, the cell's protein-making complexes, rendering it critical in the flow of genetic information from DNA to mRNA to protein.

Hoelz adds that now that the architectures of the inner and outer rings of the NPC are known, getting an atomic structure of the entire symmetric core is "a sprint to the summit."

"When I started at Caltech, I thought it might take another 10, 20 years to do this," he says. "In the end, we have really only been working on this for four and a half years, and the thing is basically tackled. I want to emphasize that this kind of work is not doable everywhere. The people who worked on this are truly special, talented, and smart; and they worked day and night on this for years."

Ultimately, Hoelz says he would like to understand how the NPC works in great detail so that he might be able to generate therapies for diseases associated with the dysfunction of the complex. He also dreams of building up an entire pore in the test tube so that he can fully study it and understand what happens as it is modified in various ways. "Just as they did previously when I said that I wanted to solve the atomic structure of the nuclear pore, people will say that I'm crazy for trying to do this," he says. "But if we don't do it, it is likely that nobody else will."

The paper, "Architecture of the fungal nuclear pore inner ring complex," had a number of additional Caltech authors: Sandra Schilbach (now of the Max Planck Institute of Biophysical Chemistry), Daniel J. Mayo, Thibaud Perriches, Emily J. Rundlet, Young E. Jeon, Leslie N. Collins, Ferdinand M. Huber, and Daniel H. Lin. Additional coauthors include Marcin Paduch, Akiko Koide, Vincent Lu, Shohei Koide, and Anthony A. Kossiakoff of the University of Chicago; and Jessica Fischer and Ed Hurt of Heidelberg University.

 

 

Writer: 
Kimm Fesenmaier

After a Half Century, the Exotic Pentaquark Particle is Found

In July, scientists at the Large Hadron Collider (LHC) reported the discovery of the pentaquark, a long-sought particle first predicted to exist in the 1960s as a consequence of the theory of elementary particles and their interactions proposed by Murray Gell-Mann, Caltech's Robert Andrews Millikan Professor of Theoretical Physics, Emeritus.

In work for which he won the Nobel Prize in Physics in 1969, Gell-Mann introduced the concept of the quark—a fundamental building block of matter. Quarks come in six types, known as "flavors": up, down, top, bottom, strange, and charm. As described in his model, groups of quarks combine into composite particles called hadrons. Combining a quark and an antiquark (a quark's antimatter equivalent) creates a type of hadron called a meson, while baryons are hadrons composed of three quarks. Protons, for example, have two up quarks and one down quark, while neutrons have one up and two down quarks. Gell-Mann's scheme also allowed for more exotic forms of composite particles, including tetraquarks, made of four quarks, and the pentaquark, consisting of four quarks and an antiquark.

The pentaquark was detected at the LHC—the most powerful particle accelerator on Earth—by scientists carrying out the "beauty" experiment, or LHCb. The LHC accelerates protons around a ring almost five miles wide to nearly the speed of light, producing two proton beams that careen toward each other. A small fraction of the protons collide, creating other particles in the process. During investigations of the behavior of one such particle, an unstable three-quark object known as the bottom lambda baryon that decays quickly once formed, LHCb researchers observed unusually heavy objects, each with about 4.5 times the mass of a proton. After further analysis, the researchers concluded that the objects were pentaquarks composed of two up quarks, one down quark, one charm quark, and one anticharm quark. A paper describing the discovery has been published in the journal Physical Review Letters.
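The electric charges of these combinations work out consistently: up and charm quarks carry charge +2/3, down quarks -1/3, and each antiquark the opposite of its quark, so

    Q(uud) = \tfrac{2}{3}+\tfrac{2}{3}-\tfrac{1}{3} = +1 \ \text{(proton)}, \qquad
    Q(udd) = \tfrac{2}{3}-\tfrac{1}{3}-\tfrac{1}{3} = 0 \ \text{(neutron)},

    Q(uudc\bar{c}) = \tfrac{2}{3}+\tfrac{2}{3}-\tfrac{1}{3}+\tfrac{2}{3}-\tfrac{2}{3} = +1 \ \text{(the observed pentaquark)}.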

It is thought that pentaquarks and other exotic particles may form naturally in violent environments such as exploding stars and would have been created during the Big Bang. A better understanding of these complex arrangements of quarks could offer insight into the forces that hold together all matter as well as the earliest moments of the universe.

"This is part of a long process of discovery of particle states," said Gell-Mann in a statement released by the Santa Fe Institute, where he currently is a Distinguished Fellow. "[In the future] they may find more and more of them, made of quarks and antiquarks and various combinations."


Caltech Announces Discovery in Fundamental Physics

When the transistor was invented in 1947 at Bell Labs, few could have foreseen the future impact of the device. This fundamental development in science and engineering was critical to the invention of handheld radios, led to modern computing, and enabled technologies such as the smartphone. This is one of the values of basic research.

In a similar fashion, a branch of fundamental physics research, the study of so-called correlated electrons, focuses on interactions between the electrons in metals.

The key to understanding these interactions and the unique properties they produce—information that could lead to the development of novel materials and technologies—is to experimentally verify their presence and physically probe the interactions at microscopic scales. To this end, Caltech's Thomas F. Rosenbaum and colleagues at the University of Chicago and the Argonne National Laboratory recently used a synchrotron X-ray source to investigate the existence of instabilities in the arrangement of the electrons in metals as a function of both temperature and pressure, and to pinpoint, for the first time, how those instabilities arise. Rosenbaum, professor of physics and holder of the Sonja and William Davidow Presidential Chair, is the corresponding author on the paper that was published on July 27, 2015, in the journal Nature Physics.

"We spent over 10 years developing the instrumentation to perform these studies," says Yejun Feng of Argonne National Laboratory, a coauthor of the paper. "We now have a very unique capability that's due to the long-term relationship between Dr. Rosenbaum and the facilities at the Argonne National Laboratory."

Within atoms, electrons are organized into orbital shells and subshells. Although they are often depicted as physical entities, orbitals actually represent probability distributions—regions of space where electrons have a certain likelihood of being found in a particular element at a particular energy. The characteristic electron configuration of a given element explains that element's peculiar properties.

The work in correlated electrons looks at a subset of electrons. Metals, as an example, have an unfilled outermost orbital, and electrons are free to move from atom to atom. Thus, metals are good electrical conductors. When metal atoms are tightly packed into lattices (or crystals), these electrons mingle together into a "sea" of electrons. The metallic element mercury is liquid at room temperature and shows very little resistance to electric current, both properties due in part to its electron configuration. At 4 degrees above absolute zero (about -452 degrees Fahrenheit), mercury's electron arrangement and other properties create communal electrons that show no resistance to electric current, a state known as superconductivity.

Mercury's superconductivity and similar phenomena are due to the existence of many pairs of correlated electrons. In superconducting states, correlated electrons pair to form an elastic, collective state through an excitation in the crystal lattice known as a phonon (specifically, a periodic, collective excitation of the atoms). The electrons are then able to move cooperatively in the elastic state through a material without energy loss.

Electrons in crystals can interact in many ways with the periodic structure of the underlying atoms. Sometimes the electrons modulate themselves periodically in space. The question then arises as to whether this "charge order" derives from the interactions of the electrons with the atoms, a theory first proposed more than 60 years ago, or solely from interactions among the sea of electrons themselves. This question was the focus of the Nature Physics study. Electrons also behave as microscopic magnets and can demonstrate "spin order," which raises similar questions about the origin of the local magnetism.

To see where the charge order arises, the researchers turned to the Advanced Photon Source at Argonne. The Photon Source is a synchrotron (a relative of the cyclotron, commonly known as an "atom-smasher"). These machines generate intense X-ray beams that can be used for X-ray diffraction studies. In X-ray diffraction, the patterns of scattered X-rays are used to provide information about repeating structures with wavelengths at the atomic scale.
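The underlying geometry is Bragg's law: X-rays of wavelength \lambda scattered from lattice planes a distance d apart interfere constructively when

    n\lambda = 2d\sin\theta

for integer n, so the angles \theta at which diffraction peaks appear reveal the repeat distances in the crystal—including any new periodicity that charge order imposes on top of the atomic lattice.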

In the experiment, the researchers used the X-ray beams to investigate charge-order effects in two metals, chromium and niobium diselenide, at pressures ranging from 0 (a vacuum) to 100 kilobar (100,000 times normal atmospheric pressure) and at temperatures ranging from 3 to 300 K (or -454 to 80 degrees Fahrenheit). Niobium diselenide was selected because it has a high degree of charge order, while chromium, in contrast, has a high degree of spin order. 

The researchers found that there is a simple correlation between pressure and how the communal electrons organize themselves within the crystal. Materials with completely different types of crystal structures all behave similarly. "These sorts of charge- and spin-order phenomena have been known for a long time, but their underlying mechanisms have not been understood until now," says Rosenbaum.

Paper coauthors Jasper van Wezel, formerly of Argonne National Laboratory and presently of the Institute for Theoretical Physics at the University of Amsterdam, and Peter Littlewood, a professor at the University of Chicago and the director of Argonne National Laboratory, helped to provide a new theoretical perspective to explain the experimental results.

Rosenbaum and colleagues point out that there are no immediate practical applications of the results. However, he notes, "This work should have applicability to new materials as well as to the kind of interactions that are useful to create magnetic states that are often the antecedents of superconductors."

"The attraction of this sort of research is to ask fundamental questions that are ubiquitous in nature," says Rosenbaum. "I think it is very much a Caltech tradition to try to develop new tools that can interrogate materials in ways that illuminate the fundamental aspects of the problem." He adds, "There is real power in being able to have general microscopic insights to develop the most powerful breakthroughs."

The coauthors on the paper, titled "Itinerant density wave instabilities at classical and quantum critical points," are Yejun Feng and Peter Littlewood of the Argonne National Laboratory, Jasper van Wezel of the University of Amsterdam, Daniel M. Silevitch and Jiyang Wang of the University of Chicago, and Felix Flicker of the University of Bristol. Work performed at the Argonne National Laboratory was supported by the U.S. Department of Energy. Work performed at the University of Chicago was funded by the National Science Foundation. Additional support was received from the Netherlands Organization for Scientific Research.


Caltech-Led Team Looks in Detail at the April 2015 Earthquake in Nepal

For more than 20 years, Caltech geologist Jean-Philippe Avouac has collaborated with the Department of Mines and Geology of Nepal to study the Himalayas—the most active, above-water mountain range on Earth—to learn more about the processes that build mountains and trigger earthquakes. Over that period, he and his colleagues have installed a network of GPS stations in Nepal that allows them to monitor the way Earth's crust moves during and in between earthquakes. So when he heard on April 25 that a magnitude 7.8 earthquake had struck near Gorkha, Nepal, not far from Kathmandu, he thought he knew what to expect—utter devastation throughout Kathmandu and a death toll in the hundreds of thousands.

"At first when I saw the news trickling in from Kathmandu, I thought there was a problem of communication, that we weren't hearing the full extent of the damage," says Avouac, Caltech's Earle C. Anthony Professor of Geology. "As it turns out, there was little damage to the regular dwellings, and thankfully, as a result, there were far fewer deaths than I originally anticipated."

Using data from the GPS stations, an accelerometer that measures ground motion in Kathmandu, data from seismological stations around the world, and radar images collected by orbiting satellites, an international team of scientists led by Caltech has pieced together the first complete account of what physically happened during the Gorkha earthquake—a picture that explains how the large earthquake wound up leaving the majority of low-story buildings unscathed while devastating some treasured taller structures.

The findings are described in two papers that now appear online. The first, in the journal Nature Geoscience, is based on an analysis of seismological records collected more than 1,000 kilometers from the epicenter and places the event in the context of what scientists knew of the seismic setting near Gorkha before the earthquake. The second paper, appearing in Science Express, goes into finer detail about the rupture process during the April 25 earthquake and how it shook the ground in Kathmandu.


[Video: Build Up and Release of Strain on Himalaya Megathrust]

In the first study, the researchers show that the earthquake occurred on the Main Himalayan Thrust (MHT), the main megathrust fault along which northern India is pushing beneath Eurasia at a rate of about two centimeters per year, driving the Himalayas upward. Based on GPS measurements, scientists know that a large portion of this fault is "locked." Large earthquakes typically release stress on such locked faults—as the lower tectonic plate (here, the Indian plate) pulls the upper plate (here, the Eurasian plate) downward, strain builds in these locked sections until the upper plate breaks free, releasing strain and producing an earthquake. There are areas along the fault in western Nepal that are known to be locked and have not experienced a major earthquake since a big one (larger than magnitude 8.5) in 1505. But the Gorkha earthquake ruptured only a small fraction of the locked zone, so there is still the potential for the locked portion to produce a large earthquake.
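
A back-of-envelope calculation (ours, not taken from the papers, and assuming the western section has been fully locked) shows why that part of the fault worries seismologists: at the measured convergence rate, five centuries without a rupture implies on the order of ten meters of stored slip.

    # Rough slip-deficit estimate for the locked zone in western Nepal.
    # Assumes full locking since 1505 at the ~2 cm/yr convergence rate.
    convergence_m_per_yr = 0.02          # ~2 cm/yr, from GPS
    years_since_rupture = 2015 - 1505    # last great earthquake in 1505
    deficit_m = convergence_m_per_yr * years_since_rupture
    print(f"stored slip deficit: ~{deficit_m:.0f} m")   # ~10 m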

"The Gorkha earthquake didn't do the job of transferring deformation all the way to the front of the Himalaya," says Avouac. "So the Himalaya could certainly generate larger earthquakes in the future, but we have no idea when."

The epicenter of the April 25 event was located in the Gorkha District of Nepal, 75 kilometers to the west-northwest of Kathmandu. From there, the rupture propagated eastward at a rate of about 2.8 kilometers per second, causing slip in the north-south direction—a progression that the researchers describe as "unzipping" a section of the locked fault.

"With the geological context in Nepal, this is a place where we expect big earthquakes. We also knew, based on GPS measurements of the way the plates have moved over the last two decades, how 'stuck' this particular fault was, so this earthquake was not a surprise," says Jean Paul Ampuero, assistant professor of seismology at Caltech and coauthor on the Nature Geoscience paper. "But with every earthquake there are always surprises."


[Video: Propagation of April 2015 Mw 7.8 Gorkha Earthquake]

In this case, one of the surprises was that the quake did not rupture all the way to the surface. Records of past earthquakes on the same fault—including a powerful one (possibly as strong as magnitude 8.4) that shook Kathmandu in 1934—indicate that ruptures have previously reached the surface. But Avouac, Ampuero, and their colleagues used satellite Synthetic Aperture Radar data and a technique called back projection that takes advantage of the dense arrays of seismic stations in the United States, Europe, and Australia to track the progression of the earthquake, and they found that the rupture was quite contained at depth: the high-frequency waves were largely produced in the lower section of the rupture, about 15 kilometers down.
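
Back projection is conceptually simple: waveforms recorded across a distant array are time-shifted by the travel time predicted from each point on a grid of candidate source locations and then summed, and grid points where the shifted records add coherently mark where high-frequency energy was radiated. The sketch below illustrates the idea on synthetic data; the station geometry, wave speed, and waveforms are all hypothetical, and real implementations operate on filtered broadband array data.

    # A minimal back-projection sketch on synthetic data (all values hypothetical).
    import numpy as np

    fs = 20.0                               # sample rate (Hz)
    t = np.arange(0.0, 150.0, 1.0 / fs)     # recording window (s)
    v_p = 8.0                               # assumed uniform P-wave speed (km/s)
    t_origin = 20.0                         # assumed origin time (s)

    # Hypothetical station positions (km) and a hypothetical source at (40, 10)
    stations = np.array([[300.0, 400.0], [-350.0, 280.0], [100.0, -500.0]])
    true_src = np.array([40.0, 10.0])

    def wavelet(t, t0):
        """A short arrival pulse centered on time t0."""
        return np.exp(-((t - t0) ** 2) / 0.5) * np.sin(2.0 * np.pi * (t - t0))

    # Synthetic records: one pulse per station at its predicted travel time
    records = np.array([
        wavelet(t, t_origin + np.linalg.norm(sta - true_src) / v_p)
        for sta in stations
    ])

    # Back projection: undo each station's travel time for every grid point
    # and stack; the stack is coherent only near the true source.
    xs = np.arange(0.0, 82.0, 2.0)
    ys = np.arange(-20.0, 42.0, 2.0)
    power = np.zeros((len(xs), len(ys)))
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            stack = np.zeros_like(t)
            for sta, rec in zip(stations, records):
                delay = np.linalg.norm(sta - np.array([x, y])) / v_p
                stack += np.interp(t + delay, t, rec)  # evaluate record at t + delay
            power[i, j] = np.max(np.abs(stack))

    i_best, j_best = np.unravel_index(np.argmax(power), power.shape)
    print(f"energy peaks near ({xs[i_best]:.0f}, {ys[j_best]:.0f}) km")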

"That was good news for Kathmandu," says Ampuero. "If the earthquake had broken all the way to the surface, it could have been much, much worse."

The researchers note, however, that the Gorkha earthquake did increase the stress on the adjacent portion of the fault that remains locked, closer to Kathmandu. It is unclear whether this additional stress will eventually trigger another earthquake or if that portion of the fault will "creep," a process that allows the two plates to move slowly past one another, dissipating stress. The researchers are building computer models and monitoring post-earthquake deformation of the crust to try to determine which scenario is more likely.

Another surprise from the earthquake, one that explains why many of the homes and other buildings in Kathmandu were spared, is described in the Science Express paper. Avouac and his colleagues found that for such a large-magnitude earthquake, high-frequency shaking in Kathmandu was actually relatively mild. And it is high-frequency waves, with short periods of vibration of less than one second, that tend to affect low-story buildings. The Nature Geoscience paper showed that the high-frequency waves that the quake produced came from the deeper edge of the rupture, on the northern end away from Kathmandu.

The GPS records described in the Science Express paper show that within the zone that experienced the greatest amount of slip during the earthquake—a region south of the sources of high-frequency waves and closer to Kathmandu—the onset of slip on the fault was actually very smooth. It took nearly two seconds for the slip rate to reach its maximum value of one meter per second. In general, the more abrupt the onset of slip during an earthquake, the more energetic the radiated high-frequency seismic waves. So the relatively gradual onset of slip in the Gorkha event explains why this patch, which experienced a large amount of slip, did not generate many high-frequency waves.
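
The connection between onset smoothness and high-frequency radiation can be illustrated with a toy calculation (ours, not the authors' analysis): compare the spectra of two idealized slip-rate histories, one ramping up over two seconds to the 1 m/s peak reported for Gorkha and one jumping to the peak instantly. Everything else about the two histories is hypothetical and kept identical so only the onsets differ.

    # Compare high-frequency content of smooth vs. abrupt slip onsets.
    import numpy as np

    fs = 100.0
    t = np.arange(-5.0, 15.0, 1.0 / fs)

    # Common smooth stopping phase (8-12 s) so only the onsets differ
    stop = np.where(t < 8.0, 1.0, np.where(t > 12.0, 0.0,
                    0.5 * (1 + np.cos(np.pi * (t - 8.0) / 4.0))))

    # Smooth onset: ~2 s ramp to the 1 m/s peak slip rate (as in Gorkha)
    onset_smooth = 0.5 * (1 - np.cos(np.pi * t / 2.0))
    onset_smooth[t < 0] = 0.0
    onset_smooth[t > 2] = 1.0

    # Abrupt onset: slip rate jumps to 1 m/s instantly
    onset_abrupt = (t >= 0).astype(float)

    for name, onset in [("smooth", onset_smooth), ("abrupt", onset_abrupt)]:
        stf = onset * stop                       # slip-rate history (m/s)
        spec = np.abs(np.fft.rfft(stf))
        freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
        hf = spec[freqs > 1.0].sum() / spec.sum()  # share above 1 Hz
        print(f"{name} onset: {100 * hf:.2f}% of spectral amplitude above 1 Hz")

Running this shows the abrupt onset radiating several times more of its spectral amplitude above 1 Hz, the band that endangers low-story buildings.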

"It would be good news if the smooth onset of slip, and hence the limited induced shaking, were a systematic property of the Himalayan megathrust fault, or of megathrust faults in general." says Avouac. "Based on observations from this and other megathrust earthquakes, this is a possibility."

In contrast to what they saw with high-frequency waves, the researchers found that the earthquake produced an unexpectedly large amount of low-frequency waves with longer periods of about five seconds. This longer-period shaking was responsible for the collapse of taller structures in Kathmandu, such as the Dharahara Tower, a 60-meter-high tower that survived larger earthquakes in 1833 and 1934 but collapsed completely during the Gorkha quake.

To understand this, consider plucking the strings of a guitar. Each string resonates at a certain natural frequency, or pitch, depending on the length, composition, and tension of the string. Likewise, buildings and other structures have a natural pitch or frequency of shaking at which they resonate; in general, the taller the building, the longer the period at which it resonates. If a strong earthquake causes the ground to shake with a frequency that matches a building's pitch, the shaking will be amplified within the building, and the structure will likely collapse.
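
To put rough numbers on that intuition, earthquake engineers often use a rule of thumb that a building's fundamental period is about 0.1 second per story. This sketch is illustrative only and is not from the papers; slender towers such as Dharahara can resonate at longer periods than their height alone suggests.

    # Rule-of-thumb fundamental period: roughly 0.1 s per story.
    def natural_period_s(stories: int) -> float:
        return 0.1 * stories

    for stories in (2, 5, 20, 50):
        print(f"{stories:>2}-story building: ~{natural_period_s(stories):.1f} s")
    # A 1-2 story dwelling responds to short-period (<1 s) shaking, which
    # was mild in Kathmandu; structures resonating at periods of several
    # seconds match the basin's ~5 s ringing described below and shake hardest.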

Turning to the GPS records from two of Avouac's stations in the Kathmandu Valley, the researchers found that the effect of the low-frequency waves was amplified by the geological context of the Kathmandu basin. The basin is an ancient lakebed that is now filled with relatively soft sediment. For about 40 seconds after the earthquake, seismic waves from the quake were trapped within the basin and continued to reverberate, ringing like a bell with a period of about five seconds.

"That's just the right frequency to damage tall buildings like the Dharahara Tower because it's close to their natural period," Avouac explains.

In follow-up work, Domniki Asimaki, professor of mechanical and civil engineering at Caltech, is examining the details of the shaking experienced throughout the basin. On a recent trip to Kathmandu, she documented very little damage to low-story buildings throughout much of the city but identified a pattern of intense shaking experienced at the edges of the basin, on hilltops or in the foothills where sediment meets the mountains. This was largely due to the resonance of seismic waves within the basin.

Asimaki notes that Los Angeles is also built atop sedimentary deposits and is surrounded by hills and mountain ranges that would be prone to this same kind of amplified shaking during a major earthquake.

"In fact," she says, "the buildings in downtown Los Angeles are much taller than those in Kathmandu and therefore resonate with a much lower frequency. So if the same shaking had happened in L.A., a lot of the really tall buildings would have been challenged."

That points to one of the reasons it is important to understand how the land responded to the Gorkha earthquake, Avouac says. "Such studies of the site effects in Nepal provide an important opportunity to validate the codes and methods we use to predict the kind of shaking and damage that would be expected as a result of earthquakes elsewhere, such as in the Los Angeles Basin."

Additional authors on the Nature Geoscience paper, "Lower edge of locked Main Himalayan Thrust unzipped by the 2015 Gorkha earthquake," are Lingsen Meng (PhD '12) of UC Los Angeles, Shengji Wei of Nanyang Technological University in Singapore, and Teng Wang of Southern Methodist University. The lead author on the Science paper, "Slip pulse and resonance of Kathmandu basin during the 2015 Mw 7.8 Gorkha earthquake, Nepal imaged with geodesy," is John Galetzka, formerly an associate staff geodesist at Caltech and now a project manager at UNAVCO in Boulder, Colorado. Caltech research geodesist Joachim Genrich is also a coauthor, as are Susan Owen and Angelyn Moore of JPL. For a full list of authors, please see the paper.

The Nepal Geodetic Array was funded by Caltech, the Gordon and Betty Moore Foundation, and the National Science Foundation. Additional funding for the Science study came from the UK Department for International Development, the Royal Society (UK), the United Nations Development Programme, the Nepal Academy for Science and Technology, and NASA.

Writer: Kimm Fesenmaier

Caltech Astronomers Unveil a Distant Protogalaxy Connected to the Cosmic Web

A team of astronomers led by Caltech has discovered a giant swirling disk of gas 10 billion light-years away—a galaxy-in-the-making that is actively being fed cool primordial gas tracing back to the Big Bang. Using the Caltech-designed and -built Cosmic Web Imager (CWI) at Palomar Observatory, the researchers were able to image the protogalaxy and found that it is connected to a filament of the intergalactic medium, the cosmic web made of diffuse gas that crisscrosses between galaxies and extends throughout the universe.

The finding provides the strongest observational support yet for what is known as the cold-flow model of galaxy formation. That model holds that in the early universe, relatively cool gas funneled down from the cosmic web directly into galaxies, fueling rapid star formation.

A paper describing the finding and how CWI made it possible currently appears online and will be published in the August 13 print issue of the journal Nature.

"This is the first smoking-gun evidence for how galaxies form," says Christopher Martin, professor of physics at Caltech, principal investigator on CWI, and lead author of the new paper. "Even as simulations and theoretical work have increasingly stressed the importance of cold flows, observational evidence of their role in galaxy formation has been lacking."


[Video: Caltech Astronomers Discuss Findings on Galaxy Formation]

The protogalactic disk the team has identified is about 400,000 light-years across—about four times larger in diameter than our Milky Way. It is situated in a system dominated by two quasars, the closest of which, UM287, is positioned so that its emission is beamed like a flashlight, helping to illuminate the cosmic web filament feeding gas into the spiraling protogalaxy.

Last year, Sebastiano Cantalupo, then of UC Santa Cruz (now of ETH Zurich), and his colleagues published a paper, also in Nature, announcing the discovery of what they thought was a large filament next to UM287. The feature they observed was brighter than it should have been if it were indeed only a filament. It seemed that there must be something else there.

In September 2014, Martin and his colleagues, including Cantalupo, decided to follow up with observations of the system with CWI. As an integral field spectrograph, CWI allowed the team to collect images around UM287 at hundreds of different wavelengths simultaneously, revealing details of the system's composition, mass distribution, and velocity.

Martin and his colleagues focused on a range of wavelengths around an emission line in the ultraviolet known as the Lyman-alpha line. That line, a fingerprint of atomic hydrogen gas, is commonly used by astronomers as a tracer of primordial matter.

The researchers collected a series of spectral images that combined to form a multiwavelength map of a patch of sky around the two quasars. These data delineated areas where gas is emitting in the Lyman-alpha line and indicated the velocities with which this gas is moving with respect to the center of the system.
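
A minimal sketch of how velocities fall out of such data, using a hypothetical redshift (the article does not quote one; z = 2.3 is assumed here purely for illustration): the Lyman-alpha line is stretched from the ultraviolet into the visible, and small wavelength offsets across the disk translate into line-of-sight velocities via the Doppler relation.

    # Redshifted Lyman-alpha and Doppler velocities; z = 2.3 is an assumption.
    c_km_s = 299_792.458
    lya_rest_nm = 121.567                      # rest-frame Lyman-alpha

    z_system = 2.3                             # hypothetical system redshift
    lya_sys_nm = lya_rest_nm * (1 + z_system)  # ~401 nm, visible light
    print(f"systemic Lyman-alpha observed near {lya_sys_nm:.0f} nm")

    def velocity_km_s(lambda_obs_nm: float) -> float:
        """Line-of-sight velocity for a small offset from the systemic line."""
        return c_km_s * (lambda_obs_nm - lya_sys_nm) / lya_sys_nm

    print(f"{velocity_km_s(lya_sys_nm + 0.5):+.0f} km/s")  # side rotating away
    print(f"{velocity_km_s(lya_sys_nm - 0.5):+.0f} km/s")  # side rotating toward us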

"The images plainly show that there is a rotating disk—you can see that one side is moving closer to us and the other is moving away. And you can also see that there's a filament that extends beyond the disk," Martin says. Their measurements indicate that the disk is rotating at a rate of about 400 kilometers per second, somewhat faster than the Milky Way's own rate of rotation.

"The filament has a more or less constant velocity. It is basically funneling gas into the disk at a fixed rate," says Matt Matuszewski (PhD '12), an instrument scientist in Martin's group and coauthor on the paper. "Once the gas merges with the disk inside the dark-matter halo, it is pulled around by the rotating gas and dark matter in the halo." Dark matter is a form of matter that we cannot see that is believed to make up about 27 percent of the universe. Galaxies are thought to form within extended halos of dark matter.

The new observations and measurements provide the first direct confirmation of the so-called cold-flow model of galaxy formation.

Hotly debated since 2003, that model stands in contrast to the standard, older view of galaxy formation. The standard model said that when dark-matter halos collapse, they pull a great deal of normal matter in the form of gas along with them, heating it to extremely high temperatures. The gas then cools very slowly, providing a steady but slow supply of cold gas that can form stars in growing galaxies.

That model seemed fine until 1996, when Chuck Steidel, Caltech's Lee A. DuBridge Professor of Astronomy, discovered a distant population of galaxies producing stars at a very high rate only two billion years after the Big Bang. The standard model cannot provide the prodigious fuel supply for these rapidly forming galaxies.

The cold-flow model provided a potential solution. Theorists suggested that relatively cool gas, delivered by filaments of the cosmic web, streams directly into protogalaxies. There, it can quickly condense to form stars. Simulations show that as the gas falls in, it contains tremendous amounts of angular momentum, or spin, and forms extended rotating disks.

"That's a direct prediction of the cold-flow model, and this is exactly what we see—an extended disk with lots of angular momentum that we can measure," says Martin.

Phil Hopkins, assistant professor of theoretical astrophysics at Caltech, who was not involved in the study, finds the new discovery "very compelling."

"As a proof that a protogalaxy connected to the cosmic web exists and that we can detect it, this is really exciting," he says. "Of course, now you want to know a million things about what the gas falling into galaxies is actually doing, so I'm sure there is going to be more follow up."

Martin notes that the team has already identified two additional disks that appear to be receiving gas directly from filaments of the cosmic web in the same way.

Additional Caltech authors on the paper, "A giant protogalactic disk linked to the cosmic web," are principal research scientist Patrick Morrissey, research scientist James D. Neill, and instrument scientist Anna Moore from the Caltech Optical Observatories. J. Xavier Prochaska of UC Santa Cruz and former Caltech graduate student Daphne Chang, who is deceased, are also coauthors. The Cosmic Web Imager was funded by grants from the National Science Foundation and Caltech.

Writer: Kimm Fesenmaier