Advanced LIGO to Begin Operations

Advanced LIGO begins operations this week, after seven years of enhancement.

The Advanced LIGO Project, a major upgrade of the Laser Interferometer Gravitational-Wave Observatory, is completing its final preparations before scientific observations begin in mid-September. Built to observe gravitational waves—ripples in the fabric of space and time—LIGO was designed and is operated by Caltech and MIT with funding from the National Science Foundation (NSF), and consists of identical detectors in Livingston, Louisiana, and Hanford, Washington.

"The LIGO scientific and engineering team at Caltech and MIT has been leading the effort over the past seven years to build Advanced LIGO, the world's most sensitive gravitational-wave detector," says David Reitze, the executive director of the LIGO program at Caltech. Groups from the international LIGO Scientific Collaboration also contributed to the design and construction of the Advanced LIGO detector.

Gravitational waves were predicted by Albert Einstein in 1916 as a consequence of his general theory of relativity, and are emitted by violent events in the universe such as exploding stars and colliding black holes. These waves carry information not only about the objects that produce them, but also about the nature of gravity in extreme conditions that cannot be obtained by other astronomical tools.

"Experimental attempts to find gravitational waves have been on going for over 50 years, and they haven't yet been found. They're both very rare and possess signal amplitudes that are exquisitely tiny," Reitze says.

Although earlier LIGO runs revealed no detections, Advanced LIGO, also funded by the NSF, increases the sensitivity of the observatories by a factor of 10, resulting in a thousandfold increase in observable candidate objects. "The first Advanced LIGO science run will take place with interferometers that can 'see' events more than three times further than the initial LIGO detector," adds David Shoemaker, the MIT Advanced LIGO project leader, "so we'll be probing a much larger volume of space."
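The thousandfold figure follows from simple geometry: the number of observable sources scales with the volume of space surveyed, which grows as the cube of the detection range (a quick check using only the numbers above):

```latex
\frac{N_{\mathrm{advanced}}}{N_{\mathrm{initial}}}
  = \left(\frac{R_{\mathrm{advanced}}}{R_{\mathrm{initial}}}\right)^{3}
  = 10^{3} = 1000
```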

Each of the 4-kilometer-long L-shaped LIGO interferometers uses a laser beam split into two beams that travel back and forth through the long arms, within tubes from which the air has been evacuated. The beams are used to monitor the distance between precisely configured mirrors. According to Einstein's theory, the relative distance between the mirrors will change very slightly when a gravitational wave passes by.

The original configuration of LIGO was sensitive enough to detect a change in the lengths of the 4-kilometer arms by a distance one-thousandth the diameter of a proton; this is like accurately measuring the distance from Earth to the nearest star—over four light-years—to within the width of a human hair. Advanced LIGO, which will utilize the infrastructure of LIGO, is much more powerful.
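Those figures imply a strain sensitivity that can be checked on the back of an envelope (a rough sketch; the proton diameter here is an approximate textbook value, not a number from the article):

```python
# Rough strain sensitivity of initial LIGO from the figures quoted above.
arm_length = 4_000.0        # meters: the 4-kilometer interferometer arms
proton_diameter = 1.7e-15   # meters: approximate textbook value (assumption)

delta_L = proton_diameter / 1_000.0  # one-thousandth of a proton diameter
strain = delta_L / arm_length        # fractional change in arm length
print(f"delta_L = {delta_L:.1e} m, strain h = {strain:.1e}")
# -> h on the order of 1e-22, the oft-quoted LIGO sensitivity scale
```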

While earlier LIGO observing runs did not confirm the existence of gravitational waves, the influence of such waves has been measured indirectly via observations of a binary system called PSR B1913+16. The system consists of two objects, both neutron stars—the compact cores of dead stars—that orbit a common center of mass. The orbits of these two stellar bodies have been observed to be slowly contracting due to the energy that is lost to gravitational radiation. Binary star systems such as these that are in the very last stages of evolution—just before and during the inevitable collision of the two objects—are key targets of the planned observing schedule for Advanced LIGO.

"Ultimately, Advanced LIGO will be able to see 10 times as far as initial LIGO and, based on theoretical predictions, should detect many binary neutron star mergers per year," Reitze says.

The improved instruments will be able to look at the last minutes of the life of pairs of massive black holes as they spiral closer together, coalesce into one larger black hole, and then vibrate much like two soap bubbles becoming one. Advanced LIGO also will be able to pinpoint periodic signals from the many known pulsars that radiate in the range of 10 to 1,000 Hertz (frequencies that roughly correspond to low and high notes on an organ). In addition, Advanced LIGO will be used to search for the cosmic gravitational-wave background, allowing tests of theories about the development of the universe only 10^-35 seconds after the Big Bang.

"We expect it will take five years to fully optimize the detector performance and achieve our design sensitivity," Reitze says. "It has been a long road, and we're very excited to resume the hunt for gravitational waves."

Writer: Rod Pyle

Bar-Coding Technique Opens Up Studies Within Single Cells

All of the cells in a particular tissue sample are not necessarily the same—they can vary widely in terms of genetic content, composition, and function. Yet many studies and analytical techniques aimed at understanding how biological systems work at the cellular level treat all of the cells in a tissue sample as identical, averaging measurements over the entire cellular population. It is easy to see why this happens. With the cell's complex matrix of organelles, signaling chemicals, and genetic material—not to mention its minuscule scale—zooming in to differentiate what is happening within each individual cell is no trivial task.

"But being able to do single-cell analysis is crucial to understanding a lot of biological systems," says Long Cai, assistant professor of chemistry at Caltech. "This is true in brains, in biofilms, in embryos . . . you name it."

Now Cai's lab has developed a method for simultaneously imaging and identifying dozens of molecules within individual cells. This technique could offer new insight into how cells are organized and interact with each other and could eventually improve our understanding of many diseases.

The imaging technique that Cai and his colleagues have developed allows researchers not only to resolve a large number of molecules—such as messenger RNA species (mRNAs)—within a single cell, but also to systematically label each type of molecule with its own unique fluorescent "bar code" so it can be readily identified and measured without damaging the cell.

"Using this technique, there is essentially no limit on how many different types of molecules you can detect within a single cell," explains Cai.

The new method uses an innovative sequential bar-coding scheme that takes fluorescence in situ hybridization (FISH), a well-known procedure for detecting specific sequences of DNA or RNA in a sample, to the next level. Cai and his colleagues have dubbed their technique FISH Sequential Coding anALYSis (FISH SCALYS). 

FISH makes use of molecular probes—short fragments of DNA bound to fluorescent dyes, or fluorophores. These probes bind, or hybridize, to DNA or RNA with complementary sequences. When a hybridized sample is imaged with microscopy, the fluorophore lights up, pinpointing the target molecule's location.

There are a handful of fluorophores that can be used in these probes, and researchers typically use them to identify only a few different genes. For example, they will use a red dye to label all of the probes that target a specific type of mRNA. And when they image the sample, they will see a bunch of red dots in the cell. Then they will take another set of probes that target a different type of mRNA, label them with a blue fluorophore, and see glowing blue spots. And so on.

But what if a researcher wants to image more types of molecules than there are fluorophores? In the past, researchers have tried to mix the dyes together, making both red and blue probes for a particular gene, so that when both probes bind to the gene, the resulting dot would look purple. It was an imperfect solution that could still only label about 30 different types of molecules.

Cai's team realized that the same handful of fluorophores could be used in sequential rounds of hybridization to create thousands of unique fluorescent bar codes that could clearly identify many types of molecules.

"With our technique, each tagged molecule remains just one single color in each round but we build up a bar code through multiple rounds, so the colors remain distinguishable. Using additional colors and extra rounds of hybridization, you can scale up easily to identify tens of thousands of different molecules," says Cai.

The number of bar codes available is potentially immense: F^N, where F is the number of fluorophores and N is the number of rounds of hybridization. So with four dyes and eight rounds of hybridization, scientists would have more than enough bar codes (4^8 = 65,536) to cover all of the approximately 20,000 RNA species in a cell.
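A minimal sketch of that scaling (the function name is ours, for illustration):

```python
# Bar-code capacity: F fluorophores over N rounds give F**N distinct codes.
def barcode_capacity(fluorophores: int, rounds: int) -> int:
    return fluorophores ** rounds

print(barcode_capacity(4, 8))  # 65536, comfortably above ~20,000 RNA species
```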

Cai says FISH SCALYS could be used to determine molecular identities of various types of cells, including embryonic stem cells. "One subset of genes will be turned on for one type of cell and off for another," he explains. It could also provide insight into the way that diseases alter cells, allowing researchers to compare the expression differences for a large number of genes in normal tissue versus diseased tissue.

Cai has recently been funded by the McKnight Endowment Fund for Neuroscience to adapt the technique to identify different types of neurons in samples from the hippocampus, a part of the brain associated with memory and learning.

Cai is also leading a program through Caltech's Beckman Institute that is helping other researchers on campus apply the imaging method to diverse biological questions.

Cai and his team describe the technique in a Nature Methods paper titled "Single-cell in situ RNA profiling by sequential hybridization." Caltech graduate student Eric Lubeck and postdoctoral scholar Ahmet Coskun are lead authors on the paper. Additional coauthors include Timur Zhiyentayev, a former Caltech graduate student, and Mubhij Ahmad, a former research technician in the Cai lab. The work has been funded by the National Institutes of Health's Single Cell Analysis Program.

Writer: Kimm Fesenmaier

An Antibody That Can Attack HIV in New Ways

Proteins called broadly neutralizing antibodies (bNAbs) are a promising key to the prevention of infection by HIV, the virus that causes AIDS. bNAbs have been found in blood samples from some HIV patients whose immune systems can naturally control the infection. These antibodies may protect a patient's healthy cells by recognizing a protein called the envelope spike, which is present on the surface of all HIV strains, and inhibiting, or neutralizing, the effects of the virus. Now Caltech researchers have discovered that one particular bNAb may be able to recognize this signature protein even as it takes on different conformations during infection—making it easier to detect and neutralize the viruses in an infected patient.

The work, from the laboratory of Pamela Bjorkman, Centennial Professor of Biology, was published in the September 10 issue of the journal Cell.

The process of HIV infection begins when the virus comes into contact with human immune cells called T cells that carry a particular protein, CD4, on their surface. Three-part (or "trimer") proteins called envelope spikes on the surface of the virus recognize and bind to the CD4 proteins. The spikes can be in either a closed or an open conformation, going from closed to open when the spike binds to CD4. The open conformation then triggers fusion of the virus with the target cell, allowing the virus to deposit its genetic material inside the host cell, forcing it to become a factory for making new viruses that can go on to infect other cells.

The bNAbs recognize the envelope spike on the surface of HIV, and most known bNAbs only recognize the spike in the closed conformation. Although the only target of neutralizing antibodies is the envelope spike, each bNAb actually functions by recognizing just one specific target, or epitope, on this protein. Some targets allow more effective neutralization of the virus, and, therefore, some bNAbs are more effective against HIV than others. In 2014, Bjorkman and her collaborators at Rockefeller University reported initial characterization of a potent bNAb called 8ANC195 in the blood of HIV patients whose immune systems could naturally control their infections. They also discovered that this antibody could neutralize the HIV virus by targeting a different epitope than any other previously identified bNAb.

In the work described in the recent Cell paper, they investigated how 8ANC195 functions—and how its unique properties could be beneficial for HIV therapies.

"In Pamela's lab we use X-ray crystallography and electron microscopy to study protein–protein interactions on a molecular level," says Louise Scharf, a postdoctoral scholar in Bjorkman's laboratory and the first author on the paper. "We previously were able to define the binding site of this antibody on a subunit of the HIV envelope spike, so in this study we solved the three-dimensional structure of this antibody in complex with the entire spike, and showed in detail exactly how the antibody recognizes the virus."

What they found was that although most bNAbs recognize the envelope spike in its closed conformation, 8ANC195 could recognize the viral protein in both the closed conformation and a partially open conformation. "We think it's actually an advantage if the antibody can recognize these different forms," Scharf says.

The most common form of HIV infection is when a virus in the bloodstream attaches to a T cell and infects the cell. In this instance, the spikes on the free-floating virus would be predominantly in the closed conformation until they made contact with the host cell. Most bNAbs could neutralize this virus. However, HIV also can spread directly from one cell to another. In this case, because the virus already is attached to the host cell, the spike is in an open conformation. But 8ANC195 could still recognize and attach to it.

A potential medical application of this antibody is in so-called combination therapies, in which a patient is given a cocktail of several antibodies that work in different ways to fight off the virus as it rapidly changes and evolves. "Our collaborators at Rockefeller have studied this extensively in animal models, showing that if you administer a combination of these antibodies, you greatly reduce how much of the virus can escape and infect the host," Scharf says. "So 8ANC195 is one more antibody that we can use therapeutically; it targets a different epitope than other potent antibodies, and it has the advantage of being able to recognize these multiple conformations."

The idea of bNAb therapeutics might not be far from a clinical reality. Scharf says that the same collaborators at Rockefeller University are already testing bNAbs in a human clinical trial. Although the initial trial will not include 8ANC195, the antibody may be included in a combination therapy trial in the near future, Scharf says.

Furthermore, the availability of complete information about how 8ANC195 binds to the viral spike will allow Scharf, Bjorkman, and their colleagues to begin engineering the antibody to be more potent and able to recognize more strains of HIV.

"In addition to supporting the use of 8ANC195 for therapeutic applications, our structural studies of 8ANC195 have revealed an unanticipated new conformation of the HIV envelope spike that is relevant to understanding the mechanism by which HIV enters host cells and bNAbs inhibit this process," Bjorkman says.

These results were published in a journal article titled "Broadly Neutralizing Antibody 8ANC195 Recognizes Closed and Open States of HIV-1 Env." In addition to Scharf and Bjorkman, other Caltech coauthors include graduate student Haoqing Wang, research technician Han Gao, research scientist Songye Chen, and Beckman Institute resource director Alasdair W. McDowall. Funding for the work was provided by the National Institute of Allergy and Infectious Diseases of the National Institutes of Health; the Bill and Melinda Gates Foundation; and the American Cancer Society. Crystallography and electron microscopy were done at the Molecular Observatory at Caltech, supported by the Gordon and Betty Moore Foundation.


Where to Land Mars 2020: A Conversation with Ken Farley

In August 2015, more than 150 scientists interested in the exploration of Mars attended a conference at a hotel in Arcadia, California, to evaluate 21 potential landing sites for NASA's next Mars rover, a mission called Mars 2020. The design of that mission will be based on that of the Mars Science Laboratory (MSL), including the sky-crane landing system that helped put the rover, Curiosity, safely on martian soil.

Over the course of three days, the scientists heard presentations about the proposed sites and voted on the scientific merit of the locations. In the end, they arrived at a prioritized list of sites that offer the best opportunity for the mission to meet its objectives—including the search for signs of ancient life on the Red Planet and collecting and storing (or "caching") scientifically interesting samples for possible return to Earth.

We recently spoke with Ken Farley, the mission's project scientist and the W.M. Keck Foundation Professor of Geochemistry at Caltech, to talk about the workshop and how the Mars 2020 landing site selection process is shaping up.

 

Can you tell us a little bit about how these workshops help the project select a landing site?

We are using the same basic site selection process that has been used for previous Mars rovers. It involves heavy engagement from the scientific community because there are individual experts on specific sites who are not necessarily on the mission's science team. 

We put out a call for proposals to suggest specific sites, and respondents presented at the workshop. We provided presenters with a one-page template on which to indicate the characteristics of their landing site—basic facts, like what minerals are present. This became a way to distill a presentation into something that you could evaluate objectively and relatively quickly. When people flashed these rubrics up at the end of their presentations, there was some interesting peer review going on in real time.

We went through all 21 sites, talking about what was at each location. In the end, we needed to boil down the input and get a sense of which sites the community was most interested in. So we used a scorecard that tied directly to the mission objectives; there were five criteria, and attendees were able to indicate how well they felt each site met each requirement by voting either "low," "medium," or "high." Then we tallied up the votes.

 

You mentioned that the criteria on the scorecard were related to the objectives of the mission. What are those objectives?

We have four mission objectives. One is to prepare the way for human exploration of Mars. The rover will have a weather station and an instrument that converts atmospheric carbon dioxide into oxygen—it's called the in situ resource utilization (ISRU) payload. This is a way to make oxygen for both human consumption and, even more importantly, for propellant. In terms of the landing site process, this objective was not a driving factor because the ISRU and the weather station don't really care where they go.

 

And the other three objectives?

We call the three remaining objectives the "ABC" goals. A is to explore the landing site. That's a basic part of a geologic study—you look around and see what's there and try to understand the geologic processes that made it.

The B goal is to explore an "astrobiologically relevant environment," to look for rocks in habitable environments that have the ability to preserve biosignatures—evidence of past or present life—and then to look for biosignatures in those rocks. The phrase that NASA attaches to our mission is "Seeking the Signs of Life." We have a bunch of science instruments on the rover that will help us meet those objectives.

Then the C goal is to prepare a returnable cache of samples. The word "returnable" has a technical definition—the cache has to meet a bunch of criteria, and one is that it has to have enough scientific merit to return. Previous studies of what constitutes returnability have suggested we need a number of samples in the mid-30s—we use the number 37.

 

Why 37?

It may seem arbitrary, but there is a reason for this particular number. Thirty-seven is the maximum number of samples that can be packed into a circular honeycomb inside one possible design of the sample return assembly.
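The arithmetic is that of centered hexagonal packing: one central tube surrounded by concentric rings holding 6, 12, and 18 tubes gives 1 + 6 + 12 + 18 = 37. A minimal sketch, assuming that ring structure:

```python
# Centered hexagonal numbers: a central tube plus k rings, ring i holding 6*i.
def honeycomb_count(rings: int) -> int:
    return 1 + sum(6 * i for i in range(1, rings + 1))

print([honeycomb_count(k) for k in range(5)])  # [1, 7, 19, 37, 61]
```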

The huge task for us is to be able to drill that many samples. We've learned from MSL that everything takes a long time. Driving takes a long time, drilling takes a long time. We have a very specific mandate that we have to be capable of collecting 20 samples in the prime mission. Collecting at least 20 samples will motivate what we do in designing the rover.

It also has motivated a lot of the discussion of landing sites. You've got to have targets you wish to drill that are close together, and they can't be a long drive from where you land. There also has to be diversity because you don't want 15 copies of the same sample.

 

After all of those factors were considered, what was the outcome of the voting?

What came out of it was an ordered list of eight sites. One interesting thing about that list was that the sites were divided roughly equally into two kinds—those that were crater lakes with deltas and those that we would broadly call hydrothermal sites. These are locations that the community believes are most likely to have ancient life in them and preserve the evidence of it.

It's easy to understand the deltas because if you look in the terrestrial environment, a delta is an excellent place to look for organic matter. The things that are living in the water above the delta and upstream are washed into the delta when they die. Then mud packs in on top and preserves that material.

 

What is interesting about hydrothermal systems?

A hydrothermal system is in some ways very appealing but in some ways risky. These are places where rocks are hot enough to heat water to extremely high temperatures. At hydrothermal vents on Earth's sea floor, you have these strange creatures that are essentially living off chemical energy from inside the planet. And, in fact, the oldest evidence for life on Earth may have been found in hydrothermal settings. The problem is these settings are precarious; when the water gets a little too hot, everything dies.

 

What is the heat source for the hydrothermal sites on Mars?

There are two important heat sources—one is impact and the other is volcanic. A whole collection of our top sites are in a region next to a giant impact crater, and when you look at those rocks, they have chemical and mineralogical characteristics that look like hydrothermal alteration.

A leading candidate of the volcanic type is a site in Gusev Crater called the Columbia Hills site, which the Spirit rover studied. The rover came across a silica deposit. At the time, scientists didn't really know what it was, but it is now thought that the silica is actually a product of volcanic activity called sinter. The presenter for the site showed pictures from Spirit of these little bits of sinter and then showed pictures of something that looks almost exactly the same from a geothermal field in Chile. It was a pretty compelling comparison. Then he went on to show that these environments on Earth are very conducive to life and that the little silica blobs preserve biosignatures well.

So although it would be an interesting decision to invest another mission in the same location, that site was favored because it's the only place where a mineral that might contain signs of ancient life is known to exist with certainty.

 

Do these two types of sites differ just in terms of their ancient environments?

No. It turns out that you can see most of the deltas from Mars's orbit because they are pretty much the last gasp of processing of the martian surface. They date to a period about 3.6 billion years ago when the planet transitioned from a warm, wet period to basically being desiccated. Some of the hydrothermal sites may have rocks that are in the 4-billion-year-old range. That age difference may not sound like much, but in terms of an evolving planet that is dying, it raises interesting questions. If you want to allow the maximum amount of time for life to have evolved, maybe you choose a delta site. On the other hand, you might say, "Mars is dying at that point," and you want to try to get samples that include a record from an earlier, more equable period.

Since the community is divided roughly evenly between these two types of sites, one of the important questions we will have to wrestle with until the next workshop (in early 2017) is, "Which of those kinds of sites is more promising?" We need to engage a bigger community to address this question.

 

What happened to the list generated from this workshop?

This workshop was almost exclusively about science. The mission's leadership and members of the Mars 2020 Landing Site Steering Committee, appointed by NASA, then took the information from the workshop, rolled it up with information that the project had generated on things like whether the sites could be landed on, and came up with a list of eight sites in alphabetic order:

  • Columbia Hills/Gusev
  • Eberswalde
  • Holden
  • Jezero
  • Mawrth Vallis
  • NE Syrtis Major
  • Nili Fossae
  • SW Melas Chasma
     

What comes next?

Over the course of the coming year, the Mars 2020 engineering team will continue its study of the feasibility of the highly ranked landing sites. At the same time, the science team will dig deeply into what is known about each site, seeking to identify the sites that are best suited to meet the mission's science goals. I expect that advocates for specific sites will also continue doing their homework to make the strongest possible case for their preferred site. And in 2017, we'll do the workshop all over again!


Seeing Quantum Motion

Consider the pendulum of a grandfather clock. If you forget to wind it, you will eventually find the pendulum at rest, unmoving. However, this simple observation is only valid at the level of classical physics—the laws and principles that appear to explain the physics of relatively large objects at human scale. Quantum mechanics, the underlying physical rules that govern the fundamental behavior of matter and light at the atomic scale, states that nothing can ever be completely at rest.

For the first time, a team of Caltech researchers and collaborators has found a way to observe—and control—this quantum motion of an object that is large enough to see. Their results are published in the August 27 online issue of the journal Science.

Researchers have known for years that in classical physics, physical objects indeed can be motionless. Drop a ball into a bowl, and it will roll back and forth a few times. Eventually, however, this motion will be overcome by other forces (such as gravity and friction), and the ball will come to a stop at the bottom of the bowl.

"In the past couple of years, my group and a couple of other groups around the world have learned how to cool the motion of a small micrometer-scale object to produce this state at the bottom, or the quantum ground state," says Keith Schwab, a Caltech professor of applied physics, who led the study. "But we know that even at the quantum ground state, at zero-temperature, very small amplitude fluctuations—or noise—remain."

Because this quantum motion, or noise, is theoretically an intrinsic part of the motion of all objects, Schwab and his colleagues designed a device that would allow them to observe this noise and then manipulate it.

The micrometer-scale device consists of a flexible aluminum plate that sits atop a silicon substrate. The plate is coupled to a superconducting electrical circuit and vibrates at a rate of 3.5 million times per second. According to the laws of classical mechanics, the vibrating structure eventually would come to a complete rest if cooled to the ground state.

But that is not what Schwab and his colleagues observed when they actually cooled the spring to the ground state in their experiments. Instead, the residual energy—quantum noise—remained.
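The energy scale involved can be estimated from the oscillation frequency alone (a back-of-the-envelope sketch using standard physical constants; the article quotes only the 3.5 MHz figure):

```python
# Zero-point energy E0 = h*f/2 of the 3.5 MHz mechanical mode, and the
# temperature scale h*f/kB below which quantum noise dominates thermal noise.
h = 6.626e-34    # Planck constant, J*s
kB = 1.381e-23   # Boltzmann constant, J/K
f = 3.5e6        # mode frequency, Hz (3.5 million vibrations per second)

E0 = h * f / 2
T_quantum = h * f / kB
print(f"E0 = {E0:.2e} J, h*f/kB = {T_quantum * 1e6:.0f} microkelvin")
# -> roughly 1.2e-27 J; the mode must be cooled well below the
#    ~170-microkelvin scale for zero-point fluctuations to stand out.
```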

"This energy is part of the quantum description of nature—you just can't get it out," says Schwab. "We all know quantum mechanics explains precisely why electrons behave weirdly. Here, we're applying quantum physics to something that is relatively big, a device that you can see under an optical microscope, and we're seeing the quantum effects in a trillion atoms instead of just one."

Because this noisy quantum motion is always present and cannot be removed, it places a fundamental limit on how precisely one can measure the position of an object.

But that limit, Schwab and his colleagues discovered, is not insurmountable. Coauthors Aashish Clerk from McGill University and Florian Marquardt from the Max Planck Institute for the Science of Light proposed a method to manipulate the inherent quantum noise so that it could be reduced periodically, and the technique was then implemented on a micron-scale mechanical device in Schwab's low-temperature laboratory at Caltech.

"There are two main variables that describe the noise or movement," Schwab explains. "We showed that we can actually make the fluctuations of one of the variables smaller—at the expense of making the quantum fluctuations of the other variable larger. That is what's called a quantum squeezed state; we squeezed the noise down in one place, but because of the squeezing, the noise has to squirt out in other places. But as long as those more noisy places aren't where you're obtaining a measurement, it doesn't matter."

The ability to control quantum noise could one day be used to improve the precision of very sensitive measurements, such as those obtained by LIGO, the Laser Interferometer Gravitational-wave Observatory, a Caltech-and-MIT-led project searching for signs of gravitational waves, ripples in the fabric of space-time.

"We've been thinking a lot about using these methods to detect gravitational waves from pulsars—incredibly dense stars that are the mass of our sun compressed into a 10 km radius and spin at 10 to 100 times a second," Schwab says. "In the 1970s, Kip Thorne [Caltech's Richard P. Feynman Professor of Theoretical Physics, Emeritus] and others wrote papers saying that these pulsars should be emitting gravity waves that are nearly perfectly periodic, so we're thinking hard about how to use these techniques on a gram-scale object to reduce quantum noise in detectors, thus increasing the sensitivity to pick up on those gravity waves," Schwab says.

In order to do that, the current device would have to be scaled up. "Our work aims to detect quantum mechanics at bigger and bigger scales, and one day, our hope is that this will eventually start touching on something as big as gravitational waves," he says.

These results were published in an article titled, "Quantum squeezing of motion in a mechanical resonator." In addition to Schwab, Clerk, and Marquardt, other coauthors include former graduate student Emma E. Wollman (PhD '15); graduate students Chan U. Lei and Ari J. Weinstein; former postdoctoral scholar Junho Suh; and Andreas Kronwald of Friedrich-Alexander-Universität in Erlangen, Germany. The work was funded by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency, and the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center that also has support from the Gordon and Betty Moore Foundation.


Why Did Western Europe Dominate the Globe?

Although Europe represents only about 8 percent of the planet's landmass, from 1492 to 1914, Europeans conquered or colonized more than 80 percent of the entire world. Being dominated for centuries has led to lingering inequality and long-lasting effects in many formerly colonized countries, including poverty and slow economic growth. There are many possible explanations for why history played out this way, but few can explain why the West was so powerful for so long.

Caltech's Philip Hoffman, the Rea A. and Lela G. Axline Professor of Business Economics and professor of history, has a new explanation: the advancement of gunpowder technology. The Chinese invented gunpowder, but Hoffman, whose work applies economic theory to historical contexts, argues that certain political and economic circumstances allowed the Europeans to advance gunpowder technology at an unprecedented rate—allowing a relatively small number of people to quickly take over much of the rest of the globe.

Hoffman's work is published in a new book titled Why Did Europe Conquer the World? We spoke with him recently about his research interests and what led him to study this particular topic.
 

You have been on the Caltech faculty for more than 30 years. Are there any overarching themes to your work?

Over the years I've been interested in a number of different things, and this new work puts together a lot of bits of my research. I've looked at changes in technology that influence agriculture, and I've studied the development of financial markets, and in between those two, I was also studying why financial crises occur. I've also been interested in the development of tax systems. For example, how did states get the ability to impose heavy taxes? What were the politics and the political context of the economy that resulted in this ability to tax?
 

What led you to investigate the global conquests of western Europe?

It's just fascinating. In 1914, really only China, Japan, and the Ottoman Empire had escaped becoming European colonies. A thousand years ago, no one would have ever expected that result, for at that point western Europe was hopelessly backward. It was politically weak, it was poor, and the major long-distance commerce was a slave trade led by Vikings. The political dominance of western Europe was an unexpected outcome and had really big consequences, so I thought: let's explain it.

 

Many theories purport to explain how the West became dominant. For example, that Europe became industrialized more quickly and therefore became wealthier than the rest of the world. Or, that when Europeans began to travel the world, people in other countries did not have the immunity to fight off the diseases they brought with them. How is your theory different?

Yes, there are lots of conventional explanations—industrialization, for example—but on closer inspection they all fall apart. Before 1800, Europe had already taken over at least 35 percent of the world, but Britain was just beginning to industrialize. The rest of Europe at that time was really no wealthier than China, the Middle East, or South Asia. So as an explanation, industrialization doesn't work.

Another explanation, described in Jared Diamond's famous book [Guns, Germs, and Steel: The Fates of Human Societies], is disease. But something like the smallpox epidemic that ravaged Mexico when the Spanish conquistador Hernán Cortés overthrew the Aztec Empire just isn't the whole story of Cortés's victory or of Europe's successful colonization of other parts of the world. Disease can't explain, for example, the colonization of India, because people in South Asia had the same immunity to disease that the Europeans did. So that's not the answer—it's something else.

 

What made you turn to the idea of gunpowder technology as an explanation?

It started after I gave an undergraduate here a book to read about gunpowder technology, how it was invented in China and used in Japan and Southeast Asia, and how the Europeans got very good at using it, which fed into their successful conquests. I'd given it to him because the use of this technology is related to politics and fiscal systems and taxes, and as he was reading it, he noted that the book did not give the ultimate cause of why Europe in particular was so successful. That was a really great question and it got me interested.

 

What was so special about gunpowder?

Gunpowder was really important for conquering territory; it allows a small number of people to exercise a lot of influence. The technology grew to include more than just guns: armed ships, fortifications that can resist artillery, and more, and the Europeans became the best at using these things.

So, I put together an economic model of how this technology has advanced to come up with what I think is the real reason why the West conquered almost everyone else. My idea incorporates the model of a contest or a tournament where your odds of winning are higher if you spend more resources on fighting. You can think of that as being much like a baseball team that hires better players to win more games, but in this case, instead of coaches, it's political leaders and instead of games there are wars. And the more that the political leaders spend, the better their chances of defeating other leaders and, in the long run, of dominating the other cultures.

 

What kinds of factors are included in this model?

One big factor that's important to the advancement of any defense technology is how much money a political leader can spend. That comes down to the political costs of raising revenue and a leader's ability to tax. In the very successful countries, the leaders could impose very heavy taxes and spend huge sums on war.

The economic model then connected that spending to changes in military technology. The spending on war gave leaders a chance to try out new weapons, new armed ships, and new tactics, and to learn from mistakes on the battlefield. The more they spent, the more chances they had to improve their military technology through trial and error while fighting wars. So more spending would not only mean greater odds of victory over an enemy, but more rapid change in military technology.
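A toy simulation makes that feedback loop concrete (this is our illustrative sketch of a tournament with learning by doing, not Hoffman's actual model; all numbers are arbitrary):

```python
import random

# Two rulers fight repeated wars. Spending more raises the odds of winning
# each war (a contest success function), and every war fought improves a
# ruler's military technology in proportion to spending (learning by doing).
def tournament(wars: int, spend_a: float, spend_b: float, seed: int = 0):
    rng = random.Random(seed)
    tech_a = tech_b = 1.0
    wins_a = 0
    for _ in range(wars):
        effort_a, effort_b = spend_a * tech_a, spend_b * tech_b
        wins_a += rng.random() < effort_a / (effort_a + effort_b)
        tech_a *= 1 + 0.02 * spend_a  # trial-and-error improvement
        tech_b *= 1 + 0.02 * spend_b
    return wins_a, tech_a, tech_b

wins, tech_a, tech_b = tournament(wars=100, spend_a=2.0, spend_b=1.0)
print(f"A won {wins}/100 wars; tech A = {tech_a:.1f}, tech B = {tech_b:.1f}")
# The heavier spender both wins more often and pulls ahead technologically.
```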

If you think about it, you realize that advancements in gunpowder technology—which are important for conquest—arise where political leaders fight using that technology, where they spend huge sums on it, and where they're able to share the resulting advances in that technology. For example, if I am fighting you and you figure out a better way to build an armed ship, I can imitate you. For that to happen, the countries have to be small and close to one another. And all of this describes Europe.

 

What does this mean in a modern context?

One lesson the book teaches is that actions involving war, foreign policy, and military spending can have big, long-lasting consequences: this is a lesson that policy makers should never forget. The book also reminds us that in a world where there are hostile powers, we really don't want to get rid of spending on improving military technology. Those improvements can help at times when wars are necessary—for instance, when we are fighting against enemies with whom we cannot negotiate. Such enemies existed in the past—they were fighting for glory on the battlefield or victory over an enemy of the faith—and one could argue that they pose a threat today as well.

Things are much better if the conflict concerns something that can be split up—such as money or land. Then you can bargain with your enemies to divvy up whatever you disagree about and you can have something like peace. You'll still need to back up the peace with armed forces, but you won't actually fight all that much, and that's a much better outcome.

In either case, you'll still be spending money on the military and on military research. Personally, I would much rather see expenditures devoted to infrastructure, or scientific research, or free preschool for everybody—things that would carry big economic benefits—but in this world, I don't think you can stop doing military research or spending money on the military. I wish we did live in that world, but unfortunately it's not realistic.


Artificial Leaf Harnesses Sunlight for Efficient Fuel Production

The difficulty of generating and storing renewable energy, such as solar or wind power, is a key barrier to a clean-energy economy. When the Joint Center for Artificial Photosynthesis (JCAP) was established at Caltech and its partnering institutions in 2010, the U.S. Department of Energy (DOE) Energy Innovation Hub had one main goal: a cost-effective method of producing fuels using only sunlight, water, and carbon dioxide, mimicking the natural process of photosynthesis in plants and storing energy in the form of chemical fuels for use on demand. Over the past five years, researchers at JCAP have made major advances toward this goal, and they now report the development of the first complete, efficient, safe, integrated solar-driven system for splitting water to create hydrogen fuels.

"This result was a stretch project milestone for the entire five years of JCAP as a whole, and not only have we achieved this goal, we also achieved it on time and on budget," says Caltech's Nate Lewis, George L. Argyros Professor and professor of chemistry, and the JCAP scientific director.

The new solar fuel generation system, or artificial leaf, is described in the August 27 online issue of the journal Energy & Environmental Science. The work was done by researchers in the laboratories of Lewis and Harry Atwater, director of JCAP and Howard Hughes Professor of Applied Physics and Materials Science.

"This accomplishment drew on the knowledge, insights and capabilities of JCAP, which illustrates what can be achieved in a Hub-scale effort by an integrated team," Atwater says. "The device reported here grew out of a multi-year, large-scale effort to define the design and materials components needed for an integrated solar fuels generator."


[Video: Solar Fuels Prototype in Operation. A fully integrated photoelectrochemical device performing unassisted solar water splitting for the production of hydrogen fuel. Credit: Erik Verlage and Chengxiang Xiang/Caltech]

The new system consists of three main components: two electrodes—one photoanode and one photocathode—and a membrane. The photoanode uses sunlight to oxidize water molecules, generating protons and electrons as well as oxygen gas. The photocathode recombines the protons and electrons to form hydrogen gas. A key part of the JCAP design is the plastic membrane, which keeps the oxygen and hydrogen gases separate. If the two gases are allowed to mix and are accidentally ignited, an explosion can occur; the membrane lets the hydrogen fuel be separately collected under pressure and safely pushed into a pipeline.
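The chemistry at the two electrodes is the standard pair of water-splitting half-reactions (stated here for reference):

```latex
\text{photoanode:}\quad 2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-
\qquad
\text{photocathode:}\quad 4\,\mathrm{H^+} + 4\,e^- \;\longrightarrow\; 2\,\mathrm{H_2}
```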

Semiconductors such as silicon or gallium arsenide absorb light efficiently and are therefore used in solar panels. However, these materials also oxidize (or rust) on the surface when exposed to water, and so cannot be used to generate fuel directly. A major advance that allowed the integrated system to be developed was previous work in Lewis's laboratory, which showed that adding a nanometers-thick layer of titanium dioxide (TiO2)—a material found in white paint and many toothpastes and sunscreens—onto the electrodes could prevent them from corroding while still allowing light and electrons to pass through. The new complete solar fuel generation system developed by Lewis and colleagues uses such a 62.5-nanometer-thick TiO2 layer to effectively prevent corrosion and improve the stability of a gallium arsenide–based photoelectrode.

Another key advance is the use of active, inexpensive catalysts for fuel production. The photoanode requires a catalyst to drive the essential water-splitting reaction. Rare and expensive metals such as platinum can serve as effective catalysts, but in its work the team discovered that it could create a much cheaper, active catalyst by adding a 2-nanometer-thick layer of nickel to the surface of the TiO2. This catalyst is among the most active known catalysts for splitting water molecules into oxygen, protons, and electrons and is a key to the high efficiency displayed by the device.

The photoanode was grown onto a photocathode, which also contains a highly active, inexpensive, nickel-molybdenum catalyst, to create a fully integrated single material that serves as a complete solar-driven water-splitting system.

A critical component that contributes to the efficiency and safety of the new system is the special plastic membrane that separates the gases and prevents the possibility of an explosion, while still allowing the ions to flow seamlessly to complete the electrical circuit in the cell. All of the components are stable under the same conditions and work together to produce a high-performance, fully integrated system. The demonstration system is approximately one square centimeter in area, converts 10 percent of the energy in sunlight into stored energy in the chemical fuel, and can operate for more than 40 hours continuously.
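Under the standard definition of solar-to-hydrogen efficiency, and assuming 1-sun illumination of 100 mW/cm^2 (our assumption; the article does not state the test conditions), the 10 percent figure corresponds to a hydrogen-producing current of roughly 8 mA/cm^2:

```python
# Solar-to-hydrogen (STH) arithmetic for the quoted 10% efficiency.
P_in = 100.0   # incident solar power, mW/cm^2 (assumed 1-sun illumination)
eta = 0.10     # solar-to-hydrogen conversion efficiency
E_ws = 1.23    # thermodynamic water-splitting potential, volts

j = eta * P_in / E_ws            # operating current density
print(f"j = {j:.1f} mA/cm^2")    # ~8.1 mA/cm^2 driving hydrogen evolution
```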

"This new system shatters all of the combined safety, performance, and stability records for artificial leaf technology by factors of 5 to 10 or more ," Lewis says.

"Our work shows that it is indeed possible to produce fuels from sunlight safely and efficiently in an integrated system with inexpensive components," Lewis adds, "Of course, we still have work to do to extend the lifetime of the system and to develop methods for cost-effectively manufacturing full systems, both of which are in progress."

Because the work assembled various components that were developed by multiple teams within JCAP, coauthor Chengxiang Xiang, who is co-leader of the JCAP prototyping and scale-up project, says that the successful end result was a collaborative effort. "JCAP's research and development in device design, simulation, and materials discovery and integration all funneled into the demonstration of this new device," Xiang says.

These results are published in a paper titled "A monolithically integrated, intrinsically safe, 10% efficient, solar-driven water-splitting system based on active, stable earth-abundant electrocatalysts in conjunction with tandem III-V light absorbers protected by amorphous TiO2 films." In addition to Lewis, Atwater, and Xiang, other Caltech coauthors include graduate student Erik Verlage, postdoctoral scholars Shu Hu and Ke Sun, material processing and integration research engineer Rui Liu, and JCAP mechanical engineer Ryan Jones. Funding was provided by the Office of Science at the U.S. Department of Energy, and the Gordon and Betty Moore Foundation.

Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

Caltech Chemists Solve Major Piece of Cellular Mystery

Team determines the architecture of a second subcomplex of the nuclear pore complex

Not just anything is allowed to enter the nucleus, the heart of eukaryotic cells where, among other things, genetic information is stored. A double membrane, called the nuclear envelope, serves as a wall, protecting the contents of the nucleus. Any molecules trying to enter or exit the nucleus must do so via a cellular gatekeeper known as the nuclear pore complex (NPC), or pore, that exists within the envelope.

How can the NPC be such an effective gatekeeper—preventing much from entering the nucleus while helping to shuttle certain molecules across the nuclear envelope? Scientists have been trying to figure that out for decades, at least in part because the NPC is targeted by a number of diseases, including some aggressive forms of leukemia and nervous system disorders such as a hereditary form of Lou Gehrig's disease. Now a team led by André Hoelz, assistant professor of biochemistry at Caltech, has solved a crucial piece of the puzzle.

In February of this year, Hoelz and his colleagues published a paper describing the atomic structure of the NPC's coat nucleoporin complex, a subcomplex that forms what they now call the outer rings. Building on that work, the team has now solved the architecture of the pore's inner ring, a subcomplex that is central to the NPC's ability to serve as a barrier and transport facilitator. In order to determine that architecture, that is, how the ring's proteins interact with each other, the biochemists built up the complex in a test tube and then systematically dissected it to understand the individual interactions between components. Then they validated that this is actually how it works in vivo, in a species of fungus.

For more than a decade, other researchers have suggested that the inner ring is highly flexible and expands to allow large macromolecules to pass through. "People have proposed some complicated models to explain how this might happen," says Hoelz. But now he and his colleagues have shown that these models are incorrect and that these dilations simply do not occur.

"Using an interdisciplinary approach, we solved the architecture of this subcomplex and showed that it cannot change shape significantly," says Hoelz. "It is a relatively rigid scaffold that is incorporated into the pore and basically just sits as a decoration, like pom-poms on a bicycle. It cannot dilate."

The new paper appears online ahead of print on August 27 in Science Express. The four co-lead authors on the paper are Caltech postdoctoral scholars Tobias Stuwe, Christopher J. Bley, and Karsten Thierbach, and graduate student Stefan Petrovic.


[Video: Crystal Structure of Fungal Channel Nucleoporin Complex. A rotating three-dimensional crystal structure of the fungal channel nucleoporin complex bound to the adaptor nucleoporin Nic96. This interaction is the complex's sole site of attachment to the rest of the inner ring of the NPC. The channel nucleoporin complex borders the central transport channel and fills it with filamentous structures (phenylalanine-glycine repeats) that form a diffusion barrier and provide docking sites for proteins that ferry molecules across the nuclear envelope. Credit: Andre Hoelz/Caltech and Science]

Together, the inner and outer rings make up the symmetric core of the NPC, a structure that includes 21 different proteins. The symmetric core is so named because of its radial symmetry (the two remaining subcomplexes of the NPC are specific to either the side that faces the cell's cytoplasm or the side that faces the nucleus and are therefore not symmetric). Having previously solved the structure of the coat nucleoporin complex and located it in the outer rings, the researchers knew that the remaining components that are not membrane anchored must make up the inner ring.

They started solving the architecture by focusing on the channel nucleoporin complex, or channel, which lines the central transport channel and is made up of three proteins, accounting for about half of the inner ring. This complex produces filamentous structures that serve as docking sites for specific proteins that ferry molecules across the nuclear envelope.

The biochemists employed bacteria to make the proteins associated with the inner ring in a test tube and mixed various combinations until they built the entire subcomplex. Once they had reconstituted the inner ring subcomplex, they were able to modify it to investigate how it is held together and which of its components are critical, and to determine how the channel is attached to the rest of the pore.

Hoelz and his team found that the channel is attached at only one site. This means that it cannot stretch significantly because such shape changes require multiple attachment points. Hoelz notes that a new electron microscopy study of the NPC published in 2013 by Martin Beck's group at the European Molecular Biology Laboratory (EMBL) in Heidelberg, Germany, indicated that the central channel is bigger than previously thought and wide enough to accommodate even the largest cargoes known to pass through the pore.

When the researchers introduced mutations that effectively eliminated the channel's single attachment, the complex could no longer be incorporated into the inner ring. After proving this in the test tube, they also showed this to be true in living cells.

"This whole complex is a very complicated machine to assemble. The cool thing here is that nature has found an elegant way to wait until the very end of the assembly of the nuclear pore to incorporate the channel," says Hoelz. "By incorporating the channel, you establish two things at once: you immediately form a barrier and you generate the ability for regulated transport to occur through the pore." Prior to the channel's incorporation, there is simply a hole through which macromolecules can freely pass.

Next, Hoelz and his colleagues used X-ray crystallography to determine the structure of the channel nucleoporin subcomplex bound to the adaptor nucleoporin Nic96, which is its only nuclear pore attachment site. X-ray crystallography involves shining X-rays on a crystallized sample and analyzing the pattern of rays diffracted by the atoms in the crystal. Because the NPC is a large and complex molecular machine that also has many moving parts, they used an engineered antibody to essentially "superglue" many copies of the complex into place to form a nicely ordered crystalline sample. Then they analyzed hundreds of samples using Caltech's Molecular Observatory—a facility developed with support from the Gordon and Betty Moore Foundation that includes an automated X-ray beam line at the Stanford Synchrotron Radiation Lightsource that can be controlled remotely from Caltech—and the GM/CA beam line at the Advanced Photon Source at the Argonne National Laboratory. Eventually, they were able to determine the size, shape, and position of all the atoms of the channel nucleoporin subcomplex and its location within the full NPC.

"The crystal structure nailed it," Hoelz says. "There is no way that the channel is changing shape. All of that other work that, for more than 10 years, suggested it was dilating was wrong."

The researchers also solved a number of crystal structures from other parts of the NPC and determined how they interact with components of the inner ring. In doing so they demonstrated that one such interaction is critical for positioning the channel in the center of the inner ring. They found that exact positioning is needed for the proper export from the nucleus of mRNA and components of ribosomes, the cell's protein-making complexes, rendering it critical in the flow of genetic information from DNA to mRNA to protein.

Hoelz adds that now that the architectures of the inner and outer rings of the NPC are known, getting an atomic structure of the entire symmetric core is "a sprint to the summit."

"When I started at Caltech, I thought it might take another 10, 20 years to do this," he says. "In the end, we have really only been working on this for four and a half years, and the thing is basically tackled. I want to emphasize that this kind of work is not doable everywhere. The people who worked on this are truly special, talented, and smart; and they worked day and night on this for years."

Ultimately, Hoelz says he would like to understand how the NPC works in great detail so that he might be able to generate therapies for diseases associated with the dysfunction of the complex. He also dreams of building up an entire pore in the test tube so that he can fully study it and understand what happens as it is modified in various ways. "Just as they did previously when I said that I wanted to solve the atomic structure of the nuclear pore, people will say that I'm crazy for trying to do this," he says. "But if we don't do it, it is likely that nobody else will."

The paper, "Architecture of the fungal nuclear pore inner ring complex," had a number of additional Caltech authors: Sandra Schilbach (now of the Max Planck Institute of Biophysical Chemistry), Daniel J. Mayo, Thibaud Perriches, Emily J. Rundlet, Young E. Jeon, Leslie N. Collins, Ferdinand M. Huber, and Daniel H. Lin. Additional coauthors include Marcin Paduch, Akiko Koide, Vincent Lu, Shohei Koide, and Anthony A. Kossiakoff of the University of Chicago; and Jessica Fischer and Ed Hurt of Heidelberg University.

Writer: Kimm Fesenmaier

After a Half Century, the Exotic Pentaquark Particle is Found

In July, scientists at the Large Hadron Collider (LHC) reported the discovery of the pentaquark, a long-sought particle first predicted to exist in the 1960s as a consequence of the theory of elementary particles and their interactions proposed by Murray Gell-Mann, Caltech's Robert Andrews Millikan Professor of Theoretical Physics, Emeritus.

In work for which he won the Nobel Prize in Physics in 1969, Gell-Mann introduced the concept of the quark—a fundamental building block of matter. Quarks come in six types, known as "flavors": up, down, top, bottom, strange, and charm. As described in his model, groups of quarks combine into composite particles called hadrons. Combining a quark and an antiquark (a quark's antimatter equivalent) creates a type of hadron called a meson, while baryons are hadrons composed of three quarks. Protons, for example, have two up quarks and one down quark, while neutrons have one up and two down quarks. Gell-Mann's scheme also allowed for more exotic forms of composite particles, including tetraquarks, made of four quarks, and the pentaquark, consisting of four quarks and an antiquark.
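
One way to see Gell-Mann's bookkeeping at work is to add up electric charges: up-type quarks carry +2/3 of the elementary charge, down-type quarks carry -1/3, and antiquarks carry the opposite of their quark's charge, so any allowed combination must sum to a whole number. A minimal Python sketch (the helper function and the "~" antiquark notation are ours, purely for illustration):

    from fractions import Fraction

    # Electric charge of each quark flavor, in units of the elementary charge e.
    QUARK_CHARGE = {
        "u": Fraction(2, 3), "c": Fraction(2, 3), "t": Fraction(2, 3),
        "d": Fraction(-1, 3), "s": Fraction(-1, 3), "b": Fraction(-1, 3),
    }

    def hadron_charge(quarks):
        """Total charge of a hadron, with a leading '~' marking an antiquark."""
        total = Fraction(0)
        for q in quarks:
            if q.startswith("~"):
                total -= QUARK_CHARGE[q[1:]]  # antiquark: opposite charge
            else:
                total += QUARK_CHARGE[q]
        return total

    print(hadron_charge(["u", "u", "d"]))             # proton: 1
    print(hadron_charge(["u", "d", "d"]))             # neutron: 0
    print(hadron_charge(["u", "u", "d", "c", "~c"]))  # LHCb pentaquark: 1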

The pentaquark was detected at the LHC—the most powerful particle accelerator on Earth—by scientists carrying out the "beauty" experiment, or LHCb. The LHC accelerates two beams of protons, circulating in opposite directions around a ring about five miles across, to nearly the speed of light and then steers them into each other. A small fraction of the protons collide, creating other particles in the process. During investigations of the behavior of one such particle, an unstable three-quark object known as the bottom lambda baryon, which decays quickly once formed, LHCb researchers observed unusually heavy objects, each with about 4.5 times the mass of a proton. After further analysis, the researchers concluded that the objects were pentaquarks composed of two up quarks, one down quark, one charm quark, and one anticharm quark. A paper describing the discovery has been published in the journal Physical Review Letters.

It is thought that pentaquarks and other exotic particles may form naturally in violent environments such as exploding stars and would have been created during the Big Bang. A better understanding of these complex arrangements of quarks could offer insight into the forces that hold together all matter as well as the earliest moments of the universe.

"This is part of a long process of discovery of particle states," said Gell-Mann in a statement released by the Santa Fe Institute, where he currently is a Distinguished Fellow. "[In the future] they may find more and more of them, made of quarks and antiquarks and various combinations."


Caltech Announces Discovery in Fundamental Physics

When the transistor was invented in 1947 at Bell Labs, few could have foreseen the future impact of the device. This fundamental development in science and engineering was critical to the invention of handheld radios, led to modern computing, and enabled technologies such as the smartphone. That trajectory, from fundamental discovery to everyday technology, illustrates the value of basic research.

In a similar spirit, a branch of fundamental physics research known as the study of correlated electrons focuses on the interactions between electrons in metals.

The key to understanding these interactions and the unique properties they produce—information that could lead to the development of novel materials and technologies—is to experimentally verify their presence and physically probe them at microscopic scales. To this end, Caltech's Thomas F. Rosenbaum and colleagues at the University of Chicago and the Argonne National Laboratory recently used a synchrotron X-ray source to investigate instabilities in the arrangement of the electrons in metals as a function of both temperature and pressure, and to pinpoint, for the first time, how those instabilities arise. Rosenbaum, professor of physics and holder of the Sonja and William Davidow Presidential Chair, is the corresponding author on the paper, which was published on July 27, 2015, in the journal Nature Physics.

"We spent over 10 years developing the instrumentation to perform these studies," says Yejun Feng of Argonne National Laboratory, a coauthor of the paper. "We now have a very unique capability that's due to the long-term relationship between Dr. Rosenbaum and the facilities at the Argonne National Laboratory."

Within atoms, electrons are organized into orbital shells and subshells. Although they are often depicted as physical entities, orbitals actually represent probability distributions—regions of space where electrons have a certain likelihood of being found in a particular element at a particular energy. The characteristic electron configuration of a given element explains that element's distinctive properties.

The work on correlated electrons looks at a subset of these electrons. Metals, for example, have an unfilled outermost orbital, so their electrons are free to move from atom to atom; this is why metals are good electrical conductors. When metal atoms are tightly packed into lattices (or crystals), these mobile electrons mingle together into a "sea" of electrons. The metallic element mercury is liquid at room temperature and offers very little resistance to electric current, properties that stem in part from its electron configuration. At 4 degrees above absolute zero (about -452 degrees Fahrenheit), mercury's electrons become correlated and flow with no resistance at all, a state known as superconductivity.

Mercury's superconductivity and similar phenomena are due to the existence of many pairs of correlated electrons. In superconducting states, electrons pair up through an excitation of the crystal lattice known as a phonon (a periodic, collective vibration of the atoms), forming an elastic, collective state in which they can move cooperatively through the material without energy loss.

Electrons in crystals can interact in many ways with the periodic structure of the underlying atoms. Sometimes the electron density itself becomes modulated periodically in space. The question then arises as to whether this "charge order" derives from the interactions of the electrons with the atoms, a theory first proposed more than 60 years ago, or solely from interactions among the sea of electrons themselves. This question was the focus of the Nature Physics study. Electrons also behave as microscopic magnets and can exhibit "spin order," which raises similar questions about the origin of the local magnetism.
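
Concretely, charge order means that the electron density develops a small periodic ripple on top of its uniform value, and in a diffraction measurement that ripple announces itself as extra "satellite" peaks at the ordering wavevector. A minimal Python sketch with invented numbers (the amplitude and period are illustrative, not measured parameters of any material in the study):

    import numpy as np

    rho0 = 1.0        # uniform electron density (arbitrary units)
    amplitude = 0.05  # depth of the periodic modulation (illustrative)
    period = 10.0     # one ripple every 10 lattice spacings (hypothetical)
    Q = 2 * np.pi / period  # ordering wavevector of the charge order

    x = np.arange(100)                      # positions, in units of the lattice spacing
    rho = rho0 + amplitude * np.cos(Q * x)  # periodically modulated density

    # A Fourier transform of the density picks out the ordering wavevector,
    # just as diffraction picks out the satellite peaks at Q.
    spectrum = np.abs(np.fft.rfft(rho - rho0))
    k = 2 * np.pi * np.fft.rfftfreq(len(x), d=1.0)
    print(f"strongest peak at k = {k[spectrum.argmax()]:.3f}; Q = {Q:.3f}")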

To see where the charge order arises, the researchers turned to the Advanced Photon Source at Argonne. The Photon Source is a synchrotron (a relative of the cyclotron, commonly known as an "atom-smasher"). These machines generate intense X-ray beams that can be used for X-ray diffraction studies. In X-ray diffraction, the patterns of scattered X-rays reveal repeating structures whose periods are at the atomic scale.

In the experiment, the researchers used the X-ray beams to investigate charge-order effects in two metals, chromium and niobium diselenide, at pressures ranging from 0 (a vacuum) to 100 kilobar (100,000 times normal atmospheric pressure) and at temperatures ranging from 3 to 300 K (or -454 to 80 degrees Fahrenheit). Niobium diselenide was selected because it has a high degree of charge order, while chromium, in contrast, has a high degree of spin order. 
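
The unit conversions quoted above are easy to check. A quick sanity check in Python (the conversion factors are standard; the snippet itself is purely illustrative):

    def kelvin_to_fahrenheit(k):
        # Absolute zero (0 K) is -459.67 degrees Fahrenheit.
        return k * 9 / 5 - 459.67

    print(f"3 K   = {kelvin_to_fahrenheit(3):.0f} F")    # about -454 F
    print(f"300 K = {kelvin_to_fahrenheit(300):.0f} F")  # about 80 F

    # 1 bar is about 0.98692 standard atmospheres, so 100 kilobar is
    # on the order of 100,000 times normal atmospheric pressure.
    print(f"100 kbar = {100 * 1000 * 0.98692:,.0f} atm")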

The researchers found that there is a simple correlation between pressure and how the communal electrons organize themselves within the crystal. Materials with completely different types of crystal structures all behave similarly. "These sorts of charge- and spin-order phenomena have been known for a long time, but their underlying mechanisms have not been understood until now," says Rosenbaum.

Paper coauthors Jasper van Wezel, formerly of Argonne National Laboratory and presently of the Institute for Theoretical Physics at the University of Amsterdam, and Peter Littlewood, a professor at the University of Chicago and the director of Argonne National Laboratory, helped to provide a new theoretical perspective to explain the experimental results.

Rosenbaum and colleagues point out that there are no immediate practical applications of the results. However, Rosenbaum notes, "This work should have applicability to new materials as well as to the kind of interactions that are useful to create magnetic states that are often the antecedents of superconductors."

"The attraction of this sort of research is to ask fundamental questions that are ubiquitous in nature," says Rosenbaum. "I think it is very much a Caltech tradition to try to develop new tools that can interrogate materials in ways that illuminate the fundamental aspects of the problem." He adds, "There is real power in being able to have general microscopic insights to develop the most powerful breakthroughs."

The coauthors on the paper, titled "Itinerant density wave instabilities at classical and quantum critical points," are Yejun Feng and Peter Littlewood of the Argonne National Laboratory, Jasper van Wezel of the University of Amsterdam, Daniel M. Silevitch and Jiyang Wang of the University of Chicago, and Felix Flicker of the University of Bristol. Work performed at the Argonne National Laboratory was supported by the U.S. Department of Energy. Work performed at the University of Chicago was funded by the National Science Foundation. Additional support was received from the Netherlands Organization for Scientific Research.

