Seeing Quantum Motion

Consider the pendulum of a grandfather clock. If you forget to wind it, you will eventually find the pendulum at rest, unmoving. But this simple observation is valid only at the level of classical physics—the laws and principles that describe the behavior of relatively large objects at human scale. Quantum mechanics, the underlying physical rules that govern the fundamental behavior of matter and light at the atomic scale, states that nothing can ever be completely at rest.

For the first time, a team of Caltech researchers and collaborators has found a way to observe—and control—this quantum motion of an object that is large enough to see. Their results are published in the August 27 online issue of the journal Science.

Researchers have known for years that in classical physics, physical objects indeed can be motionless. Drop a ball into a bowl, and it will roll back and forth a few times. Eventually, however, friction will dissipate this motion, and the ball will come to a stop at the bottom of the bowl.

"In the past couple of years, my group and a couple of other groups around the world have learned how to cool the motion of a small micrometer-scale object to produce this state at the bottom, or the quantum ground state," says Keith Schwab, a Caltech professor of applied physics, who led the study. "But we know that even at the quantum ground state, at zero-temperature, very small amplitude fluctuations—or noise—remain."

Because this quantum motion, or noise, is theoretically an intrinsic part of the motion of all objects, Schwab and his colleagues designed a device that would allow them to observe this noise and then manipulate it.

The micrometer-scale device consists of a flexible aluminum plate that sits atop a silicon substrate. The plate is coupled to a superconducting electrical circuit and vibrates 3.5 million times per second. According to the laws of classical mechanics, the vibrating structure should eventually come to a complete rest if cooled sufficiently.
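
For a sense of scale, the quantum ground state of such a resonator still carries zero-point fluctuations of amplitude x_zp = sqrt(ħ/(2mω)). The short Python sketch below illustrates how small that residual motion is; the plate's effective mass is an invented illustrative value, not a figure from the paper.

    import math

    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    f = 3.5e6                # resonator frequency from the article, Hz
    omega = 2 * math.pi * f  # angular frequency, rad/s
    m = 1e-13                # ASSUMED effective mass of the plate, kg (illustrative only)

    # zero-point (ground-state) displacement fluctuation
    x_zp = math.sqrt(hbar / (2 * m * omega))
    print(f"zero-point motion ~ {x_zp:.1e} m")  # ~5e-15 m for these numbers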

But that is not what Schwab and his colleagues observed when they actually cooled the device to the ground state in their experiments. Instead, a residual energy—quantum noise—remained.

"This energy is part of the quantum description of nature—you just can't get it out," says Schwab. "We all know quantum mechanics explains precisely why electrons behave weirdly. Here, we're applying quantum physics to something that is relatively big, a device that you can see under an optical microscope, and we're seeing the quantum effects in a trillion atoms instead of just one."

Because this noisy quantum motion is always present and cannot be removed, it places a fundamental limit on how precisely one can measure the position of an object.

But that limit, Schwab and his colleagues discovered, is not insurmountable. Coauthors Aashish Clerk of McGill University and Florian Marquardt of the Max Planck Institute for the Science of Light proposed a method to manipulate the inherent quantum noise, predicting that it could be reduced periodically. The technique was then implemented on the micrometer-scale mechanical device in Schwab's low-temperature laboratory at Caltech.

"There are two main variables that describe the noise or movement," Schwab explains. "We showed that we can actually make the fluctuations of one of the variables smaller—at the expense of making the quantum fluctuations of the other variable larger. That is what's called a quantum squeezed state; we squeezed the noise down in one place, but because of the squeezing, the noise has to squirt out in other places. But as long as those more noisy places aren't where you're obtaining a measurement, it doesn't matter."

The ability to control quantum noise could one day be used to improve the precision of very sensitive measurements, such as those obtained by LIGO, the Laser Interferometer Gravitational-wave Observatory, a Caltech-and-MIT-led project searching for signs of gravitational waves, ripples in the fabric of space-time.

"We've been thinking a lot about using these methods to detect gravitational waves from pulsars—incredibly dense stars that are the mass of our sun compressed into a 10 km radius and spin at 10 to 100 times a second," Schwab says. "In the 1970s, Kip Thorne [Caltech's Richard P. Feynman Professor of Theoretical Physics, Emeritus] and others wrote papers saying that these pulsars should be emitting gravity waves that are nearly perfectly periodic, so we're thinking hard about how to use these techniques on a gram-scale object to reduce quantum noise in detectors, thus increasing the sensitivity to pick up on those gravity waves," Schwab says.

In order to do that, the current device would have to be scaled up. "Our work aims to detect quantum mechanics at bigger and bigger scales, and one day, our hope is that this will eventually start touching on something as big as gravitational waves," he says.

These results were published in an article titled, "Quantum squeezing of motion in a mechanical resonator." In addition to Schwab, Clerk, and Marquardt, other coauthors include former graduate student Emma E. Wollman (PhD '15); graduate students Chan U. Lei and Ari J. Weinstein; former postdoctoral scholar Junho Suh; and Andreas Kronwald of Friedrich-Alexander-Universität in Erlangen, Germany. The work was funded by the National Science Foundation (NSF), the Defense Advanced Research Projects Agency, and the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center that also has support from the Gordon and Betty Moore Foundation.


Why Did Western Europe Dominate the Globe?

Although Europe represents only about 8 percent of the planet's landmass, between 1492 and 1914 Europeans conquered or colonized more than 80 percent of the entire world. Centuries of domination have left long-lasting effects in many formerly colonized countries, including inequality, poverty, and slow economic growth. There are many possible explanations for why history played out this way, but few can explain why the West was so powerful for so long.

Caltech's Philip Hoffman, the Rea A. and Lela G. Axline Professor of Business Economics and professor of history, has a new explanation: the advancement of gunpowder technology. The Chinese invented gunpowder, but Hoffman, whose work applies economic theory to historical contexts, argues that certain political and economic circumstances allowed the Europeans to advance gunpowder technology at an unprecedented rate, enabling a relatively small number of people to quickly take over much of the rest of the globe.

Hoffman's work is published in a new book titled Why Did Europe Conquer the World? We spoke with him recently about his research interests and what led him to study this particular topic.

You have been on the Caltech faculty for more than 30 years. Are there any overarching themes to your work?

Over the years I've been interested in a number of different things, and this new work puts together a lot of bits of my research. I've looked at changes in technology that influence agriculture, and I've studied the development of financial markets, and in between those two, I was also studying why financial crises occur. I've also been interested in the development of tax systems. For example, how did states get the ability to impose heavy taxes? What were the politics and the political context of the economy that resulted in this ability to tax?

What led you to investigate the global conquests of western Europe?

It's just fascinating. In 1914, really only China, Japan, and the Ottoman Empire had escaped becoming European colonies. A thousand years ago, no one would have ever expected that result, for at that point western Europe was hopelessly backward. It was politically weak, it was poor, and the major long-distance commerce was a slave trade led by Vikings. The political dominance of western Europe was an unexpected outcome and had really big consequences, so I thought: let's explain it.

Many theories purport to explain how the West became dominant. For example, that Europe became industrialized more quickly and therefore became wealthier than the rest of the world. Or, that when Europeans began to travel the world, people in other countries did not have the immunity to fight off the diseases they brought with them. How is your theory different?

Yes, there are lots of conventional explanations—industrialization, for example—but on closer inspection they all fall apart. Before 1800, Europe had already taken over at least 35 percent of the world, but Britain was just beginning to industrialize. The rest of Europe at that time was really no wealthier than China, the Middle East, or South Asia. So as an explanation, industrialization doesn't work.

Another explanation, described in Jared Diamond's famous book [Guns, Germs, and Steel: The Fates of Human Societies], is disease. But something like the smallpox epidemic that ravaged Mexico when the Spanish conquistador Hernán Cortés overthrew the Aztec Empire just isn't the whole story of Cortés's victory or of Europe's successful colonization of other parts of the world. Disease can't explain, for example, the colonization of India, because people in South Asia had the same immunity to disease that the Europeans did. So that's not the answer—it's something else.


What made you turn to the idea of gunpowder technology as an explanation?

It started after I gave an undergraduate here a book to read about gunpowder technology, how it was invented in China and used in Japan and Southeast Asia, and how the Europeans got very good at using it, which fed into their successful conquests. I'd given it to him because the use of this technology is related to politics and fiscal systems and taxes, and as he was reading it, he noted that the book did not give the ultimate cause of why Europe in particular was so successful. That was a really great question and it got me interested.


What was so special about gunpowder?

Gunpowder was really important for conquering territory; it allows a small number of people to exercise a lot of influence. The technology grew to include more than just guns: armed ships, fortifications that can resist artillery, and more, and the Europeans became the best at using these things.

So, I put together an economic model of how this technology advanced, to get at what I think is the real reason why the West conquered almost everyone else. My idea incorporates the model of a contest or a tournament in which your odds of winning are higher if you spend more resources on fighting. You can think of it much like a baseball team that hires better players to win more games, but in this case, instead of team owners, it's political leaders, and instead of games, there are wars. And the more the political leaders spend, the better their chances of defeating other leaders and, in the long run, of dominating the other cultures.
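
A minimal sketch of the kind of contest-success function such tournament models are built on: the probability of victory rises with a leader's share of total fighting resources. The functional form and numbers here are standard illustrative choices, not taken from Hoffman's book.

    def win_probability(my_spending, rival_spending):
        """Tullock-style contest: odds of winning equal your share of total spending."""
        total = my_spending + rival_spending
        return my_spending / total if total > 0 else 0.5

    # a leader who can tax heavily and outspend a rival wins more often
    print(win_probability(100, 50))   # 0.667
    print(win_probability(100, 100))  # 0.5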


What kinds of factors are included in this model?

One big factor that's important to the advancement of any defense technology is how much money a political leader can spend. That comes down to the political costs of raising revenue and a leader's ability to tax. In the very successful countries, the leaders could impose very heavy taxes and spend huge sums on war.

The economic model then connected that spending to changes in military technology. The spending on war gave leaders a chance to try out new weapons, new armed ships, and new tactics, and to learn from mistakes on the battlefield. The more they spent, the more chances they had to improve their military technology through trial and error while fighting wars. So more spending would not only mean greater odds of victory over an enemy, but more rapid change in military technology.
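
That feedback loop (more spending means more wars fought with the technology, and hence more trial-and-error improvement) can be caricatured in a few lines. The growth rate and starting values below are invented purely for illustration.

    def advance_technology(tech, spending, years, learning_rate=0.0005):
        """Each year, military technology improves in proportion to war spending,
        a crude stand-in for learning by doing on the battlefield."""
        for _ in range(years):
            tech *= 1 + learning_rate * spending
        return tech

    # a high-spending polity pulls far ahead of a low-spending one over three centuries
    print(advance_technology(1.0, spending=10, years=300))  # ~4.5x improvement
    print(advance_technology(1.0, spending=2, years=300))   # ~1.35x improvement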

If you think about it, you realize that advancements in gunpowder technology—which are important for conquest—arise where political leaders fight using that technology, where they spend huge sums on it, and where they're able to share the resulting advances in that technology. For example, if I am fighting you and you figure out a better way to build an armed ship, I can imitate you. For that to happen, the countries have to be small and close to one another. And all of this describes Europe.


What does this mean in a modern context?

One lesson the book teaches is that actions involving war, foreign policy, and military spending can have big, long-lasting consequences: this is a lesson that policy makers should never forget. The book also reminds us that in a world where there are hostile powers, we really don't want to get rid of spending on improving military technology. Those improvements can help at times when wars are necessary—for instance, when we are fighting against enemies with whom we cannot negotiate. Such enemies existed in the past—they were fighting for glory on the battlefield or victory over an enemy of the faith—and one could argue that they pose a threat today as well.

Things are much better if the conflict concerns something that can be split up—such as money or land. Then you can bargain with your enemies to divvy up whatever you disagree about and you can have something like peace. You'll still need to back up the peace with armed forces, but you won't actually fight all that much, and that's a much better outcome.

In either case, you'll still be spending money on the military and on military research. Personally, I would much rather see expenditures devoted to infrastructure, or scientific research, or free preschool for everybody—things that would carry big economic benefits—but in this world, I don't think you can stop doing military research or spending money on the military. I wish we did live in that world, but unfortunately it's not realistic.


Artificial Leaf Harnesses Sunlight for Efficient Fuel Production

Storing renewable energy, such as solar or wind power, in a readily usable form is a key challenge on the way to a clean-energy economy. When the Joint Center for Artificial Photosynthesis (JCAP) was established at Caltech and its partnering institutions in 2010, the U.S. Department of Energy (DOE) Energy Innovation Hub had one main goal: a cost-effective method of producing fuels using only sunlight, water, and carbon dioxide, mimicking the natural process of photosynthesis in plants and storing energy in the form of chemical fuels for use on demand. Over the past five years, researchers at JCAP have made major advances toward this goal, and they now report the development of the first complete, efficient, safe, integrated solar-driven system for splitting water to create hydrogen fuels.

"This result was a stretch project milestone for the entire five years of JCAP as a whole, and not only have we achieved this goal, we also achieved it on time and on budget," says Caltech's Nate Lewis, George L. Argyros Professor and professor of chemistry, and the JCAP scientific director.

The new solar fuel generation system, or artificial leaf, is described in the August 27 online issue of the journal Energy & Environmental Science. The work was done by researchers in the laboratories of Lewis and Harry Atwater, director of JCAP and Howard Hughes Professor of Applied Physics and Materials Science.

"This accomplishment drew on the knowledge, insights and capabilities of JCAP, which illustrates what can be achieved in a Hub-scale effort by an integrated team," Atwater says. "The device reported here grew out of a multi-year, large-scale effort to define the design and materials components needed for an integrated solar fuels generator."


Solar Fuels Prototype in Operation
A fully integrated photoelectrochemical device performing unassisted solar water splitting for the production of hydrogen fuel. Credit: Erik Verlage and Chengxiang Xiang/Caltech

The new system consists of three main components: two electrodes—one photoanode and one photocathode—and a membrane. The photoanode uses sunlight to oxidize water molecules, generating protons and electrons as well as oxygen gas. The photocathode recombines the protons and electrons to form hydrogen gas. A key part of the JCAP design is the plastic membrane, which keeps the oxygen and hydrogen gases separate. If the two gases are allowed to mix and are accidentally ignited, an explosion can occur; the membrane lets the hydrogen fuel be separately collected under pressure and safely pushed into a pipeline.
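
In chemical terms, the two electrodes carry out the two textbook half-reactions of water splitting, written here in LaTeX notation (four protons and four electrons are exchanged per molecule of oxygen):

    2\,\mathrm{H_2O} \longrightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \quad \text{(photoanode: oxidation)}
    4\,\mathrm{H^+} + 4\,e^- \longrightarrow 2\,\mathrm{H_2} \quad \text{(photocathode: reduction)}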

Semiconductors such as silicon or gallium arsenide absorb light efficiently and are therefore used in solar panels. However, these materials also oxidize (or rust) on the surface when exposed to water, and so cannot be used to directly generate fuel. A major advance that allowed the integrated system to be developed was previous work in Lewis's laboratory, which showed that adding a nanometers-thick layer of titanium dioxide (TiO2)—a material found in white paint and many toothpastes and sunscreens—onto the electrodes could prevent them from corroding while still allowing light and electrons to pass through. The new complete solar fuel generation system developed by Lewis and colleagues uses such a 62.5-nanometer-thick TiO2 layer to effectively prevent corrosion and improve the stability of a gallium arsenide–based photoelectrode.

Another key advance is the use of active, inexpensive catalysts for fuel production. The photoanode requires a catalyst to drive the essential water-splitting reaction. Rare and expensive metals such as platinum can serve as effective catalysts, but in its work the team discovered that it could create a much cheaper, active catalyst by adding a 2-nanometer-thick layer of nickel to the surface of the TiO2. This catalyst is among the most active known catalysts for splitting water molecules into oxygen, protons, and electrons and is a key to the high efficiency displayed by the device.

The photoanode was grown onto a photocathode, which also contains a highly active, inexpensive, nickel-molybdenum catalyst, to create a fully integrated single material that serves as a complete solar-driven water-splitting system.

A critical component that contributes to the efficiency and safety of the new system is the special plastic membrane that separates the gases and prevents the possibility of an explosion, while still allowing the ions to flow seamlessly to complete the electrical circuit in the cell. All of the components are stable under the same conditions and work together to produce a high-performance, fully integrated system. The demonstration system is approximately one square centimeter in area, converts 10 percent of the energy in sunlight into stored energy in the chemical fuel, and can operate for more than 40 hours continuously.
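
For context, solar-to-hydrogen efficiency is conventionally computed from the operating current density, the 1.23 V thermodynamic potential of water splitting, and the incident solar power. The quick Python check below uses an assumed current density chosen to be consistent with the reported ~10 percent figure; it is not a measured value from the paper.

    # solar-to-hydrogen (STH) efficiency: (current density * 1.23 V * faradaic efficiency) / insolation
    j = 8.2e-3     # ASSUMED operating current density, A/cm^2 (illustrative)
    E_ws = 1.23    # thermodynamic water-splitting potential, V
    eta_F = 1.0    # faradaic efficiency toward hydrogen (assumed ideal)
    P_sun = 0.100  # standard 1-sun insolation, W/cm^2

    eta_STH = j * E_ws * eta_F / P_sun
    print(f"STH efficiency: {eta_STH:.1%}")  # ~10.1% for these numbers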

"This new system shatters all of the combined safety, performance, and stability records for artificial leaf technology by factors of 5 to 10 or more ," Lewis says.

"Our work shows that it is indeed possible to produce fuels from sunlight safely and efficiently in an integrated system with inexpensive components," Lewis adds, "Of course, we still have work to do to extend the lifetime of the system and to develop methods for cost-effectively manufacturing full systems, both of which are in progress."

Because the work assembled various components that were developed by multiple teams within JCAP, coauthor Chengxiang Xiang, who is co-leader of the JCAP prototyping and scale-up project, says that the successful end result was a collaborative effort. "JCAP's research and development in device design, simulation, and materials discovery and integration all funneled into the demonstration of this new device," Xiang says.

These results are published in a paper titled "A monolithically integrated, intrinsically safe, 10% efficient, solar-driven water-splitting system based on active, stable earth-abundant electrocatalysts in conjunction with tandem III-V light absorbers protected by amorphous TiO2 films." In addition to Lewis, Atwater, and Xiang, other Caltech coauthors include graduate student Erik Verlage, postdoctoral scholars Shu Hu and Ke Sun, material processing and integration research engineer Rui Liu, and JCAP mechanical engineer Ryan Jones. Funding was provided by the Office of Science at the U.S. Department of Energy, and the Gordon and Betty Moore Foundation.


Caltech Chemists Solve Major Piece of Cellular Mystery

Team determines the architecture of a second subcomplex of the nuclear pore complex

Not just anything is allowed to enter the nucleus, the heart of eukaryotic cells where, among other things, genetic information is stored. A double membrane, called the nuclear envelope, serves as a wall, protecting the contents of the nucleus. Any molecules trying to enter or exit the nucleus must do so via a cellular gatekeeper known as the nuclear pore complex (NPC), or pore, that exists within the envelope.

How can the NPC be such an effective gatekeeper—preventing much from entering the nucleus while helping to shuttle certain molecules across the nuclear envelope? Scientists have been trying to figure that out for decades, at least in part because the NPC is targeted by a number of diseases, including some aggressive forms of leukemia and nervous system disorders such as a hereditary form of Lou Gehrig's disease. Now a team led by André Hoelz, assistant professor of biochemistry at Caltech, has solved a crucial piece of the puzzle.

In February of this year, Hoelz and his colleagues published a paper describing the atomic structure of the NPC's coat nucleoporin complex, a subcomplex that forms what they now call the outer rings (see illustration). Building on that work, the team has now solved the architecture of the pore's inner ring, a subcomplex that is central to the NPC's ability to serve as a barrier and transport facilitator. To determine that architecture, which specifies how the ring's proteins interact with each other, the biochemists built up the complex in a test tube and then systematically dissected it to understand the individual interactions between components. They then validated that this is actually how it works in vivo, in a species of fungus.

For more than a decade, other researchers have suggested that the inner ring is highly flexible and expands to allow large macromolecules to pass through. "People have proposed some complicated models to explain how this might happen," says Hoelz. But now he and his colleagues have shown that these models are incorrect and that these dilations simply do not occur.

"Using an interdisciplinary approach, we solved the architecture of this subcomplex and showed that it cannot change shape significantly," says Hoelz. "It is a relatively rigid scaffold that is incorporated into the pore and basically just sits as a decoration, like pom-poms on a bicycle. It cannot dilate."

The new paper appears online ahead of print on August 27 in Science Express. The four co-lead authors on the paper are Caltech postdoctoral scholars Tobias Stuwe, Christopher J. Bley, and Karsten Thierbach, and graduate student Stefan Petrovic.


Crystal Structure of Fungal Channel Nucleoporin Complex
This video features a rotating three-dimensional crystal structure of the fungal channel nucleoporin complex bound to the adaptor nucleoporin Nic96. This interaction is the complex's sole site of attachment to the rest of the inner ring of the NPC. The channel nucleoporin complex borders the central transport channel and fills it with filamentous structures (phenylalanine-glycine repeats) that form a diffusion barrier and provide docking sites for proteins that ferry molecules across the nuclear envelope. Credit: Andre Hoelz/Caltech and Science

Together, the inner and outer rings make up the symmetric core of the NPC, a structure that includes 21 different proteins. The symmetric core is so named because of its radial symmetry (the two remaining subcomplexes of the NPC are specific to either the side that faces the cell's cytoplasm or the side that faces the nucleus and are therefore not symmetric). Having previously solved the structure of the coat nucleoporin complex and located it in the outer rings, the researchers knew that the remaining components that are not membrane anchored must make up the inner ring.

They started solving the architecture by focusing on the channel nucleoporin complex, or channel, which lines the central transport channel and is made up of three proteins, accounting for about half of the inner ring. This complex produces filamentous structures that serve as docking sites for specific proteins that ferry molecules across the nuclear envelope.

The biochemists employed bacteria to make the proteins associated with the inner ring in a test tube and mixed various combinations until they built the entire subcomplex. Once they had reconstituted the inner ring subcomplex, they were able to modify it to investigate how it is held together and which of its components are critical, and to determine how the channel is attached to the rest of the pore.

Hoelz and his team found that the channel is attached at only one site. This means that it cannot stretch significantly because such shape changes require multiple attachment points. Hoelz notes that a new electron microscopy study of the NPC published in 2013 by Martin Beck's group at the European Molecular Biology Laboratory (EMBL) in Heidelberg, Germany, indicated that the central channel is bigger than previously thought and wide enough to accommodate even the largest cargoes known to pass through the pore.

When the researchers introduced mutations that effectively eliminated the channel's single attachment, the complex could no longer be incorporated into the inner ring. After proving this in the test tube, they also showed this to be true in living cells.

"This whole complex is a very complicated machine to assemble. The cool thing here is that nature has found an elegant way to wait until the very end of the assembly of the nuclear pore to incorporate the channel," says Hoelz. "By incorporating the channel, you establish two things at once: you immediately form a barrier and you generate the ability for regulated transport to occur through the pore." Prior to the channel's incorporation, there is simply a hole through which macromolecules can freely pass.

Next, Hoelz and his colleagues used X-ray crystallography to determine the structure of the channel nucleoporin subcomplex bound to the adaptor nucleoporin Nic96, which is its only nuclear pore attachment site. X-ray crystallography involves shining X-rays on a crystallized sample and analyzing the pattern of rays reflected off the atoms in the crystal. Because the NPC is a large and complex molecular machine that also has many moving parts, they used an engineered antibody to essentially "superglue" many copies of the complex into place to form a nicely ordered crystalline sample. Then they analyzed hundreds of samples using Caltech's Molecular Observatory—a facility developed with support from the Gordon and Betty Moore Foundation that includes an automated X-ray beam line at the Stanford Synchrotron Radiation Lightsource that can be controlled remotely from Caltech—and the GM/CA beam line at the Advanced Photon Source at the Argonne National Laboratory. Eventually, they were able to determine the size, shape, and position of all the atoms of the channel nucleoporin subcomplex and its location within the full NPC.

"The crystal structure nailed it," Hoelz says. "There is no way that the channel is changing shape. All of that other work that, for more than 10 years, suggested it was dilating was wrong."

The researchers also solved a number of crystal structures from other parts of the NPC and determined how they interact with components of the inner ring. In doing so they demonstrated that one such interaction is critical for positioning the channel in the center of the inner ring. They found that exact positioning is needed for the proper export from the nucleus of mRNA and components of ribosomes, the cell's protein-making complexes, rendering it critical in the flow of genetic information from DNA to mRNA to protein.

Hoelz adds that now that the architectures of the inner and outer rings of the NPC are known, getting an atomic structure of the entire symmetric core is "a sprint to the summit."

"When I started at Caltech, I thought it might take another 10, 20 years to do this," he says. "In the end, we have really only been working on this for four and a half years, and the thing is basically tackled. I want to emphasize that this kind of work is not doable everywhere. The people who worked on this are truly special, talented, and smart; and they worked day and night on this for years."

Ultimately, Hoelz says he would like to understand how the NPC works in great detail so that he might be able to generate therapies for diseases associated with the dysfunction of the complex. He also dreams of building up an entire pore in the test tube so that he can fully study it and understand what happens as it is modified in various ways. "Just as they did previously when I said that I wanted to solve the atomic structure of the nuclear pore, people will say that I'm crazy for trying to do this," he says. "But if we don't do it, it is likely that nobody else will."

The paper, "Architecture of the fungal nuclear pore inner ring complex," had a number of additional Caltech authors: Sandra Schilbach (now of the Max Planck Institute of Biophysical Chemistry), Daniel J. Mayo, Thibaud Perriches, Emily J. Rundlet, Young E. Jeon, Leslie N. Collins, Ferdinand M. Huber, and Daniel H. Lin. Additional coauthors include Marcin Paduch, Akiko Koide, Vincent Lu, Shohei Koide, and Anthony A. Kossiakoff of the University of Chicago; and Jessica Fischer and Ed Hurt of Heidelberg University.

Writer: Kimm Fesenmaier

After a Half Century, the Exotic Pentaquark Particle is Found

In July, scientists at the Large Hadron Collider (LHC) reported the discovery of the pentaquark, a long-sought particle first predicted to exist in the 1960s as a consequence of the theory of elementary particles and their interactions proposed by Murray Gell-Mann, Caltech's Robert Andrews Millikan Professor of Theoretical Physics, Emeritus.

In work for which he won the Nobel Prize in Physics in 1969, Gell-Mann introduced the concept of the quark—a fundamental building block of matter. Quarks come in six types, known as "flavors": up, down, top, bottom, strange, and charm. As described in his model, groups of quarks combine into composite particles called hadrons. Combining a quark and an antiquark (a quark's antimatter equivalent) creates a type of hadron called a meson, while baryons are hadrons composed of three quarks. Protons, for example, have two up quarks and one down quark, while neutrons have one up and two down quarks. Gell-Mann's scheme also allowed for more exotic forms of composite particles, including tetraquarks, made of four quarks, and the pentaquark, consisting of four quarks and an antiquark.

The pentaquark was detected at the LHC—the most powerful particle accelerator on Earth—by scientists working on the Large Hadron Collider beauty experiment, or LHCb. The LHC accelerates protons around a ring almost five miles wide to nearly the speed of light, producing two proton beams that careen toward each other. A small fraction of the protons collide, creating other particles in the process. During investigations of the behavior of one such particle, an unstable three-quark object known as the bottom lambda baryon that decays quickly once formed, LHCb researchers observed unusually heavy objects, each with about 4.5 times the mass of a proton. After further analysis, the researchers concluded that the objects were pentaquarks composed of two up quarks, one down quark, one charm quark, and one anticharm quark. A paper describing the discovery has been published in the journal Physical Review Letters.

It is thought that pentaquarks and other exotic particles may form naturally in violent environments such as exploding stars and would have been created during the Big Bang. A better understanding of these complex arrangements of quarks could offer insight into the forces that hold together all matter as well as the earliest moments of the universe.

"This is part of a long process of discovery of particle states," said Gell-Mann in a statement released by the Santa Fe Institute, where he currently is a Distinguished Fellow. "[In the future] they may find more and more of them, made of quarks and antiquarks and various combinations."


Caltech Announces Discovery in Fundamental Physics

When the transistor was invented in 1947 at Bell Labs, few could have foreseen the future impact of the device. This fundamental development in science and engineering was critical to the invention of handheld radios, led to modern computing, and enabled technologies such as the smartphone. This is one of the values of basic research.

In a similar fashion, a branch of fundamental physics research, the study of so-called correlated electrons, focuses on interactions between the electrons in metals.

The key to understanding these interactions and the unique properties they produce—information that could lead to the development of novel materials and technologies—is to experimentally verify their presence and physically probe the interactions at microscopic scales. To this end, Caltech's Thomas F. Rosenbaum and colleagues at the University of Chicago and the Argonne National Laboratory recently used a synchrotron X-ray source to investigate the existence of instabilities in the arrangement of the electrons in metals as a function of both temperature and pressure, and to pinpoint, for the first time, how those instabilities arise. Rosenbaum, professor of physics and holder of the Sonja and William Davidow Presidential Chair, is the corresponding author on the paper that was published on July 27, 2015, in the journal Nature Physics.

"We spent over 10 years developing the instrumentation to perform these studies," says Yejun Feng of Argonne National Laboratory, a coauthor of the paper. "We now have a very unique capability that's due to the long-term relationship between Dr. Rosenbaum and the facilities at the Argonne National Laboratory."

Within atoms, electrons are organized into orbital shells and subshells. Although they are often depicted as physical entities, orbitals actually represent probability distributions—regions of space where electrons have a certain likelihood of being found in a particular element at a particular energy. The characteristic electron configuration of a given element explains that element's peculiar properties.

The work in correlated electrons looks at a subset of electrons. Metals, for example, have an unfilled outermost orbital, and their electrons are free to move from atom to atom. Thus, metals are good electrical conductors. When metal atoms are packed tightly into lattices (or crystals), these electrons mingle together into a communal "sea" of electrons. The metallic element mercury, for instance, is liquid at room temperature in part because of its electron configuration. And at about 4 degrees above absolute zero (roughly -452 degrees Fahrenheit), mercury's communal electrons flow with no resistance to electric current at all, a state known as superconductivity.

Mercury's superconductivity and similar phenomena are due to the existence of many pairs of correlated electrons. In superconducting states, correlated electrons pair to form an elastic, collective state through an excitation in the crystal lattice known as a phonon (specifically, a periodic, collective excitation of the atoms). The electrons are then able to move cooperatively in the elastic state through a material without energy loss.

Electrons in crystals can interact in many ways with the periodic structure of the underlying atoms. Sometimes the electrons modulate themselves periodically in space. The question then arises as to whether this "charge order" derives from the interactions of the electrons with the atoms, a theory first proposed more than 60 years ago, or solely from interactions among the sea of electrons themselves. This question was the focus of the Nature Physics study. Electrons also behave as microscopic magnets and can demonstrate "spin order," which raises similar questions about the origin of the local magnetism.
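
Schematically, charge order means the electron density acquires a small periodic modulation on top of its uniform value, with an ordering wavevector Q set by the instability. In LaTeX notation (a textbook form, not a result specific to this study):

    \rho(x) = \rho_0 + \rho_1 \cos(Qx + \phi)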

To see where the charge order arises, the researchers turned to the Advanced Photon Source at Argonne. The Photon Source is a synchrotron (a relative of the cyclotron, commonly known as an "atom-smasher"). These machines generate intense X-ray beams that can be used for X-ray diffraction studies. In X-ray diffraction, the patterns of scattered X-rays are used to provide information about repeating structures with wavelengths at the atomic scale.
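
The geometry behind such measurements is Bragg's law, nλ = 2d sin θ: a repeating structure with spacing d scatters X-rays constructively only at particular angles, so the scattering angle reveals the repeat distance. A quick Python illustration, using typical round numbers rather than the parameters of this experiment:

    import math

    wavelength = 1.0e-10  # ASSUMED X-ray wavelength, m (1 angstrom, typical of synchrotrons)
    d = 2.0e-10           # ASSUMED repeat spacing in the crystal, m
    n = 1                 # diffraction order

    # Bragg's law: n * lambda = 2 * d * sin(theta)
    theta = math.degrees(math.asin(n * wavelength / (2 * d)))
    print(f"first-order Bragg angle: {theta:.1f} degrees")  # 14.5 degrees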

In the experiment, the researchers used the X-ray beams to investigate charge-order effects in two metals, chromium and niobium diselenide, at pressures ranging from ambient up to 100 kilobar (100,000 times normal atmospheric pressure) and at temperatures ranging from 3 to 300 K (-454 to 80 degrees Fahrenheit). Niobium diselenide was selected because it has a high degree of charge order, while chromium, in contrast, has a high degree of spin order.

The researchers found that there is a simple correlation between pressure and how the communal electrons organize themselves within the crystal. Materials with completely different types of crystal structures all behave similarly. "These sorts of charge- and spin-order phenomena have been known for a long time, but their underlying mechanisms have not been understood until now," says Rosenbaum.

Paper coauthors Jasper van Wezel, formerly of Argonne National Laboratory and presently of the Institute for Theoretical Physics at the University of Amsterdam, and Peter Littlewood, a professor at the University of Chicago and the director of Argonne National Laboratory, helped to provide a new theoretical perspective to explain the experimental results.

Rosenbaum and colleagues point out that there are no immediate practical applications of the results. However, he notes, "This work should have applicability to new materials as well as to the kind of interactions that are useful to create magnetic states that are often the antecedents of superconductors."

"The attraction of this sort of research is to ask fundamental questions that are ubiquitous in nature," says Rosenbaum. "I think it is very much a Caltech tradition to try to develop new tools that can interrogate materials in ways that illuminate the fundamental aspects of the problem." He adds, "There is real power in being able to have general microscopic insights to develop the most powerful breakthroughs."

The coauthors on the paper, titled "Itinerant density wave instabilities at classical and quantum critical points," are Yejun Feng and Peter Littlewood of the Argonne National Laboratory, Jasper van Wezel of the University of Amsterdam, Daniel M. Silevitch and Jiyang Wang of the University of Chicago, and Felix Flicker of the University of Bristol. Work performed at the Argonne National Laboratory was supported by the U.S. Department of Energy. Work performed at the University of Chicago was funded by the National Science Foundation. Additional support was received from the Netherlands Organization for Scientific Research.


Caltech-Led Team Looks in Detail at the April 2015 Earthquake in Nepal

For more than 20 years, Caltech geologist Jean-Philippe Avouac has collaborated with the Department of Mines and Geology of Nepal to study the Himalayas—the most active, above-water mountain range on Earth—to learn more about the processes that build mountains and trigger earthquakes. Over that period, he and his colleagues have installed a network of GPS stations in Nepal that allows them to monitor the way Earth's crust moves during and in between earthquakes. So when he heard on April 25 that a magnitude 7.8 earthquake had struck near Gorkha, Nepal, not far from Kathmandu, he thought he knew what to expect—utter devastation throughout Kathmandu and a death toll in the hundreds of thousands.

"At first when I saw the news trickling in from Kathmandu, I thought there was a problem of communication, that we weren't hearing the full extent of the damage," says Avouac, Caltech's Earle C. Anthony Professor of Geology. "As it turns out, there was little damage to the regular dwellings, and thankfully, as a result, there were far fewer deaths than I originally anticipated."

Using data from the GPS stations, an accelerometer that measures ground motion in Kathmandu, data from seismological stations around the world, and radar images collected by orbiting satellites, an international team of scientists led by Caltech has pieced together the first complete account of what physically happened during the Gorkha earthquake—a picture that explains how the large earthquake wound up leaving the majority of low-story buildings unscathed while devastating some treasured taller structures.

The findings are described in two papers that now appear online. The first, in the journal Nature Geoscience, is based on an analysis of seismological records collected more than 1,000 kilometers from the epicenter and places the event in the context of what scientists knew of the seismic setting near Gorkha before the earthquake. The second paper, appearing in Science Express, goes into finer detail about the rupture process during the April 25 earthquake and how it shook the ground in Kathmandu.


Build Up and Release of Strain on Himalaya Megathrust

In the first study, the researchers show that the earthquake occurred on the Main Himalayan Thrust (MHT), the main megathrust fault along which northern India is pushing beneath Eurasia at a rate of about two centimeters per year, driving the Himalayas upward. Based on GPS measurements, scientists know that a large portion of this fault is "locked." Large earthquakes typically release stress on such locked faults—as the lower tectonic plate (here, the Indian plate) pulls the upper plate (here, the Eurasian plate) downward, strain builds in these locked sections until the upper plate breaks free, releasing strain and producing an earthquake. There are areas along the fault in western Nepal that are known to be locked and have not experienced a major earthquake since a big one (larger than magnitude 8.5) in 1505. But the Gorkha earthquake ruptured only a small fraction of the locked zone, so there is still the potential for the locked portion to produce a large earthquake.
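
The arithmetic behind that concern is straightforward: a locked patch accumulates a slip deficit at the plate-convergence rate until it ruptures. A back-of-the-envelope Python sketch, assuming (purely for illustration) that the patch has been fully locked since 1505:

    convergence_rate = 0.02  # m/yr, from the article (~2 cm per year)
    last_rupture = 1505      # year of the last great earthquake in western Nepal
    now = 2015

    # slip deficit accumulated on a fully locked patch (an idealizing assumption)
    deficit = convergence_rate * (now - last_rupture)
    print(f"accumulated slip deficit: ~{deficit:.0f} m")  # ~10 m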

"The Gorkha earthquake didn't do the job of transferring deformation all the way to the front of the Himalaya," says Avouac. "So the Himalaya could certainly generate larger earthquakes in the future, but we have no idea when."

The epicenter of the April 25 event was located in the Gorkha District of Nepal, 75 kilometers west-northwest of Kathmandu, and the rupture propagated eastward at a rate of about 2.8 kilometers per second, causing slip in the north-south direction—a progression that the researchers describe as "unzipping" a section of the locked fault.

"With the geological context in Nepal, this is a place where we expect big earthquakes. We also knew, based on GPS measurements of the way the plates have moved over the last two decades, how 'stuck' this particular fault was, so this earthquake was not a surprise," says Jean Paul Ampuero, assistant professor of seismology at Caltech and coauthor on the Nature Geoscience paper. "But with every earthquake there are always surprises."


Propagation of April 2015 Mw 7.8 Gorkha Earthquake

In this case, one of the surprises was that the quake did not rupture all the way to the surface. Records of past earthquakes on the same fault—including a powerful one (possibly as strong as magnitude 8.4) that shook Kathmandu in 1934—indicate that ruptures have previously reached the surface. But Avouac, Ampuero, and their colleagues used satellite Synthetic Aperture Radar data and a technique called back projection, which takes advantage of the dense arrays of seismic stations in the United States, Europe, and Australia, to track the progression of the earthquake, and they found that it was quite contained at depth. The high-frequency waves were largely produced along the lower edge of the rupture, at a depth of about 15 kilometers.

"That was good news for Kathmandu," says Ampuero. "If the earthquake had broken all the way to the surface, it could have been much, much worse."

The researchers note, however, that the Gorkha earthquake did increase the stress on the adjacent portion of the fault that remains locked, closer to Kathmandu. It is unclear whether this additional stress will eventually trigger another earthquake or if that portion of the fault will "creep," a process that allows the two plates to move slowly past one another, dissipating stress. The researchers are building computer models and monitoring post-earthquake deformation of the crust to try to determine which scenario is more likely.

Another surprise from the earthquake, one that explains why many of the homes and other buildings in Kathmandu were spared, is described in the Science Express paper. Avouac and his colleagues found that for such a large-magnitude earthquake, high-frequency shaking in Kathmandu was actually relatively mild. And it is high-frequency waves, with short periods of vibration of less than one second, that tend to affect low-story buildings. The Nature Geoscience paper showed that the high-frequency waves that the quake produced came from the deeper edge of the rupture, on the northern end away from Kathmandu.

The GPS records described in the Science Express paper show that within the zone that experienced the greatest amount of slip during the earthquake—a region south of the sources of high-frequency waves and closer to Kathmandu—the onset of slip on the fault was actually very smooth. It took nearly two seconds for the slip rate to reach its maximum value of one meter per second. In general, the more abrupt the onset of slip during an earthquake, the more energetic the radiated high-frequency seismic waves. So the relatively gradual onset of slip in the Gorkha event explains why this patch, which experienced a large amount of slip, did not generate many high-frequency waves.
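
The link between onset abruptness and high-frequency radiation can be illustrated numerically: a slip-rate pulse that switches on instantaneously has far more high-frequency content than a smooth pulse reaching the same 1 m/s peak. The source-time functions below are invented toy shapes, not the paper's source model.

    import numpy as np

    dt = 0.01                     # time step, s
    t = np.arange(0.0, 20.0, dt)  # s

    # two slip-rate pulses with the same peak, differing only in how they start
    smooth = np.exp(-((t - 6.0) / 1.0) ** 2)                           # gradual onset
    abrupt = np.where(t >= 6.0, np.exp(-((t - 6.0) / 1.0) ** 2), 0.0)  # jump onset

    freqs = np.fft.rfftfreq(t.size, dt)
    i = np.argmin(np.abs(freqs - 1.0))  # inspect ~1 Hz, a "high" frequency for this toy
    for name, v in (("smooth onset", smooth), ("abrupt onset", abrupt)):
        amp = np.abs(np.fft.rfft(v)) * dt  # approximate continuous-spectrum amplitude
        print(f"{name}: ~{amp[i]:.1e} at 1 Hz")
    # the abrupt pulse carries orders of magnitude more energy near 1 Hz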

"It would be good news if the smooth onset of slip, and hence the limited induced shaking, were a systematic property of the Himalayan megathrust fault, or of megathrust faults in general." says Avouac. "Based on observations from this and other megathrust earthquakes, this is a possibility."

In contrast to what they saw with high-frequency waves, the researchers found that the earthquake produced an unexpectedly large amount of low-frequency waves with longer periods of about five seconds. This longer-period shaking was responsible for the collapse of taller structures in Kathmandu, such as the Dharahara Tower, a 60-meter-high tower that survived larger earthquakes in 1833 and 1934 but collapsed completely during the Gorkha quake.

To understand this, consider plucking the strings of a guitar. Each string resonates at a certain natural frequency, or pitch, depending on the length, composition, and tension of the string. Likewise, buildings and other structures have a natural pitch or frequency of shaking at which they resonate; in general, the taller the building, the longer the period at which it resonates. If a strong earthquake causes the ground to shake with a frequency that matches a building's pitch, the shaking will be amplified within the building, and the structure will likely collapse.
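
A common engineering rule of thumb, used here purely as an approximation (it is not a value from the study), puts a building's fundamental period near 0.1 seconds per story, which makes the height dependence easy to see:

    def natural_period(stories):
        """Rule-of-thumb fundamental period: roughly 0.1 s per story."""
        return 0.1 * stories

    basin_period = 5.0  # seconds, the Kathmandu basin's resonant period (from the article)
    for stories in (3, 10, 50):
        T = natural_period(stories)
        flag = "<- near resonance with the basin" if abs(T - basin_period) < 1.0 else ""
        print(f"{stories:2d} stories: T ~ {T:.1f} s {flag}")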

Turning to the GPS records from two of Avouac's stations in the Kathmandu Valley, the researchers found that the effect of the low-frequency waves was amplified by the geological context of the Kathmandu basin. The basin is an ancient lakebed that is now filled with relatively soft sediment. For about 40 seconds after the earthquake began, seismic waves were trapped within the basin and continued to reverberate, ringing like a bell with a period of five seconds.

"That's just the right frequency to damage tall buildings like the Dharahara Tower because it's close to their natural period," Avouac explains.

In follow-up work, Domniki Asimaki, professor of mechanical and civil engineering at Caltech, is examining the details of the shaking experienced throughout the basin. On a recent trip to Kathmandu, she documented very little damage to low-story buildings throughout much of the city but identified a pattern of intense shaking experienced at the edges of the basin, on hilltops or in the foothills where sediment meets the mountains. This was largely due to the resonance of seismic waves within the basin.

Asimaki notes that Los Angeles is also built atop sedimentary deposits and is surrounded by hills and mountain ranges that would also be prone to this type of increased shaking intensity during a major earthquake.

"In fact," she says, "the buildings in downtown Los Angeles are much taller than those in Kathmandu and therefore resonate with a much lower frequency. So if the same shaking had happened in L.A., a lot of the really tall buildings would have been challenged."

That points to one of the reasons it is important to understand how the land responded to the Gorkha earthquake, Avouac says. "Such studies of the site effects in Nepal provide an important opportunity to validate the codes and methods we use to predict the kind of shaking and damage that would be expected as a result of earthquakes elsewhere, such as in the Los Angeles Basin."

Additional authors on the Nature Geoscience paper, "Lower edge of locked Main Himalayan Thrust unzipped by the 2015 Gorkha earthquake," are Lingsen Meng (PhD '12) of UC Los Angeles, Shengji Wei of Nanyang Technological University in Singapore, and Teng Wang of Southern Methodist University. The lead author on the Science paper, "Slip pulse and resonance of Kathmandu basin during the 2015 Mw 7.8 Gorkha earthquake, Nepal imaged with geodesy" is John Galetzka, formerly an associate staff geodesist at Caltech and now a project manager at UNAVCO in Boulder, Colorado. Caltech research geodesist Joachim Genrich is also a coauthor, as are Susan Owen and Angelyn Moore of JPL. For a full list of authors, please see the paper.

The Nepal Geodetic Array was funded by Caltech, the Gordon and Betty Moore Foundation, and the National Science Foundation. Additional funding for the Science study came from the Department for International Development (UK), the Royal Society (UK), the United Nations Development Programme, and the Nepal Academy for Science and Technology, as well as NASA.

Writer: Kimm Fesenmaier

Caltech Astronomers Unveil a Distant Protogalaxy Connected to the Cosmic Web

A team of astronomers led by Caltech has discovered a giant swirling disk of gas 10 billion light-years away—a galaxy-in-the-making that is actively being fed cool primordial gas tracing back to the Big Bang. Using the Caltech-designed and -built Cosmic Web Imager (CWI) at Palomar Observatory, the researchers were able to image the protogalaxy and found that it is connected to a filament of the intergalactic medium, the cosmic web made of diffuse gas that crisscrosses between galaxies and extends throughout the universe.

The finding provides the strongest observational support yet for what is known as the cold-flow model of galaxy formation. That model holds that in the early universe, relatively cool gas funneled down from the cosmic web directly into galaxies, fueling rapid star formation.

A paper describing the finding and how CWI made it possible currently appears online and will be published in the August 13 print issue of the journal Nature.

"This is the first smoking-gun evidence for how galaxies form," says Christopher Martin, professor of physics at Caltech, principal investigator on CWI, and lead author of the new paper. "Even as simulations and theoretical work have increasingly stressed the importance of cold flows, observational evidence of their role in galaxy formation has been lacking."


Caltech Astronomers Discuss Findings on Galaxy Formation

The protogalactic disk the team has identified is about 400,000 light-years across—about four times larger in diameter than our Milky Way. It is situated in a system dominated by two quasars, the closest of which, UM287, is positioned so that its emission is beamed like a flashlight, helping to illuminate the cosmic web filament feeding gas into the spiraling protogalaxy.

Last year, Sebastiano Cantalupo, then of UC Santa Cruz (now of ETH Zurich) and his colleagues published a paper, also in Nature, announcing the discovery of what they thought was a large filament next to UM287. The feature they observed was brighter than it should have been if indeed it was only a filament. It seemed that there must be something else there.

In September 2014, Martin and his colleagues, including Cantalupo, decided to follow up with CWI observations of the system. As an integral field spectrograph, CWI allowed the team to collect images around UM287 at hundreds of different wavelengths simultaneously, revealing details of the system's composition, mass distribution, and velocity.

Martin and his colleagues focused on a range of wavelengths around an emission line in the ultraviolet known as the Lyman-alpha line. That line, a fingerprint of atomic hydrogen gas, is commonly used by astronomers as a tracer of primordial matter.

The researchers collected a series of spectral images that combined to form a multiwavelength map of a patch of sky around the two quasars. This data delineated areas where gas is emitting in the Lyman-alpha line, and indicated the velocities with which this gas is moving with respect to the center of the system.

"The images plainly show that there is a rotating disk—you can see that one side is moving closer to us and the other is moving away. And you can also see that there's a filament that extends beyond the disk," Martin says. Their measurements indicate that the disk is rotating at a rate of about 400 kilometers per second, somewhat faster than the Milky Way's own rate of rotation.

"The filament has a more or less constant velocity. It is basically funneling gas into the disk at a fixed rate," says Matt Matuszewski (PhD '12), an instrument scientist in Martin's group and coauthor on the paper. "Once the gas merges with the disk inside the dark-matter halo, it is pulled around by the rotating gas and dark matter in the halo." Dark matter is a form of matter that we cannot see that is believed to make up about 27 percent of the universe. Galaxies are thought to form within extended halos of dark matter.

The new observations and measurements provide the first direct confirmation of the so-called cold-flow model of galaxy formation.

Hotly debated since 2003, that model stands in contrast to the standard, older view of galaxy formation. The standard model said that when dark-matter halos collapse, they pull a great deal of normal matter in the form of gas along with them, heating it to extremely high temperatures. The gas then cools very slowly, providing a steady but slow supply of cold gas that can form stars in growing galaxies.

That model seemed fine until 1996, when Chuck Steidel, Caltech's Lee A. DuBridge Professor of Astronomy, discovered a distant population of galaxies producing stars at a very high rate only two billion years after the Big Bang. The standard model cannot provide the prodigious fuel supply for these rapidly forming galaxies.

The cold-flow model provided a potential solution. Theorists suggested that relatively cool gas, delivered by filaments of the cosmic web, streams directly into protogalaxies. There, it can quickly condense to form stars. Simulations show that as the gas falls in, it contains tremendous amounts of angular momentum, or spin, and forms extended rotating disks.

"That's a direct prediction of the cold-flow model, and this is exactly what we see—an extended disk with lots of angular momentum that we can measure," says Martin.

Phil Hopkins, assistant professor of theoretical astrophysics at Caltech, who was not involved in the study, finds the new discovery "very compelling."

"As a proof that a protogalaxy connected to the cosmic web exists and that we can detect it, this is really exciting," he says. "Of course, now you want to know a million things about what the gas falling into galaxies is actually doing, so I'm sure there is going to be more follow up."

Martin notes that the team has already identified two additional disks that appear to be receiving gas directly from filaments of the cosmic web in the same way.

Additional Caltech authors on the paper, "A giant protogalactic disk linked to the cosmic web," are principal research scientist Patrick Morrissey, research scientist James D. Neill, and instrument scientist Anna Moore from the Caltech Optical Observatories. J. Xavier Prochaska of UC Santa Cruz and former Caltech graduate student Daphne Chang, who is deceased, are also coauthors. The Cosmic Web Imager was funded by grants from the National Science Foundation and Caltech.

Writer: Kimm Fesenmaier

"Failed Stars" Host Powerful Auroral Displays

Caltech astronomers say brown dwarfs behave more like planets than stars

Brown dwarfs are relatively cool, dim objects that are difficult to detect and hard to classify. They are too massive to be planets, yet possess some planetlike characteristics; they are too small to sustain hydrogen fusion reactions at their cores, a defining characteristic of stars, yet they have starlike attributes.

By observing a brown dwarf 20 light-years away using both radio and optical telescopes, a team led by Gregg Hallinan, assistant professor of astronomy at Caltech, has found another feature that makes these so-called failed stars more like supersized planets—they host powerful auroras near their magnetic poles.

The findings appear in the July 30 issue of the journal Nature.

"We're finding that brown dwarfs are not like small stars in terms of their magnetic activity; they're like giant planets with hugely powerful auroras," says Hallinan. "If you were able to stand on the surface of the brown dwarf we observed—something you could never do because of its extremely hot temperatures and crushing surface gravity—you would sometimes be treated to a fantastic light show courtesy of auroras hundreds of thousands of times more powerful than any detected in our solar system."

In the early 2000s, astronomers began finding that brown dwarfs emit radio waves. At first, everyone assumed that the brown dwarfs were creating the radio waves in basically the same way that stars do—through the action of an extremely hot atmosphere, or corona, heated by magnetic activity near the object's surface. But brown dwarfs do not generate large flares and charged-particle emissions in the way that our sun and other stars do, so the radio emissions were surprising.

While in graduate school, in 2006, Hallinan discovered that brown dwarfs can actually pulse at radio frequencies. "We see a similar pulsing phenomenon from planets in our solar system," says Hallinan, "and that radio emission is actually due to auroras." Since then he has wondered if the radio emissions seen on brown dwarfs might be caused by auroras.

Auroral displays result when charged particles, carried by the stellar wind, for example, manage to enter a planet's magnetosphere, the region where such charged particles are influenced by the planet's magnetic field. Once within the magnetosphere, those particles are accelerated along the planet's magnetic field lines to the planet's poles, where they collide with gas atoms in the atmosphere and produce the bright emissions associated with auroras.

Following his hunch, Hallinan and his colleagues conducted an extensive observation campaign of a brown dwarf called LSRJ 1835+3259, using the National Radio Astronomy Observatory's Very Large Array (VLA), the most powerful radio telescope in the world, as well as optical instruments that included Palomar's Hale Telescope and the W. M. Keck Observatory's telescopes.


[Video: The brown dwarf LSRJ 1835+3259, as seen with the National Radio Astronomy Observatory's Very Large Array, pulsing as a result of the process that creates powerful auroras. Credit: Stephen Bourke/Caltech]

Using the VLA, they detected a bright pulse of radio waves that reappeared with each rotation of the brown dwarf. The object rotates once every 2.84 hours, so the researchers were able to watch nearly three full rotations over the course of a single night.

Next, the astronomers used the Hale Telescope to observe that the brown dwarf varied optically with the same period as the radio pulses. Focusing on one of the spectral lines associated with excited hydrogen—the H-alpha emission line—they found that the object's brightness varied periodically.

Finally, Hallinan and his colleagues used the Keck telescopes to measure precisely the brightness of the brown dwarf over time—no simple feat given that these objects are many thousands of times fainter than our own sun. Hallinan and his team were able to establish that this hydrogen emission is a signature of auroras near the surface of the brown dwarf.

"As the electrons spiral down toward the atmosphere, they produce radio emissions, and then when they hit the atmosphere, they excite hydrogen in a process that occurs at Earth and other planets, albeit tens of thousands of times more intense," explains Hallinan. "We now know that this kind of auroral behavior is extending all the way from planets up to brown dwarfs."

In the case of brown dwarfs, charged particles cannot be driven into the magnetosphere by a stellar wind, because an isolated brown dwarf has no nearby star to supply one. Hallinan says that some other source, such as an orbiting planet moving through the brown dwarf's magnetosphere, may be generating a current and producing the auroras. "But until we map the aurora accurately, we won't be able to say where it's coming from," he says.

He notes that brown dwarfs offer a convenient stepping stone to studying exoplanets, planets orbiting stars other than our own sun. "For the coolest brown dwarfs we've discovered, their atmosphere is pretty similar to what we would expect for many exoplanets, and you can actually look at a brown dwarf and study its atmosphere without having a star nearby that's a factor of a million times brighter obscuring your observations," says Hallinan.

Just as he has used measurements of radio waves to determine the strength of magnetic fields around brown dwarfs, he hopes to use the low-frequency radio observations of the newly built Owens Valley Long Wavelength Array to measure the magnetic fields of exoplanets. "That could be particularly interesting because whether or not a planet has a magnetic field may be an important factor in habitability," he says. "I'm trying to build a picture of magnetic field strength and topology and the role that magnetic fields play as we go from stars to brown dwarfs and eventually right down into the planetary regime."
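For a sense of how radio measurements translate into field strengths: auroral radio emission of this kind is generally attributed to the electron-cyclotron maser mechanism, in which the emission frequency tracks the local magnetic field. A minimal sketch, using an illustrative observing frequency rather than a value from the paper:

```python
import math

# Electron-cyclotron maser emission occurs near the electron cyclotron
# frequency, nu = e * B / (2 * pi * m_e), so the highest radio frequency
# observed bounds the field strength. The 8.4 GHz here is illustrative.
E_CHARGE = 1.602176634e-19       # elementary charge, C
M_ELECTRON = 9.1093837015e-31    # electron mass, kg

def field_from_frequency(freq_hz: float) -> float:
    """Magnetic field (tesla) whose electron cyclotron frequency is freq_hz."""
    return 2 * math.pi * M_ELECTRON * freq_hz / E_CHARGE

b = field_from_frequency(8.4e9)
print(f"{b:.2f} T (~{b * 1e4:.0f} gauss)")   # roughly 0.30 T, i.e. ~3 kilogauss
```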

The work, "Magnetospherically driven optical and radio aurorae at the end of the main sequence," was supported by funding from the National Science Foundation. Additional authors on the paper include Caltech senior postdoctoral scholar Stephen Bourke, Caltech graduate students Sebastian Pineda and Melodie Kao, Leon Harding of JPL, Stuart Littlefair of the University of Sheffield, Garret Cotter of the University of Oxford, Ray Butler of National University of Ireland, Galway, Aaron Golden of Yeshiva University, Gibor Basri of UC Berkeley, Gerry Doyle of Armagh Observatory, Svetlana Berdyugina of the Kiepenheuer Institute for Solar Physics, Alexey Kuznetsov of the Institute of Solar-Terrestrial Physics in Irkutsk, Russia, Michael Rupen of the National Radio Astronomy Observatory, and Antoaneta Antonova of Sofia University.

Writer: Kimm Fesenmaier

Mosquitoes Use Smell to See Their Hosts

On summer evenings, we try our best to avoid mosquito bites by dousing our skin with bug repellents and lighting citronella candles. These efforts may keep the mosquitoes at bay for a while, but no solution is perfect because the pests have evolved to use a triple threat of visual, olfactory, and thermal cues to home in on their human targets, a new Caltech study suggests.

The study, published by researchers in the laboratory of Michael Dickinson, the Esther M. and Abe M. Zarem Professor of Bioengineering, appears in the July 17 online version of the journal Current Biology.

When an adult female mosquito needs a blood meal to nourish her developing eggs, she searches for a host—often a human. Many insects, mosquitoes included, are attracted by the odor of the carbon dioxide (CO2) gas that humans and other animals naturally exhale. However, mosquitoes can also pick up other cues that signal a human is nearby. They use their vision to spot a host and thermal sensory information to detect body heat.

But how do the mosquitoes combine this information to map out the path to their next meal?

To find out how and when the mosquitoes use each type of sensory information, the researchers released hungry, mated female mosquitoes into a wind tunnel in which different sensory cues could be independently controlled. In one set of experiments, a high-concentration CO2 plume was injected into the tunnel, mimicking the signal created by the breath of a human. In control experiments, the researchers introduced a plume consisting of background air with a low concentration of CO2. For each experiment, the researchers released 20 mosquitoes into the wind tunnel and used video cameras and 3-D tracking software to follow their paths.

When a concentrated CO2 plume was present, the mosquitoes followed it within the tunnel as expected, whereas they showed no interest in a control plume consisting of background air.

"In a previous experiment with fruit flies, we found that exposure to an attractive odor led the animals to be more attracted to visual features," says Floris van Breugel, a postdoctoral scholar in Dickinson's lab and first author of the study. "This was a new finding for flies, and we suspected that mosquitoes would exhibit a similar behavior. That is, we predicted that when the mosquitoes were exposed to CO2, which is an indicator of a nearby host, they would also spend a lot of time hovering near high-contrast objects, such as a black object on a neutral background."

To test this hypothesis, van Breugel and his colleagues did the same CO2 plume experiment, but this time they provided a dark object on the floor of the wind tunnel. They found that in the presence of the carbon dioxide plumes, the mosquitoes were attracted to the dark high-contrast object. In the wind tunnel with no CO2 plume, the insects ignored the dark object entirely.

While it was no surprise to see the mosquitoes tracking a CO2 plume, "the new part that we found is that the CO2 plume increases the likelihood that they'll fly toward an object. This is particularly interesting because there's no CO2 down near that object—it's about 10 centimeters away," van Breugel says. "That means that they smell the CO2, then they leave the plume, and several seconds later they continue flying toward this little object. So you could think of it as a type of memory or lasting effect."

Next, the researchers wanted to see how a mosquito factors thermal information into its flight path. This is difficult to test, van Breugel says. "Obviously, we know that if you have an object in the presence of a CO2 plume—warm or cold—they will fly toward it because they see it," he says. "So we had to find a way to separate the visual attraction from the thermal attraction."

To do this, the researchers constructed two glass objects that were coated with a clear chemical substance that made it possible to heat them to any desired temperature. They heated one object to 37 degrees Celsius (approximately human body temperature) and allowed the other to remain at room temperature, and then placed them on the floor of the wind tunnel, with and without CO2 plumes, and observed mosquito behavior. They found that the mosquitoes showed a preference for the warm object. But contrary to the mosquitoes' visual attraction to objects, the preference for warmth was not dependent on the presence of CO2.

"These experiments show that the attraction to a visual feature and the attraction to a warm object are separate. They are independent, and they don't have to happen in order, but they do often happen in this particular order because of the spatial arrangement of the stimuli: a mosquito can see a visual feature from much further away, so that happens first. Only when the mosquito gets closer does it detect an object's thermal signature," van Breugel says.

Information gathered from all of these experiments enabled the researchers to create a model of how the mosquito finds its host over different distances. They hypothesize that from 10 to 50 meters away, a mosquito smells a host's CO2 plume. As it flies closer—to within 5 to 15 meters—it begins to see the host. Then, guided by visual cues that draw it even closer, the mosquito can sense the host's body heat. This occurs at a distance of less than a meter.
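As a rough illustration only (the function and its gating logic are assumptions pieced together from the description above, not the authors' model code), the staged cascade might be sketched like this:

```python
# Illustrative sketch of the distance-staged host-seeking model. The
# ranges are the article's approximate figures; gating vision on prior
# CO2 detection reflects the "lasting effect" described earlier.
def active_cues(distance_m: float, co2_detected: bool) -> list[str]:
    """Which cues plausibly guide a mosquito at a given distance (meters)."""
    cues = []
    if 10 <= distance_m <= 50:
        cues.append("CO2 plume (olfaction)")
    if distance_m <= 15 and co2_detected:
        cues.append("visual contrast")    # attractive only after smelling CO2
    if distance_m < 1:
        cues.append("body heat")          # independent of CO2
    return cues

for d in (30.0, 10.0, 0.5):
    print(f"{d:5.1f} m -> {active_cues(d, co2_detected=True)}")
```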

"Understanding how brains combine information from different senses to make appropriate decisions is one of the central challenges in neuroscience," says Dickinson, the principal investigator of the study. "Our experiments suggest that female mosquitoes do this in a rather elegant way when searching for food. They only pay attention to visual features after they detect an odor that indicates the presence of a host nearby. This helps ensure that they don't waste their time investigating false targets like rocks and vegetation. Our next challenge is to uncover the circuits in the brain that allow an odor to so profoundly change the way they respond to a visual image."

The work provides researchers with exciting new information about insect behavior and may even help companies design better mosquito traps in the future. But it also paints a bleak picture for those hoping to avoid mosquito bites.

"Even if it were possible to hold one's breath indefinitely," the authors note toward the end of the paper, "another human breathing nearby, or several meters upwind, would create a CO2 plume that could lead mosquitoes close enough to you that they may lock on to your visual signature. The strongest defense is therefore to become invisible, or at least visually camouflaged. Even in this case, however, mosquitoes could still locate you by tracking the heat signature of your body . . . The independent and iterative nature of the sensory-motor reflexes renders mosquitoes' host seeking strategy annoyingly robust."

These results were published in a paper titled "Mosquitoes use vision to associate odor plumes with thermal targets." In addition to Dickinson and van Breugel, the other authors are Jeff Riffell and Adrienne Fairhall from the University of Washington. The work was funded by a grant from the National Institutes of Health.

