Kip Thorne Discusses First Discovery of Thorne-Żytkow Object

In 1975, Kip Thorne (BS '62), now the Richard P. Feynman Professor of Theoretical Physics, Emeritus, and then-Caltech postdoctoral fellow Anna Żytkow sought the answer to an intriguing question: Could a star have a neutron star as its core—that is, could a hot, dense star composed almost entirely of neutrons sit at the center of another, more traditional star? Thorne and Żytkow predicted that if a neutron star were at the core of another star, the host star would be a red supergiant—an extremely large, luminous star—and that such red supergiants would have peculiar abundances of elements. Researchers who followed this line of inquiry referred to this hypothetical type of star as a Thorne-Żytkow object (TŻO).

Nearly 40 years later, astronomers believe they may have found such an object: a star labeled HV 2112 and located in the Small Magellanic Cloud, a dwarf galaxy that is a near neighbor of the Milky Way and visible to the naked eye. HV 2112 was identified as a TŻO candidate with the 6.5-meter Magellan Clay telescope at Las Campanas Observatory in Chile by Emily Levesque (University of Colorado), Philip Massey (Lowell Observatory; BS '75, MS '75, Caltech), Żytkow (now at the University of Cambridge), and Nidia Morrell (Las Campanas Observatory).

We recently sat down with Thorne to ask how it feels to have astronomers discover something whose existence he postulated decades before.

When you came up with the idea of TŻOs, were you trying to explain anything that had been observed, or was it a simple "what if?" speculation?

It was totally theoretical. We weren't the first people to ask the question either. In the mid-1930s, theoretical physicist George Gamow speculated about these kinds of objects and wondered if even our sun might have a neutron star in its core. That was soon after Caltech's Fritz Zwicky conceived the idea of a neutron star. But Gamow never did anything quantitative with his speculations.

The idea of seriously pursuing what these things might look like was due to Bohdan Paczynski, a superb astrophysicist on the faculty of the University of Warsaw. In the early 1970s, he would shuttle back and forth between Caltech, where he would spend about three months a year, and Warsaw, where he stayed for nine months. He had a real advantage over everybody else during this era when people were trying to understand stellar structure and stellar evolution in depth. Nine months of the year he didn't have a computer available, so he had to think. Then during the three months he was at Caltech, he could compute.

Paczynski was the leading person in the world in understanding the late stages of the evolution of stars. He suggested to his postdoctoral student Anna Żytkow that she look into this idea of stars with neutron cores, and then Anna easily talked me into joining her on the project, and she came to Caltech for a second postdoc. I had the expertise in relativity, and she had a lot better understanding of the astrophysics of stars than I did. So it became a very enjoyable collaboration. For me it was a learning process. As one often does as a professor, I learned from working with a superb postdoc who had key knowledge and experience that I did not have.

What were the properties of TŻOs as you and Żytkow theorized them?

We didn't know in advance what they would look like, though we thought—correctly it turns out—that they would be red supergiants. Our calculations showed that if the star was heavier than about 11 suns, it would have a shell of burning material around the neutron core, a shell that would generate new elements as it burned. Convection, the circulation of hot gas inside the star, would reach right into the burning shell and carry the products of burning all the way to the surface of the star long before the burning was complete. This convection, reaching into a burning shell, was unlike anything seen in any other kind of star.

Is this how you get different elements in TŻOs than those ordinarily seen on the surface of a star?

That's right. We could see that the elements produced would be peculiar, but our calculations were not good enough to make this quantitative. In the 1990s, a graduate student of mine named Garrett Biehle (PhD '93) worked out, with considerable reliability, what the products of nuclear burning would be. He predicted unusually large amounts of rubidium and molybdenum; and a bit later Philipp Podsiadlowski, Robert Cannon, and Martin Rees at the University of Cambridge showed there would also be a lot of lithium.

It is excess rubidium, molybdenum, and lithium that Żytkow and her colleagues have found in HV 2112.

Does that mean TŻOs are fairly easy to recognize with a spectrographic analysis, which can determine the elements of a star?

No, it's not easy! TŻOs should have a unique signature, but these objects would be pretty rare.

What are the circumstances in which a TŻO would develop?

As far as we understand it, the most likely way these things form is that a neutron star cannibalizes the core of a companion star. You have a neutron star orbiting around a companion star, and they spiral together, and the neutron star takes up residence in the core of the companion. Bohdan Paczynski and Jerry Ostriker, an astrophysicist at Princeton University, speculated this would happen way back in 1975 while I was doing my original work with Żytkow, and subsequent analyses have confirmed it.

The other way a TŻO might develop is from the supernova explosion that makes the neutron star. In a supernova that creates a neutron star, matter is ejected in an asymmetric way. Occasionally these kicks resulting from the ejection of matter will drive the neutron star into the interior of the companion star, according to analyses by Peter Leonard and Jack Hills at Los Alamos, and Rachel Dewey at JPL.

Is there anything other than peculiar element abundances that would indicate a TŻO? Does it look different from other red supergiant stars?

TŻOs are the most highly luminous of red supergiant stars but not so much so that you could pick them out from the crowd: all red supergiants are very bright. I think the only way to identify them is through these element abundances.

Are you convinced that this star discovered by Żytkow and her colleagues is a TŻO?

The evidence that HV 2112 is a TŻO is strong but not ironclad. Certainly it's by far the best candidate for a TŻO that anyone has seen, but additional confirmation is needed.

How does it feel to hear that something you imagined on paper so long ago has been seen out in the universe?

It's certainly satisfying. It's an area of astrophysics that I dipped into briefly and then left. That's one of the lovely things about being a theorist: you can dip into a huge number of different areas. One of the things I've most enjoyed about my career is moving from one area to another and learning new astrophysics. Anna Żytkow deserves the lion's share of the credit for this finding. She pushed very hard on observers to get some good telescope time. It's her tenacity more than anything else that made this happen.

What are you working on now that you are retired?

I'm an executive producer of the film Interstellar, directed by Christopher Nolan and based in part on the science I've done during my Caltech career. Greater secrecy surrounds Interstellar than most any movie that's been done in Hollywood. I'm not allowed to talk about it, but let's just say that I've been spending a lot of my time on it in the last year. And I've recently finished writing a book about the science in Interstellar.

The other major project I'm wrapping up is a textbook that I've written with Roger Blandford [formerly a professor at Caltech; now on the faculty at Stanford]: Modern Classical Physics. It's based on a course that Roger or I taught every other year at Caltech from 1980 until my retirement in 2009. It covers fluid mechanics, elasticity, optics, statistical physics, plasma physics, and curved space-time—that is, everything in classical physics that any PhD physicist should be familiar with, but usually isn't. This week we delivered the manuscript to the copy editor. After 34 years of developing this monumental treatise/textbook, it's quite a relief.

I'm also working with some of my former students and postdocs on trying to understand the nonlinear dynamics of curved space-time. For this we gain insights from numerical relativity: simulations of the collisions of spinning black holes. But I've had to shelve this work for the past half year due to the pressures of the movie and books. I hope to return to it soon.

Writer: Cynthia Eller

Watching Nanoscale Fluids Flow

At the nanoscale, where objects are measured in billionths of meters and events transpire in trillionths of seconds, things do not always behave as our experiences with the macro-world might lead us to expect. Water, for example, seems to flow much faster within carbon nanotubes than classical physics says should be possible. Now imagine trying to capture movies of these almost imperceptibly small nanoscale movements.

Researchers at Caltech have now done just that by applying a new imaging technique called four-dimensional (4D) electron microscopy to the nanofluid dynamics problem. In a paper appearing in the June 27 issue of Science, Ahmed Zewail, the Linus Pauling Professor of Chemistry and professor of physics, and Ulrich Lorenz, a postdoctoral scholar in chemistry, describe how they visualized and monitored the flow of molten lead within a single zinc oxide nanotube in real time and space.

The 4D microscopy technique was developed in the Physical Biology Center for Ultrafast Science and Technology at Caltech, created and directed by Zewail to advance understanding of the fundamental physics of chemical and biological behavior. 

In 4D microscopy, a stream of ultra-fast-moving electrons bombards a sample in a carefully timed manner. Each electron scatters off the sample, producing a still image that represents a single moment, just a femtosecond—or a millionth of a billionth of a second—in duration. Millions of the still images can then be stitched together to produce a digital movie of nanoscale motion.
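To make the stitching step concrete, here is a minimal sketch of the bookkeeping involved. The frame count, delay step, and `acquire_frame` stand-in are all hypothetical; the actual acquisition hardware and parameters are far more involved.

```python
# A minimal sketch of assembling timed still frames into a nanoscale
# "movie." All numbers and the acquire_frame stand-in are hypothetical.
import numpy as np

N_FRAMES = 1000     # stills to stitch together (hypothetical)
DT_FS = 250.0       # assumed delay step between stills, in femtoseconds

def acquire_frame(delay_fs: float) -> np.ndarray:
    """Stand-in for one timed electron-pulse exposure at a given delay."""
    rng = np.random.default_rng(int(delay_fs))
    return rng.random((64, 64))  # placeholder 64x64 image

# Each still is taken at a fixed delay after the dynamics are triggered,
# so the frame index doubles as a femtosecond-scale timestamp.
delays_fs = np.arange(N_FRAMES) * DT_FS
movie = np.stack([acquire_frame(d) for d in delays_fs])  # (time, y, x)

print(movie.shape, f"movie spans {delays_fs[-1] / 1e6:.3f} nanoseconds")
```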

In the new work, Lorenz and Zewail used single laser pulses to melt the lead cores of individual zinc oxide nanotubes and then, using 4D microscopy, captured how the hot pressurized liquid moved within the tubes—sometimes splitting into multiple segments, producing tiny droplets on the outside of the tube, or causing the tubes to break. Lorenz and Zewail also measured the friction experienced by the liquid in the nanotube.

"These observations are particularly significant because visualizing the behavior of fluids at the nanoscale is essential to our understanding of how materials and biological channels effectively transport liquids," says Zewail. In 1999, Zewail won the Nobel Prize for his development of femtosecond chemistry.

The paper is titled "Observing liquid flow in nanotubes by 4D electron microscopy." The work was supported by the National Science Foundation, the Air Force Office of Scientific Research, and the Gordon and Betty Moore Foundation. Lorenz was partially supported by a fellowship from the Swiss National Science Foundation.

Writer: Kimm Fesenmaier

Caltech-Led Team Develops a Geothermometer for Methane Formation

Methane is a simple molecule consisting of just one carbon atom bound to four hydrogen atoms. But that simplicity belies the complex role the molecule plays on Earth—it is an important greenhouse gas, is chemically active in the atmosphere, is used in many ecosystems as a kind of metabolic currency, and is the main component of natural gas, which is an energy source.

Methane also poses a complex scientific challenge: it forms through a number of different biological and nonbiological processes under a wide range of conditions. For example, microbes that live in cows' stomachs make it; it forms by thermal breakdown of buried organic matter; and it is released by hot hydrothermal vents on the sea floor. And, unlike many other, more structurally complex molecules, simply knowing its chemical formula does not necessarily reveal how it formed. Therefore, it can be difficult to know where a sample of methane actually came from.

But now a team of scientists led by Caltech geochemist John M. Eiler has developed a new technique that can, for the first time, determine the temperature at which a natural methane sample formed. Since methane produced biologically in nature forms below about 80°C, and methane created through the thermal breakdown of more complex organic matter forms at higher temperatures (reaching 160°C–200°C, depending on the depth of formation), this determination can aid in figuring out how and where the gas formed.

A paper describing the new technique and its first applications as a geothermometer appears in a special section about natural gas in the current issue of the journal Science. Former Caltech graduate student Daniel A. Stolper (PhD '14) is the lead author on the paper.

"Everyone who looks at methane sees problems, sees questions, and all of these will be answered through basic understanding of its formation, its storage, its chemical pathways," says Eiler, the Robert P. Sharp Professor of Geology and professor of geochemistry at Caltech.

"The issue with many natural gas deposits is that where you find them—where you go into the ground and drill for the methane—is not where the gas was created. Many of the gases we're dealing with have moved," says Stolper. "In making these measurements of temperature, we are able to really, for the first time, say in an independent way, 'We know the temperature, and thus the environment where this methane was formed.'"

Eiler's group determines the sources and formation conditions of materials by looking at the distribution of heavy isotopes—species of atoms that have extra neutrons in their nuclei and therefore have different chemistry. For example, the most abundant form of carbon is carbon-12, which has six protons and six neutrons in its nucleus. However, about 1 percent of all carbon possesses an extra neutron, which makes carbon-13. Chemicals compete for these heavy isotopes because they slow molecular motions, making molecules more stable. But these isotopes are also very rare, so there is a chemical tug-of-war between molecules, which ends up concentrating the isotopes in the molecules that benefit most from their stabilizing effects. Similarly, the heavy isotopes like to bind, or "clump," with each other, meaning that there will be an excess of molecules containing two or more of the isotopes compared to molecules containing just one. This clumping effect is strong at low temperatures and diminishes at higher temperatures. Therefore, determining how many of the molecules in a sample contain heavy isotopes clumped together can tell you something about the temperature at which the sample formed.
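As a rough illustration of how a clumping measurement becomes a thermometer: calibrations of this kind relate the clumping excess to inverse powers of absolute temperature, so measuring the excess lets you solve for T. The sketch below assumes a made-up calibration of the form Δ = A/T²; the constant and the measured values are invented for illustration and are not the calibration used in the Science paper.

```python
# Illustrative clumped-isotope geothermometer. The A/T**2 form and the
# constant below are assumptions for illustration, not the paper's
# actual calibration.
import math

A_CALIB = 49_000.0  # hypothetical calibration constant (per mil * K^2)

def formation_temperature_c(clumping_excess_permil: float) -> float:
    """Invert excess = A / T**2 to recover formation temperature in C."""
    t_kelvin = math.sqrt(A_CALIB / clumping_excess_permil)
    return t_kelvin - 273.15

# More clumping implies a colder formation environment:
for excess in (0.50, 0.35, 0.25):
    print(f"{excess:.2f} per mil -> {formation_temperature_c(excess):.0f} C")
```

With these invented numbers, a large excess maps to roughly the biological temperature range and a small excess to the thermogenic range, mirroring the qualitative trend described above.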

Eiler's group has previously used such a "clumped isotope" technique to determine the body temperatures of dinosaurs, ground temperatures in ancient East Africa, and surface temperatures of early Mars. Those analyses looked at the clumping of carbon-13 and oxygen-18 in various minerals. In the new work, Eiler and his colleagues were able to examine the clumping of carbon-13 and deuterium (hydrogen-2).

The key enabling technology was a new mass spectrometer that the team designed in collaboration with Thermo Fisher, mixing and matching existing technologies to piece together a new platform. The prototype spectrometer, the Thermo IRMS 253 Ultra, is equipped to analyze samples in a way that measures the abundances of several rare versions, or isotopologues, of the methane molecule, including two "clumped isotope" species: 13CH3D, which has both a carbon-13 atom and a deuterium atom, and 12CH2D2, which includes two deuterium atoms.

Using the new spectrometer, the researchers first tested gases they made in the laboratory to make sure the method returned the correct formation temperatures.

They then moved on to analyze samples taken from environments where much is known about the conditions under which methane likely formed. For example, sometimes when methane forms in shale, an impermeable rock, it is trapped and stored, so that it cannot migrate from its point of origin. In such cases, detailed knowledge of the temperature history of the rock constrains the possible formation temperature of methane in that rock. Eiler and Stolper analyzed samples of methane from the Haynesville Shale, located in parts of Arkansas, Texas, and Louisiana, where the shale is not thought to have moved much after methane generation. And indeed, the clumped isotope technique returned a range of temperatures (169°C–207°C) that correspond well with current reservoir temperatures (163°C–190°C). The method was also spot-on for methane collected from gas that formed as a product of oil-eating bugs living on top of oil reserves in the Gulf of Mexico. It returned temperatures of 34°C and 48°C plus or minus 8°C for those samples, and the known temperatures of the sampling locations were 42°C and 48°C, respectively.

To further validate the new technique, the researchers next looked at methane from the Marcellus Shale, a formation beneath much of the Appalachian basin, where the gas-trapping rock is known to have formed at high temperature before being uplifted into a cooler environment. The scientists wanted to be sure that the methane did not reset to the colder temperature after formation. Using their clumped isotope technique, the researchers verified this, returning a high formation temperature.

"It must be that once the methane exists and is stable, it's a fossil remnant of what its formation environment was like," Eiler says. "It only remembers where it formed."

An important application of the technique is suggested by the group's measurements of methane from the Antrim Shale in Michigan, where groundwater contains both biologically and thermally produced methane. Clumped isotope temperatures returned for samples from the area clearly revealed the different origins of the gases, hitting about 40°C for a biologically produced sample and about 115°C for a sample involving a mix of biologically and thermally produced methane.

"There are many cases where it is unclear whether methane in a sample of groundwater is the product of subsurface biological communities or has leaked from petroleum-forming systems," says Eiler. "Our results from the Antrim Shale indicate that this clumped isotope technique will be useful for distinguishing between these possible sources."

One final example, from the Potiguar Basin in Brazil, demonstrates another way the new method will serve geologists. In this case the methane was dissolved in oil and had been free to migrate from its original location. The researchers initially thought there was a problem with their analysis because the temperature they returned was much higher than the known temperature of the oil. However, recent evidence from drill core rocks from the region shows that the deepest parts of the system actually got very hot millions of years ago. This has led to a new interpretation suggesting that the methane gas originated deep in the system at high temperatures and then percolated up and mixed into the oil.

"This shows that our new technique is not just a geothermometer for methane formation," says Stolper. "It's also something you can use to think about the geology of the system."

The paper is titled "Formation temperatures of thermogenic and biogenic methane." Along with Eiler and Stolper, additional coauthors are Alex L. Sessions, professor of geobiology at Caltech; Michael Lawson and Cara L. Davis of ExxonMobil Upstream Research Company; Alexandre A. Ferreira and Eugenio V. Santos Neto of Petrobas Research and Development Center; Geoffrey S. Ellis and Michael D. Lewan of the U.S. Geological Survey in Denver; Anna M. Martini of Amherst College; Yongchun Tang of the Power, Environmental, and Energy Research Institute in Covina, California; and Martin Schoell of GasConsult International Inc. in Berkeley, California. The work was supported by the National Science Foundation, Petrobras, and ExxonMobil.

Writer: Kimm Fesenmaier

Growing Unknown Microbes One by One

A new technique developed at Caltech helps grow individual species of the unknown microbes that live in the human body.

Trillions of bacteria live in and on the human body; a few species can make us sick, but many others keep us healthy by boosting digestion and preventing inflammation. Although there's plenty of evidence that these microbes play a collective role in human health, we still know very little about most of the individual bacterial species that make up these communities. Using a specially designed glass chip with tiny compartments, Caltech researchers now provide a way to target and grow specific microbes from the human gut—a key step in understanding which bacteria are helpful to human health and which are harmful.

The work was published the week of June 23 in the Proceedings of the National Academy of Sciences.

Although a few bacterial species are easy to grow in the laboratory, needing only a warm environment and plenty of food to multiply, most species that grow in and on the human body have never been successfully grown in lab conditions. It's difficult to recreate the complexity of the microbiome—the entire human microbial community—in one small plate (a lidded dish with nutrients used to grow microbes), says Rustem Ismagilov, Ethel Wilson Bowles and Robert Bowles Professor of Chemistry and Chemical Engineering at Caltech.

There are thousands of species of microbes in one sample from the human gut, Ismagilov says, "but when you grow them all together in the lab, the faster-growing bacteria will take over the plate and the slow-growing ones don't have a chance—leading to very little diversity in the grown sample." Finding slow-growing microbes of interest is like finding a needle in a haystack, he says, but his group wanted to work out a way to "just grow the needle without growing the hay."

To do this, Liang Ma, a postdoctoral scholar in Ismagilov's lab, developed a way to isolate and cultivate individual bacterial species of interest. He and his colleagues began by looking for bacterial species that contained a set of specific genetic sequences. The targeted gene sequences belong to organisms on the list of "Most Wanted" microbes—a list developed by the National Institutes of Health (NIH) Human Microbiome Project. The microbes carrying these genetic sequences are found abundantly in and on the human body, but have been difficult to grow in the lab.

To grow these elusive microbes, the Caltech researchers turned to SlipChip, a microfluidic device previously developed in Ismagilov's lab. SlipChip is made up of two glass slides, each the size of a credit card, that have tiny etched grooves which become channels when the grooved surfaces are stacked atop one another. When a sample—say, a jumbled-up assortment of bacteria species collected from a colonoscopy biopsy—is added to the interconnected channels of the SlipChip, a single "slip" of the top chip will turn the channels into individual wells, with each well ideally holding a single microbe. Once sequestered in an isolated well, each individual bacterium can divide and grow without having to compete for resources with other types of faster-growing microbes.
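Whether a well really holds a single microbe is a statistics question: cells randomly distributed into wells follow a Poisson distribution, so diluting the sample sets the odds. Here is a minimal sketch; the 0.3 cells-per-well average is an assumed value for illustration, not a density reported in the study.

```python
# Poisson statistics of loading cells into wells. The 0.3 cells/well
# average is an assumed value for illustration.
import math

def p_exactly_k(mean_cells_per_well: float, k: int) -> float:
    """Poisson probability that a well receives exactly k cells."""
    lam = mean_cells_per_well
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 0.3  # hypothetical average occupancy after dilution
p0 = p_exactly_k(lam, 0)
p1 = p_exactly_k(lam, 1)
print(f"empty wells:       {p0:.2f}")          # ~0.74
print(f"single-cell wells: {p1:.2f}")          # ~0.22
print(f"multi-cell wells:  {1 - p0 - p1:.2f}")  # ~0.04
```

Diluting further makes multi-cell wells rarer still, at the cost of more empty wells—which is why isolating slow growers one per compartment is workable even in a jumbled sample.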

The researchers then needed to determine which compartment of the SlipChip contained a colony of the target bacterium—which is not a simple task, says Ismagilov. "It's a Catch-22—you have to kill the organism in order to find its DNA sequence and figure out what it is, but you want a live organism at the end of the day, so that you can grow and study this new microbe," he says. "Liang solves this in a really clever way; he grows a compartment full of his target microbe in the SlipChip, then he splits the compartment in half. One half contains the live organism and the other half is sacrificed for its DNA to confirm that the sequence is that of the target microbe."

The method of creating two halves in each well in the SlipChip will be published separately in an upcoming issue of the journal Integrative Biology.

To validate the new methodology, the researchers isolated one specific bacterium from the Human Microbiome Project's "Most Wanted" list. The investigators used the SlipChip to grow this bacterium in a tiny volume of the washing fluid that was used to collect the gut bacteria sample from a volunteer. Since bacteria often depend on nutrients and signals from the extracellular environment to support growth, the substances from this fluid were used to recreate this environment within the tiny SlipChip compartment—a key to successfully growing the difficult organism in the lab.

After growing a pure culture of the previously unidentified bacterium, Ismagilov and his colleagues obtained enough genetic material to sequence a high-quality draft genome of the organism. Although a genomic sequence of the new organism is a useful tool, further studies are needed to learn how this species of microbe is involved in human health, Ismagilov says.

In the future, the new SlipChip technique may be used to isolate additional previously uncultured microbes, allowing researchers to focus their efforts on important targets, such as those that may be relevant to energy applications and the production of probiotics. The technique, says Ismagilov, allows researchers to target specific microbes in a way that was not previously possible.

The paper is titled "Gene-targeted microfluidic cultivation validated by isolation of a gut bacterium listed in Human Microbiome Project's Most Wanted taxa." In addition to Liang and Ismagilov, other coauthors include, from Caltech, associate scientist Mikhail A. Karymov, graduate student Jungwoo Kim, and postdoctoral scholar Roland Hatzenpichler, and, from the University of Chicago department of medicine, Nathanial Hubert, Ira M. Hanan, and Eugene B. Chang. The work was funded by NIH's National Human Genome Research Institute. Microfluidic technologies developed by Ismagilov's group have been licensed to Emerald BioStructures, RanDance Technologies, and SlipChip Corporation, of which Ismagilov is a cofounder.


Earth-Building Bridgmanite

Our planet's most abundant mineral now has a name

Deep below the earth's surface lies a thick, rocky layer called the mantle, which makes up the majority of our planet's volume. For decades, scientists have known that most of the lower mantle is a silicate mineral with a perovskite structure that is stable under the high-pressure and high-temperature conditions found in this region. Although synthetic examples of this composition have been well studied, no naturally occurring samples had ever been found in a rock on the earth's surface. Thanks to the work of two scientists, naturally occurring silicate perovskite has been found in a meteorite, making it eligible for a formal mineral name.

The mineral, dubbed bridgmanite, is named in honor of Percy Bridgman, a physicist who won the 1946 Nobel Prize in Physics for his fundamental contributions to high-pressure physics.

"The most abundant mineral of the earth now has an official name," says Chi Ma, a mineralogist and director of the Geological and Planetary Sciences division's Analytical Facility at Caltech.

"This finding fills a vexing gap in the taxonomy of minerals," adds Oliver Tschauner, an associate research professor at the University of Nevada-Las Vegas who identified the mineral together with Ma.

High-pressure and temperature experiments, as well as seismic data, strongly suggest that (Mg,Fe)SiO3-perovskite—now simply called bridgmanite—is the dominant material in the lower mantle. But since it is impossible to get to the earth's lower mantle, located some 400 miles deep within the planet, and rocks brought to the earth's surface from the lower mantle are exceedingly rare, naturally occurring examples of this material had never been fully described.

That is until Ma and Tschauner began poking around a sample from the Tenham meteorite, a space rock that fell in Australia in 1879.

Because the 4.5 billion-year-old meteorite had survived high-energy collisions with asteroids in space, parts of it were believed to have experienced the high-pressure conditions we see in the earth's mantle. That, scientists thought, made it a good candidate for containing bridgmanite.

Tschauner used synchrotron X-ray diffraction mapping to find indications of the mineral in the meteorite. Ma then examined the mineral and its surroundings with a high-resolution scanning electron microscope and determined the composition of the tiny bridgmanite crystals using an electron microprobe. Next, Tschauner analyzed the crystal structure by synchrotron diffraction. After five years and multiple experiments, the two were finally able to gather enough data to reveal bridgmanite's chemical composition and crystal structure.

"It is a really cool discovery," says Ma. "Our finding of natural bridgmanite not only provides new information on shock conditions and impact processes on small bodies in the solar system, but the tiny bridgmanite found in a meteorite could also help investigations of phase transformation mechanisms in the deep Earth. "

The mineral and the mineral name were approved on June 2 by the International Mineralogical Association's Commission on New Minerals, Nomenclature and Classification. 

The researchers' findings are published in the November 28 issue of Science, in an article titled "Discovery of Bridgmanite, the Most Abundant Mineral in Earth, In a Shocked Meteorite."

Writer: Katie Neith

Surprising Results from Game Theory Studies

If you're trying to outwit the competition, it might be better to have been born a chimpanzee, according to a study by researchers at Caltech, which found that chimps at the Kyoto University Primate Research Institute consistently outperform humans in simple contests drawn from game theory.

The study, led by Colin Camerer, Robert Kirby Professor of Behavioral Economics, and appearing on June 5 in the online publication Scientific Reports, involved a simple game of hide-and-seek that researchers call the Inspection Game. In the game, two players (either a pair of chimps or a pair of humans) are set up back to back, each facing a computer screen. To start the game, each player pushes a circle on the monitor and then selects one of two blue boxes on the left or right side of the screen. After both players have chosen left or right, the computer shows each player her opponent's choice. This continues through 200 iterations per game. The goal of the players in the "hiding" role—the "mismatchers"—is to choose the opposite of their opponent's selection. Players in the "seeking" role—the "matchers"—win if they make the same choices as their opponent. Winning players receive a reward: a chunk of apple for the chimps or a small coin for the humans. If players are to win repeatedly, they have to accurately predict what their opponent will do next, anticipating their strategy.

The game, though simple, replicates a situation that is common in the everyday lives of both chimps and humans. Study coauthor Peter Bossaerts, a visiting associate in finance at Caltech, gives an example from human life: an employee who wants to work only when her employer is watching and prefers to play video games when unobserved. To better conceal her secret video game obsession, the employee must learn the patterns of the employer's behavior—when they might or might not be around to check up on the worker. Employers who suspect their employees are up to no good, however, need to be unpredictable, popping in randomly to see what the staff is doing on company time.

The Inspection Game not only models such situations, it also provides methods to quantify behavioral choices. "The nice thing about the game theory used in this study is that it allows you to boil down all of these situations to their strategic essence," explains Caltech graduate student and coauthor Rahul Bhui.

However cleverly you play the Inspection Game, if your opponent is also playing strategically, there is a limit to how often you can win. That limit, many game theorists agree, is best described by the Nash equilibrium, named for mathematician John Forbes Nash Jr., winner of the 1994 Nobel Memorial Prize in Economic Sciences, whose life and career provided the inspiration for the Academy Award–winning 2001 film A Beautiful Mind.

In the first part of this study, coauthors Chris Martin and Tetsuro Matsuzawa compared the game play of six common chimpanzees (Pan troglodytes) and 16 Japanese students, always facing off against their own species, in the Kyoto research facility. The humans behaved as expected based on previous experiments; that is, they played reasonably well, slowly learning to predict opponent choices, but they did not play optimally. They ended up somewhat off the Nash equilibrium.

The performance of the chimps was far more impressive: they learned the game rapidly and nearly attained the predictions of the Nash theorem for optimal play. They continued to do so even as researchers introduced changes into the game, first by having players switch roles—matchers (seekers) becoming mismatchers (hiders), and vice versa—and then by adjusting the payoffs such that matchers received greater rewards when matching on one side of the screen (left or right) rather than the other. This latter adjustment changes the Nash equilibrium for the game, and the chimps changed right along with it.
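For readers who want the game theory spelled out: in a two-choice Inspection Game, each player's equilibrium mixture is pinned down by the opponent's payoffs through an indifference condition. The payoff numbers below are invented for illustration; they are not the stakes used in the study.

```python
# Mixed-strategy Nash equilibrium of a two-choice Inspection Game.
# Payoff structure (illustrative): the matcher earns m_left for matching
# on Left and m_right for matching on Right; the mismatcher earns 1 for
# any mismatch. All other outcomes pay 0.

def equilibrium(m_left: float, m_right: float):
    # Each player randomizes so the *opponent* is indifferent between
    # Left and Right. The mismatcher plays Left with probability q such
    # that the matcher's expected payoffs are equal:
    #   q * m_left = (1 - q) * m_right
    q_left = m_right / (m_left + m_right)
    # The mismatcher's own payoffs are symmetric (1 for any mismatch),
    # so the matcher keeps the mismatcher indifferent by mixing 50/50.
    p_left = 0.5
    return p_left, q_left

print(equilibrium(1, 1))  # symmetric payoffs: mismatcher mixes 50/50
print(equilibrium(2, 1))  # doubling the Left match reward pushes the
                          # mismatcher toward Right (q = 1/3) -- the kind
                          # of equilibrium shift the chimps tracked
```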

In a second phase of the experiment in Bossou, Guinea, 12 adult men were asked to face one another in pairs. Instead of touching dots on a computer screen on the left or right, the men in Bossou each had a bottle cap that they placed top up or top down. As in the Kyoto experiments, one player in each pair was a mismatcher (hider) and the other was a matcher (seeker). However, the stakes were much higher in Bossou, amounting to about one full day's earnings for the winner, as opposed to the rewards for the Japanese students, who received a handful of one yen coins. Still, the players in Bossou did not match chimpanzee performance, landing as far off the Nash equilibrium as the Japanese students did.

A couple of simple explanations could account for the ability of these chimpanzees to outperform humans in the game. First, these particular chimps had more extensive training at this kind of task as well as more experience with the equipment used at the Research Institute than the human subjects did. Second, the chimps in Kyoto were related to one another—they played in mother-child pairs—and thus may have had intimate knowledge, borne of long acquaintance, of the sequence of choices their opponents would probably make.

Neither explanation seems likely, researchers say. Although the Japanese students may not have had experience with the type of touch screens employed in the Kyoto facility, they certainly had encountered video games and touch screens prior to the experiment. Meanwhile, the players in Bossou knew each other very well prior to the experiments and had the additional advantage of seeing one another while they played, yet they performed no better than the Japanese students.

Superior chimpanzee performance could be due to excellent short-term memory, a particular strength in chimps. This has been shown in other experiments undertaken at the Kyoto facility. In one game, a sequence of numbers is briefly flashed on the computer touch screen, and then the numbers quickly revert to white squares. Players must tap the squares in the sequence corresponding to the numbers they were initially shown. Chimpanzees are brilliant at this task, as video from the Primate Research Institute's experiments shows; humans find it much more challenging.

But before we join a species-specific pity party over our inferior brains, rest assured that researchers offer other explanations for chimpanzee superiority at the Inspection Game. Two explanations currently seem plausible: the first has to do with the roles of competition and cooperation in chimpanzee versus human societies; the second with the differential evolution of human and chimpanzee brains since our evolutionary paths split between 4 and 5 million years ago.

The past half-century has seen an enormous divergence of opinion as to how cooperative or competitive humans "naturally" are, and though this debate is far from settled, it is clear that wherever humans sit on the cooperative/competitive scale, common chimpanzees are more competitive with one another than we are. They create and continuously update a strong status and dominance hierarchy. (Another type of chimpanzee, Pan paniscus, or the bonobo, is considerably more cooperative than Pan troglodytes, but the former has not been studied as extensively as the latter.) Humans, in contrast, are highly prosocial and cooperative. Camerer notes that this difference is apparent in chimp and human social development. "While young chimpanzees hone their competitive skills with constant practice, playing hide-and-seek and wrestling," says Camerer, "their human counterparts shift at a young age from competition to cooperation using our special skill at language."

Language is probably a key factor here. In the Inspection Game experiments, humans were not allowed to speak with one another, despite language being "key to human strategic interaction," according to Martin.

Language is also implicated in the "cognitive tradeoff hypothesis," the second explanation for the chimps' superior performance in the Inspection Game. According to this hypothesis, developed by Matsuzawa, the brain growth and specialization that led to distinctly human cognitive capacities such as language and categorization also caused us to process certain simpler competitive situations—like the Inspection Game—more abstractly and less automatically than our chimpanzee cousins.

These explanations remain speculative, but eventually, Bhui predicts, new technologies will make it possible to "map out the set of brain circuits that humans and chimps rely upon so we can discover whether or not human strategic choices go down a longer pathway or get diffused into different parts of the brain compared to chimps."

Funding for this experiment, described in a paper entitled "Chimpanzee choice rates in competitive games match equilibrium predictions," was provided by the Ministry of Education, Culture, Sports, Science and Technology in Japan; the Gordon and Betty Moore Foundation; the Social Sciences and Humanities Research Council of Canada; and Caltech's Division of the Humanities and Social Sciences.

Writer: Cynthia Eller

JCAP Stabilizes Common Semiconductors For Solar Fuels Generation

Caltech researchers devise a method to protect the materials in a solar-fuel generator

Researchers around the world are trying to develop solar-driven generators that can split water, yielding hydrogen gas that could be used as clean fuel. Such a device requires efficient light-absorbing materials that capture sunlight to drive the chemical reactions involved in water splitting. Semiconductors like silicon and gallium arsenide are excellent light absorbers—as is clear from their widespread use in solar panels. However, these materials corrode when submerged in the types of water solutions found in such systems.

Now Caltech researchers at the Joint Center for Artificial Photosynthesis (JCAP) have devised a method for protecting these common semiconductors from corrosion even as the materials continue to absorb light efficiently. The finding paves the way for the use of these materials in solar-fuel generators.

"For the better part of a half century, these materials have been considered off the table for this kind of use," says Nate Lewis, the George L. Argyros Professor and professor of chemistry at Caltech, and the principal investigator on the paper. "But we didn't give up on developing schemes by which we could protect them, and now these technologically important semiconductors are back on the table."

The research, led by Shu Hu, a postdoctoral scholar in chemistry at Caltech, appears in the May 30 issue of the journal Science.

In the type of integrated solar-fuel generator that JCAP is striving to produce, two half-reactions must take place—one involving the oxidation of water to produce oxygen gas; the other involving the reduction of water, yielding hydrogen gas. Each half-reaction requires both a light-absorbing material to serve as the photoelectrode and a catalyst to drive the chemistry. In addition, the two reactions must be physically separated by a barrier to avoid producing an explosive mixture of their products.

Historically, it has been particularly difficult to come up with a light-absorbing material that will robustly carry out the oxidation half-reaction. Researchers have tried, without much success, a variety of materials and numerous techniques for coating the common light-absorbing semiconductors. The problem has been that if the protective layer is too thin, the aqueous solution penetrates through and corrodes the semiconductor. If, on the other hand, the layer is too thick, it prevents corrosion but also blocks the semiconductor from absorbing light and keeps electrons from passing through to reach the catalyst that drives the reaction.

At Caltech, the researchers used a process called atomic layer deposition to form a layer of titanium dioxide (TiO2)—a material found in white paint and many toothpastes and sunscreens—on single crystals of silicon, gallium arsenide, or gallium phosphide. The key was that they used a form of TiO2 known as "leaky TiO2"—because it leaks electricity. First made in the 1990s as a material that might be useful for building computer chips, leaky oxides were rejected as undesirable because of their charge-leaking behavior. However, leaky TiO2 seems to be just what was needed for this solar-fuel generator application. Deposited as a film, ranging in thickness between 4 and 143 nanometers, the TiO2 remained optically transparent on the semiconductor crystals—allowing them to absorb light—and protected them from corrosion but allowed electrons to pass through with minimal resistance.

On top of the TiO2, the researchers deposited 100-nanometer-thick "islands" of an abundant, inexpensive nickel oxide material that successfully catalyzed the oxidation of water to form molecular oxygen.

The work appears to now make a slew of choices available as possible light-absorbing materials for the oxidation side of the water-splitting equation. However, the researchers emphasize, it is not yet known whether the protective coating would work as well if applied using an inexpensive, less-controlled application technique, such as painting or spraying the TiO2 onto a semiconductor. Also, thus far, the Caltech team has only tested the coated semiconductors for a few hundred hours of continuous illumination.

"This is already a record in terms of both efficiency and stability for this field, but we don't yet know whether the system fails over the long term and are trying to ensure that we make something that will last for years over large areas, as opposed to weeks," says Lewis. "That's the next step."

The work, titled "Amorphous TiO2 Coatings Stabilize Si, GaAs, and GaP Photoanodes for Efficient Water Oxidation," was supported by the Office of Science of the U.S. Department of Energy through an award to JCAP, a DOE Energy Innovation Hub. Some of the work was also supported by the Resnick Sustainability Institute and the Beckman Institute at Caltech. Additional coauthors on the paper are graduate students Matthew Shaner, Joseph Beardslee, and Michael Lichterman, as well as Bruce S. Brunschwig, director of the Molecular Materials Resource Center at Caltech.

Writer: Kimm Fesenmaier

Miniature Truss Work

Fancy Erector Set? Nope. This elaborate fractal structure is many, many times smaller than that and is certainly not child's play. It is the latest example of what Julia Greer, professor of materials science and mechanics, calls a fractal nanotruss—nano because the structures are made up of members that are as thin as five nanometers (five billionths of a meter); truss because they are carefully architected structures that might one day be used in structural engineering materials.

Greer's group has developed a three-step process for building such complex structures very precisely. They first use a direct laser writing method called two-photon lithography to "write" a three-dimensional pattern in a polymer, allowing a laser beam to crosslink and harden the polymer wherever it is focused. At the end of the patterning step, the parts of the polymer that were exposed to the laser remain intact while the rest is dissolved away, revealing a three-dimensional scaffold. Next, the scientists coat the polymer scaffold with a continuous, very thin layer of a material—it can be a ceramic, metal, metallic glass, semiconductor, "just about anything," Greer says. In this case, they used alumina, or aluminum oxide, which is a brittle ceramic, to coat the scaffold. In the final step they etch out the polymer from within the structure, leaving a hollow architecture.

Taking advantage of some of the size effects that many materials display at the nanoscale, these nanotrusses can have unusual, desirable qualities. For example, intrinsically brittle materials, like ceramics, including the alumina shown, can be made deformable so that they can be crushed and still rebound to their original state without global failure.

"Having full control over the architecture gives us the ability to tune material properties to what was previously unattainable with conventional monolithic materials or with foams," says Greer. "For example, we can decouple strength from density and make materials that are both strong (and tough) as well as extremely lightweight. These structures can contain nearly 99 percent air yet can also be as strong as steel. Designing them into fractals allows us to incorporate hierarchical design into material architecture, which promises to have further beneficial properties."

The members of Greer's group who helped develop the new fabrication process and created these nanotrusses are graduate students Lucas Meza and Lauren Montemayor and Nigel Clarke, an undergraduate intern from the University of Waterloo.

Writer: Kimm Fesenmaier

Supernova Caught in the Act by Palomar Transient Factory

Supernovae—stellar explosions—are incredibly energetic, dynamic events. It is easy to imagine that they are uncommon, but the universe is a big place and supernovae are actually fairly routine. The problem with observing supernovae is knowing just when and where one is occurring and being able to point a world-class telescope at it in the hours immediately afterward, when precious data about the supernova's progenitor star are available. Fortunately, the intermediate Palomar Transient Factory (iPTF), operated by Caltech, scans the sky constantly in search of dramatic astrophysical events. In 2013, it caught a star in the act of exploding.

The iPTF is a robotic observing system mounted on the 48-inch Samuel Oschin Telescope on Palomar Mountain. It has been scanning the sky since February 2013. The iPTF (and its predecessor experiment, the Palomar Transient Factory [PTF], which operated between 2009 and 2012) regularly observes a wide swath of the night sky looking for astronomical objects that are moving and developing quickly, such as comets, asteroids, gamma-ray bursts, and supernovae. Both the earlier PTF and the current iPTF collaborations are led by Shrinivas Kulkarni, the John D. and Catherine T. MacArthur Professor of Astronomy and Planetary Science and director of the Caltech Optical Observatories.

Last year the iPTF discovered an object of special interest: a supernova with a spectral signature suggesting that its progenitor star was a Wolf-Rayet star. Massive stars are typically structured like an onion, with the heaviest elements in the core, while lighter elements are layered over them and then frosted, if you will, by a layer of hydrogen gas on the stellar surface. Wolf-Rayet stars, which are unusually large and hot, are exceptions to this rule, being relatively deficient in hydrogen and characterized by strong stellar winds. Astronomers have long wondered if Wolf-Rayet stars are the progenitors of certain types of supernovae, and according to a recent paper published in Nature this is just what the iPTF found in May 2013.

This supernova, SN2013cu, was picked up on a routine sky scan by the iPTF. The on-duty iPTF team member in Israel promptly sounded an alert, asking colleagues at the W. M. Keck Observatory on Mauna Kea to take a spectral image of the supernova before the sun rose in Hawaii.

When supernovae explode, they briefly ionize the gas immediately around them. The ionized material rapidly recombines, producing unique spectral features that enable astronomers to get a full picture of the ambient material of a supernova event. This process lasts from minutes to a few days, and the resulting measurement is hence called a "flash spectrum" of the event. Flash spectroscopy is a novel observational method developed by Avishay Gal-Yam of the Weizmann Institute of Science in Israel, leader of the team that published the Nature paper.

In the case of SN2013cu, the flash spectrum showed relatively less hydrogen and relatively more nitrogen, suggesting that perhaps the progenitor of the supernova was a nitrogen-rich Wolf-Rayet star. This finding will enable astronomers to better understand the evolution of massive stars and identify potential progenitors of supernovae.

"I could not believe my eyes when I saw those high-ionization features perfectly matching emission lines from a Wolf-Rayet star," says Yi Cao, a graduate student from Caltech who works with Kulkarni. "Our software pipeline efforts were paying off. Now we are working even harder so that we can get flash spectra of many more supernova flavors to probe their progenitor stars."

Above all, the observation of SN2013cu highlights the success of the intermediate Palomar Transient Factory at catching the universe in the act of doing something interesting, something that might merit a second look. Though especially intriguing, SN2013cu is only one of over 2,000 supernovae that PTF/iPTF has detected during its four and a half years of observations. As Kulkarni remarks, "I am proud of how the global iPTF network is working together to invent new techniques enabling entirely new science."

The iPTF is a collaboration between Caltech, Los Alamos National Laboratory, the University of Wisconsin–Milwaukee, the Oskar Klein Centre, the Weizmann Institute of Science, the TANGO Program of the University System of Taiwan, and the Kavli Institute for the Physics and Mathematics of the Universe.

Coauthors on the paper, "A Wolf-Rayet-like progenitor of supernova SN 2013cu from spectral observations of a wind," include Kulkarni, Cao, Mansi Kasliwal, Daniel Perley, and Assaf Horesh of Caltech; Gal-Yam, I. Arcavi, E. O. Ofek, S. Ben-Ami, A. De Cia, D. Tal, P. M. Vreeswijk, and O. Yaron of the Weizmann Institute of Science; S. B. Cenko of NASA's Goddard Space Flight Center; J. C. Wheeler and J. M. Silverman of the University of Texas at Austin; F. Taddia and J. Sollerman of Stockholm University; P. E. Nugent of the Lawrence Berkeley National Laboratory; and A. V. Filippenko of UC Berkeley.

Writer: Cynthia Eller

Tricking the Uncertainty Principle

Caltech researchers have found a way to make measurements that go beyond the limits imposed by quantum physics.

Today, we are capable of measuring the position of an object with unprecedented accuracy, but quantum physics and the Heisenberg uncertainty principle place fundamental limits on our ability to measure. Noise that arises as a result of the quantum nature of the fields used to make those measurements imposes what is called the "standard quantum limit." This same limit influences both the ultrasensitive measurements in nanoscale devices and the kilometer-scale gravitational wave detector at LIGO. Because of this troublesome background noise, we can never know an object's exact location, but a recent study provides a solution for rerouting some of that noise away from the measurement.

The findings were published online May 15 in Science Express.

"If you want to know where something is, you have to scatter something off of it," explains Professor of Applied Physics Keith Schwab, who led the study. "For example, if you shine light at an object, the photons that scatter off provide information about the object. But the photons don't all hit and scatter at the same time, and the random pattern of scattering creates quantum fluctuations"—that is, noise. "If you shine more light, you have increased sensitivity, but you also have more noise. Here we were looking for a way to beat the uncertainty principle—to increase sensitivity but not noise."

Schwab and his colleagues began by developing a way to actually detect the noise produced during the scattering of microwaves—electromagnetic radiation that has a wavelength longer than that of visible light. To do this, they delivered microwaves of a specific frequency to a superconducting electronic circuit, or resonator, that vibrates at 5 gigahertz—or 5 billion times per second. The electronic circuit was then coupled to a mechanical device formed of two metal plates that vibrate at around 4 megahertz—or 4 million times per second. The researchers observed that the quantum noise of the microwave field, due to the impact of individual photons, made the mechanical device shake randomly with an amplitude of 10^-15 meters, about the diameter of a proton.

"Our mechanical device is a tiny square of aluminum—only 40 microns long, or about the diameter of a hair. We think of quantum mechanics as a good description for the behaviors of atoms and electrons and protons and all of that, but normally you don't think of these sorts of quantum effects manifesting themselves on somewhat macroscopic objects," Schwab says. "This is a physical manifestation of the uncertainty principle, seen in single photons impacting a somewhat macroscopic thing."

Once the researchers had a reliable mechanism for detecting the forces generated by the quantum fluctuations of microwaves on a macroscopic object, they could modify their electronic resonator, mechanical device, and mathematical approach to exclude the noise of the position and motion of the vibrating metal plates from their measurement.

The experiment shows that a) the noise is present and can be picked up by a detector, and b) it can be pushed to someplace that won't affect the measurement. "It's a way of tricking the uncertainty principle so that you can dial up the sensitivity of a detector without increasing the noise," Schwab says.

Although this experiment is mostly a fundamental exploration of the quantum nature of microwaves in mechanical devices, Schwab says that this line of research could one day lead to the observation of quantum mechanical effects in much larger mechanical structures. And that, he notes, could allow the demonstration of strange quantum mechanical properties like superposition and entanglement in large objects—for example, allowing a macroscopic object to exist in two places at once.

"Subatomic particles act in quantum ways—they have a wave-like nature—and so can atoms, and so can whole molecules since they're collections of atoms," Schwab says. "So the question then is: Can you make bigger and bigger objects behave in these weird wave-like ways? Why not? Right now we're just trying to figure out where the boundary of quantum physics is, but you never know."

This work was published in an article titled "Mechanically Detecting and Avoiding the Quantum Fluctuations of a Microwave Field." Other Caltech coauthors include senior researcher Junho Suh; graduate students Aaron J. Weinstein, Chan U. Lei, and Emma E. Wollman; and Steven K. Steinke, visitor in applied physics and materials science. The work was funded by the Institute for Quantum Information and Matter, the Defense Advanced Research Projects Agency, and the National Science Foundation. The device was fabricated in Caltech's Kavli Nanoscience Institute, of which Schwab is a codirector.

