New Research Suggests Solar System May Have Once Harbored Super-Earths

Caltech and UC Santa Cruz Researchers Say Earth Belongs to a Second Generation of Planets

Long before Mercury, Venus, Earth, and Mars formed, it seems that the inner solar system may have harbored a number of super-Earths—planets larger than Earth but smaller than Neptune. If so, those planets are long gone—broken up and sent falling into the sun billions of years ago, largely as a result of a great inward-and-then-outward journey that Jupiter made early in the solar system's history.

This possible scenario has been suggested by Konstantin Batygin, a Caltech planetary scientist, and Gregory Laughlin of UC Santa Cruz in a paper that appears the week of March 23 in the online edition of the Proceedings of the National Academy of Sciences (PNAS). The results of their calculations and simulations suggest the possibility of a new picture of the early solar system that would help to answer a number of outstanding questions about the current makeup of the solar system and of Earth itself. For example, the new work addresses why the terrestrial planets in our solar system have such relatively low masses compared to the planets orbiting other sun-like stars.

"Our work suggests that Jupiter's inward-outward migration could have destroyed a first generation of planets and set the stage for the formation of the mass-depleted terrestrial planets that our solar system has today," says Batygin, an assistant professor of planetary science. "All of this fits beautifully with other recent developments in understanding how the solar system evolved, while filling in some gaps."

Thanks to recent surveys of exoplanets—planets in solar systems other than our own—we know that about half of sun-like stars in our galactic neighborhood have orbiting planets. Yet those systems look nothing like our own. In our solar system, very little lies within Mercury's orbit; there is only a little debris—probably near-Earth asteroids that moved farther inward—but certainly no planets. That is in sharp contrast with what astronomers see in most planetary systems. These systems typically have one or more planets that are substantially more massive than Earth orbiting closer to their suns than Mercury does, but very few objects at distances beyond.

"Indeed, it appears that the solar system today is not the common representative of the galactic planetary census. Instead we are something of an outlier," says Batygin. "But there is no reason to think that the dominant mode of planet formation throughout the galaxy should not have occurred here. It is more likely that subsequent changes have altered its original makeup."

According to Batygin and Laughlin, Jupiter is critical to understanding how the solar system came to be the way it is today. Their model incorporates something known as the Grand Tack scenario, which was first proposed in 2001 by a group at Queen Mary University of London and subsequently revisited in 2011 by a team at the Nice Observatory. That scenario says that during the first few million years of the solar system's lifetime, when planetary bodies were still embedded in a disk of gas and dust around a relatively young sun, Jupiter became so massive and gravitationally influential that it was able to clear a gap in the disk. And as the sun pulled the disk's gas in toward itself, Jupiter also began drifting inward, as though carried on a giant conveyor belt.

"Jupiter would have continued on that belt, eventually being dumped onto the sun if not for Saturn," explains Batygin. Saturn formed after Jupiter but got pulled toward the sun at a faster rate, allowing it to catch up. Once the two massive planets got close enough, they locked into a special kind of relationship called an orbital resonance, where their orbital periods were rational—that is, expressible as a ratio of whole numbers. In a 2:1 orbital resonance, for example, Saturn would complete two orbits around the sun in the same amount of time that it took Jupiter to make a single orbit. In such a relationship, the two bodies would begin to exert a gravitational influence on one another.

"That resonance allowed the two planets to open up a mutual gap in the disk, and they started playing this game where they traded angular momentum and energy with one another, almost to a beat," says Batygin. Eventually, that back and forth would have caused all of the gas between the two worlds to be pushed out, a situation that would have reversed the planets' migration direction and sent them back outward in the solar system. (Hence, the "tack" part of the Grand Tack scenario: the planets migrate inward and then change course dramatically, something like a boat tacking around a buoy.)

In an earlier model developed by Bradley Hansen at UCLA, the terrestrial planets conveniently end up in their current orbits with their current masses under a particular set of circumstances—one in which all of the inner solar system's planetary building blocks, or planetesimals, happen to populate a narrow ring stretching from 0.7 to 1 astronomical unit (1 astronomical unit is the average distance from the sun to Earth), 10 million years after the sun's formation. According to the Grand Tack scenario, the outer edge of that ring would have been delineated by Jupiter as it moved toward the sun on its conveyor belt and cleared a gap in the disk all the way to Earth's current orbit.

But what about the inner edge? Why should the planetesimals be limited to the ring on the inside? "That point had not been addressed," says Batygin.

He says the answer could lie in primordial super-Earths. The now-empty region of the inner solar system corresponds almost exactly to the orbital neighborhood where super-Earths are typically found around other stars. It is therefore reasonable to speculate that this region was cleared out in the primordial solar system by a group of first-generation planets that did not survive.

Batygin and Laughlin's calculations and simulations show that as Jupiter moved inward, it pulled all the planetesimals it encountered along the way into orbital resonances and carried them toward the sun. But as those planetesimals got closer to the sun, their orbits also became elliptical. "You cannot reduce the size of your orbit without paying a price, and that turns out to be increased ellipticity," explains Batygin. Those new, more elongated orbits caused the planetesimals, mostly on the order of 100 kilometers in radius, to sweep through previously unpenetrated regions of the disk, setting off a cascade of collisions among the debris. In fact, Batygin's calculations show that during this period, every planetesimal would have collided with another object at least once every 200 years, violently breaking them apart and sending them decaying into the sun at an increased rate.

The researchers did one final simulation to see what would happen to a population of super-Earths in the inner solar system if they were around when this cascade of collisions started. They ran the simulation on a well-known extrasolar system known as Kepler-11, which features six super-Earths with a combined mass 40 times that of Earth, orbiting a sun-like star. The result? The model predicts that the super-Earths would be shepherded into the sun by a decaying avalanche of planetesimals over a period of 20,000 years.

"It's a very effective physical process," says Batygin. "You only need a few Earth masses worth of material to drive tens of Earth masses worth of planets into the sun."

Batygin notes that when Jupiter tacked around, some fraction of the planetesimals it was carrying with it would have calmed back down into circular orbits. Only about 10 percent of the material Jupiter swept up would need to be left behind to account for the mass that now makes up Mercury, Venus, Earth, and Mars.

From that point, it would take millions of years for those planetesimals to clump together and eventually form the terrestrial planets—a scenario that fits nicely with measurements that suggest that Earth formed 100–200 million years after the birth of the sun. Since the primordial disk of hydrogen and helium gas would have been long gone by that time, this could also explain why Earth lacks a hydrogen atmosphere. "We formed from this volatile-depleted debris," says Batygin.

And that sets us apart in another way from the majority of exoplanets. Batygin expects that most exoplanets—which are mostly super-Earths—have substantial hydrogen atmospheres, because they formed at a point in the evolution of their planetary disk when the gas would have still been abundant. "Ultimately, what this means is that planets truly like Earth are intrinsically not very common," he says.

The paper also suggests that the formation of gas giant planets such as Jupiter and Saturn—a process that planetary scientists believe is relatively rare—plays a major role in determining whether a planetary system winds up looking something like our own or like the more typical systems with close-in super-Earths. As planet hunters identify additional systems that harbor gas giants, Batygin and Laughlin will have more data against which they can check their hypothesis—to see just how often other migrating giant planets set off collisional cascades in their planetary systems, sending primordial super-Earths into their host stars.

 The researchers describe their work in a paper titled "Jupiter's Decisive Role in the Inner Solar System's Early Evolution."

Writer: 
Kimm Fesenmaier
Frontpage Title: 
Our Solar System May Have Once Harbored Super-Earths
Listing Title: 
Our Solar System May Have Once Harbored Super-Earths
Writer: 
Exclude from News Hub: 
No
Short Title: 
Super-Earths In Our Solar System?
News Type: 
Research News

Caltech Scientists Develop Cool Process to Make Better Graphene

A new technique invented at Caltech to produce graphene—a material made up of an atom-thick layer of carbon—at room temperature could help pave the way for commercially feasible graphene-based solar cells and light-emitting diodes, large-panel displays, and flexible electronics.

"With this new technique, we can grow large sheets of electronic-grade graphene in much less time and at much lower temperatures," says Caltech staff scientist David Boyd, who developed the method.

Boyd is the first author of a new study, published in the March 18 issue of the journal Nature Communications, detailing the new manufacturing process and the novel properties of the graphene it produces.

Graphene could revolutionize a variety of engineering and scientific fields due to its unique properties, which include a tensile strength 200 times greater than that of steel and an electrical mobility two to three orders of magnitude higher than that of silicon. The electrical mobility of a material is a measure of how easily electrons can travel across its surface.

However, achieving these properties on an industrially relevant scale has proven to be complicated. Existing techniques require temperatures that are much too high—about 1,800 degrees Fahrenheit, or 1,000 degrees Celsius—to be compatible with current electronics manufacturing. Additionally, high-temperature growth of graphene tends to induce large, uncontrollably distributed strain—deformation—in the material, which severely compromises its intrinsic properties.

"Previously, people were only able to grow a few square millimeters of high-mobility graphene at a time, and it required very high temperatures, long periods of time, and many steps," says Caltech physics professor Nai-Chang Yeh, the Fletcher Jones Foundation Co-Director of the Kavli Nanoscience Institute and the corresponding author of the new study. "Our new method can consistently produce high-mobility and nearly strain-free graphene in a single step in just a few minutes without high temperature. We have created sample sizes of a few square centimeters, and since we think that our method is scalable, we believe that we can grow sheets that are up to several square inches or larger, paving the way to realistic large-scale applications."

The new manufacturing process might not have been discovered at all if not for a fortunate turn of events. In 2012, Boyd, then working in the lab of the late David Goodwin, at that time a Caltech professor of mechanical engineering and applied physics, was trying to reproduce a graphene-manufacturing process he had read about in a scientific journal. In this process, heated copper is used to catalyze graphene growth. "I was playing around with it on my lunch hour," says Boyd, who now works with Yeh's research group. "But the recipe wasn't working. It seemed like a very simple process. I even had better equipment than what was used in the original experiment, so it should have been easier for me."

During one of his attempts to reproduce the experiment, the phone rang. While Boyd took the call, he unintentionally let a copper foil heat for longer than usual before exposing it to methane vapor, which provides the carbon atoms needed for graphene growth.

When Boyd later examined the copper plate using Raman spectroscopy, a technique used for detecting and identifying graphene, he saw evidence that a graphene layer had indeed formed. "It was an 'A-ha!' moment," Boyd says. "I realized then that the trick to growth is to have a very clean surface, one without the copper oxide."

As Boyd recalls, he then remembered that Robert Millikan, a Nobel Prize–winning physicist and the head of Caltech from 1921 to 1945, also had to contend with removing copper oxide when he performed his famous 1916 experiment to measure Planck's constant, which is important for calculating the amount of energy a single particle of light, or photon, contains. Boyd wondered if he, like Millikan, could devise a method for cleaning his copper while it was under vacuum conditions.



Schematic of the Caltech growth process for graphene.
(Courtesy of Nature Communications)

The solution Boyd hit upon was to use a system first developed in the 1960s to generate a hydrogen plasma—that is, hydrogen gas that has been electrified to separate the electrons from the protons—to remove the copper oxide at much lower temperatures. His initial experiments revealed not only that the technique worked to remove the copper oxide, but that it simultaneously produced graphene as well.

At first, Boyd could not figure out why the technique was so successful. He later discovered that two leaky valves were letting trace amounts of methane into the experiment chamber. "The valves were letting in just the right amount of methane for graphene to grow," he says.

The ability to produce graphene without the need for active heating not only reduces manufacturing costs, but also results in a better product because fewer defects—introduced as a result of thermal expansion and contraction processes—are generated. This in turn eliminates the need for multiple postproduction steps. "Typically, it takes about ten hours and nine to ten different steps to make a batch of high-mobility graphene using high-temperature growth methods," Yeh says. "Our process involves one step, and it takes five minutes."

Work by Yeh's group and international collaborators later revealed that graphene made using the new technique is of higher quality than graphene made using conventional methods: It is stronger because it contains fewer defects that could weaken its mechanical strength, and it has the highest electrical mobility yet measured for synthetic graphene.



Images of early-stage growth of graphene on copper. The lines of hexagons are graphene nuclei, with increasing magnification from left to right, where the scale bars from left to right correspond to 10 μm, 1 μm, and 200 nm, respectively. The hexagons grow together into a seamless sheet of graphene. (Courtesy of Nature Communications)

The team thinks one reason their technique is so efficient is that a chemical reaction between the hydrogen plasma and air molecules in the chamber's atmosphere generates cyano radicals—highly reactive fragments made of carbon and nitrogen. Like tiny superscrubbers, these reactive molecules effectively scour the copper of surface imperfections, providing a pristine surface on which to grow graphene.

The scientists also discovered that their graphene grows in a special way. Graphene produced using conventional thermal processes grows from a random patchwork of depositions. But graphene growth with the plasma technique is more orderly. The graphene deposits form lines that then grow into a seamless sheet, which contributes to its mechanical and electrical integrity.

A scaled-up version of their plasma technique could open the door for new kinds of electronics manufacturing, Yeh says. For example, graphene sheets with low concentrations of defects could be used to protect materials against degradation from exposure to the environment. Another possibility would be to grow large sheets of graphene that can be used as a transparent conducting electrode for solar cells and display panels. "In the future, you could have graphene-based cell-phone displays that generate their own power," Yeh says.



Atomically resolved scanning tunneling microscopic images of graphene grown on a copper (111) single crystal, with increasing magnification from left to right. (Courtesy of Nature Communications)

Another possibility, she says, is to introduce intentional imperfections into graphene's lattice structure to create specific mechanical and electronic attributes. "If you can strain graphene by design at the nanoscale, you can artificially engineer its properties. But for this to work, you need to start with a perfectly smooth, strain-free sheet of graphene," Yeh says. "You can't do this if you have a sheet of graphene that has uncontrollable defects in different places."

Along with Yeh and Boyd, additional authors on the paper, "Single-Step Deposition of High-Mobility Graphene at Reduced Temperatures," include Caltech graduate students Wei Hsiang Lin, Chen Chih Hsu, and Chien-Chang Chen; Caltech staff scientist Marcus Teague; Yuan-Yen Lo, Tsung-Chih Cheng, and Chih-I Wu of National Taiwan University; and Wen-Yuan Chan, Wei-Bing Su, and Chia-Seng Chang of the Institute of Physics, Academia Sinica. Funding support for the study at Caltech was provided by the National Science Foundation, under the Institute for Quantum Information and Matter, and by the Gordon and Betty Moore Foundation and the Kavli Foundation through the Kavli Nanoscience Institute. The work in Taiwan was supported by the Taiwanese National Science Council.

Images reprinted from Nature Communications, "Single-Step Deposition of High-Mobility Graphene at Reduced Temperatures," March 18, 2015, with permission from Nature Communications.

Frontpage Title: 
A Cool Process to Make Better Graphene
Listing Title: 
A Cool Process to Make Better Graphene
Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

Friction Means Antarctic Glaciers More Sensitive to Climate Change Than We Thought

One of the biggest unknowns in understanding the effects of climate change today is the melting rate of glacial ice in Antarctica. Scientists agree rising atmospheric and ocean temperatures could destabilize these ice sheets, but there is uncertainty about how fast they will lose ice.

The West Antarctic Ice Sheet is of particular concern to scientists because it contains enough ice to raise global sea level by up to 16 feet, and its physical configuration makes it susceptible to melting by warm ocean water. Recent studies have suggested that the collapse of certain parts of the ice sheet is inevitable. But will that process take several decades or centuries?

Research by Caltech scientists now suggests that estimates of future rates of melt for the West Antarctic Ice Sheet—and, by extension, of future sea-level rise—have been too conservative. In a new study, published online on March 9 in the Journal of Glaciology, a team led by Victor Tsai, an assistant professor of geophysics, found that properly accounting for Coulomb friction—a type of friction generated by solid surfaces sliding against one another—in computer models significantly increases estimates of how sensitive the ice sheet is to temperature perturbations driven by climate change.

Unlike other ice sheets that are moored to land above the ocean, most of West Antarctica's ice sheet is grounded on a sloping rock bed that lies below sea level. In the past decade or so, scientists have focused on the coastal part of the ice sheet where the land ice meets the ocean, called the "grounding line," as vital for accurately determining the melting rate of ice in the southern continent.

"Our results show that the stability of the whole ice sheet and our ability to predict its future melting is extremely sensitive to what happens in a very small region right at the grounding line. It is crucial to accurately represent the physics here in numerical models," says study coauthor Andrew Thompson, an assistant professor of environmental science and engineering at Caltech.

Part of the seafloor on which the West Antarctic Ice Sheet rests slopes upward toward the ocean in what scientists call a "reverse slope gradient." The end of the ice sheet also floats on the ocean surface so that ocean currents can deliver warm water to its base and melt the ice from below. Scientists think this "basal melting" could cause the grounding line to retreat inland, where the ice sheet is thicker. Because ice thickness is a key factor in controlling ice discharge near the coast, scientists worry that the retreat of the grounding line could accelerate the rate of interior ice flow into the oceans. Grounding line recession also contributes to the thinning and melting away of the region's ice shelves—thick, floating extensions of the ice sheet that help reduce the flow of ice into the sea.

According to Tsai, many earlier models of ice sheet dynamics tried to simplify calculations by assuming that ice loss is controlled solely by viscous stresses, that is, forces that apply to "sticky fluids" such as honey—or in this case, flowing ice. The conventional models thus accounted for the flow of ice around obstacles but ignored friction. "Accounting for frictional stresses at the ice sheet bottom in addition to the viscous stresses changes the physical picture dramatically," Tsai says.

In their new study, Tsai's team used computer simulations to show that even though Coulomb friction affects only a relatively small zone on an ice sheet, it can have a big impact on ice stream flow and overall ice sheet stability.

In most previous models, the ice sheet sits firmly on the bed and generates a downward stress that helps keep it attached to the seafloor. Furthermore, the models assumed that this stress remains constant up to the grounding line, where the ice sheet floats, at which point the stress disappears.

Tsai and his team argue that their model provides a more realistic representation—in which the stress on the bottom of the ice sheet gradually weakens as one approaches the coasts and grounding line, because the weight of the ice sheet is increasingly counteracted by water pressure at the glacier base. "Because a strong basal shear stress cannot occur in the Coulomb model, it completely changes how the forces balance at the grounding line," Thompson says.
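
As a schematic of the kind of basal boundary condition the study describes (the notation here is a generic placeholder, not the paper's exact formulation), the effective pressure at the bed is the ice overburden minus the water pressure, and Coulomb friction caps the basal shear stress at a value proportional to it:

\[
N = \rho_i g H - \rho_w g D , \qquad \tau_b \leq f\,N ,
\]

where H is the local ice thickness, D is the depth of the bed below sea level, \rho_i and \rho_w are the densities of ice and seawater, g is the gravitational acceleration, and f is a friction coefficient. At the grounding line the ice reaches flotation, so N, and with it the maximum frictional resistance, drops to zero, which is the weakening described above.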

Tsai says the idea of investigating the effects of Coulomb friction on ice sheet dynamics came to him after rereading a classic study on the topic by American metallurgist and glaciologist Johannes Weertman from Northwestern University. "I wondered how the behavior of the ice sheet might differ if one factored in this water-pressure effect from the ocean, which Weertman didn't know would be important when he published his paper in 1974," Tsai says.

Tsai thought about how this could be achieved and realized the answer might lie in another field in which he is actively involved: earthquake research. "In seismology, Coulomb friction is very important because earthquakes are thought to be the result of the edge of one tectonic plate sliding frictionally against the edge of another plate," Tsai says. "This ice sheet research came about partly because I'm working on both glaciology and earthquakes."

If the team's Coulomb model is correct, it could have important implications for predictions of ice loss in Antarctica as a result of climate change. Indeed, for any given increase in temperature, the model predicts a bigger change in the rate of ice loss than is forecasted in previous models. "We predict that the ice sheets are more sensitive to perturbations such as temperature," Tsai says.

Hilmar Gudmundsson, a glaciologist with the British Antarctic Survey in Cambridge, UK, called the team's results "highly significant." "Their work gives further weight to the idea that a marine ice sheet, such as the West Antarctic Ice Sheet, is indeed, or at least has the potential to become, unstable," says Gudmundsson, who was not involved in the study.

Glaciologist Richard Alley, of Pennsylvania State University, noted that historical studies have shown that ice sheets can remain stable for centuries or millennia and then switch to a different configuration suddenly.

"If another sudden switch happens in West Antarctica, sea level could rise a lot, so understanding what is going on at the grounding lines is essential," says Alley, who also did not participate in the research.

"Tsai and coauthors have taken another important step in solving this difficult problem," he says.

Along with Tsai and Thompson, Andrew Stewart, an assistant professor of atmospheric and oceanic sciences at UCLA, was also a coauthor on the paper, "Marine ice sheet profiles and stability under Coulomb basal conditions." Funding support for the study was provided by Caltech's President's and Director's Fund program and the Stanback Discovery Fund for Global Environmental Science.

Frontpage Title: 
Ice Sheets Melting Faster than Expected?
Listing Title: 
Ice Sheets Melting Faster than Expected?
Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

One Step Closer to Artificial Photosynthesis and "Solar Fuels"

Caltech scientists, inspired by a chemical process found in leaves, have developed an electrically conductive film that could help pave the way for devices capable of harnessing sunlight to split water into hydrogen fuel.

When applied to semiconducting materials such as silicon, the nickel oxide film prevents rust buildup and facilitates an important chemical process in the solar-driven production of fuels such as methane or hydrogen.

"We have developed a new type of protective coating that enables a key process in the solar-driven production of fuels to be performed with record efficiency, stability, and effectiveness, and in a system that is intrinsically safe and does not produce explosive mixtures of hydrogen and oxygen," says Nate Lewis, the George L. Argyros Professor and professor of chemistry at Caltech and a coauthor of a new study, published the week of March 9 in the online issue of the journal the Proceedings of the National Academy of Sciences, that describes the film.

The development could help lead to safe, efficient artificial photosynthetic systems—also called solar-fuel generators or "artificial leaves"—that replicate the natural process of photosynthesis that plants use to convert sunlight, water, and carbon dioxide into oxygen and fuel in the form of carbohydrates, or sugars.

The artificial leaf that Lewis's team is developing in part at Caltech's Joint Center for Artificial Photosynthesis (JCAP) consists of three main components: two electrodes—a photoanode and a photocathode—and a membrane. The photoanode uses sunlight to oxidize water molecules to generate oxygen gas, protons, and electrons, while the photocathode recombines the protons and electrons to form hydrogen gas. The membrane, which is typically made of plastic, keeps the two gases separate in order to eliminate any possibility of an explosion, and lets the gas be collected under pressure to safely push it into a pipeline.
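
Written out as the standard water-splitting half-reactions (a textbook summary of the chemistry described above, not notation taken from the paper), the two electrodes perform:

\[
\text{photoanode:}\quad 2\,\mathrm{H_2O} \;\rightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- ,
\qquad
\text{photocathode:}\quad 4\,\mathrm{H^+} + 4\,e^- \;\rightarrow\; 2\,\mathrm{H_2} .
\]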

Scientists have tried building the electrodes out of common semiconductors such as silicon or gallium arsenide—which absorb light and are also used in solar panels—but a major problem is that these materials develop an oxide layer (that is, rust) when exposed to water.

Lewis and other scientists have experimented with creating protective coatings for the electrodes, but all previous attempts have failed for various reasons. "You want the coating to be many things: chemically compatible with the semiconductor it's trying to protect, impermeable to water, electrically conductive, highly transparent to incoming light, and highly catalytic for the reaction to make oxygen and fuels," says Lewis, who is also JCAP's scientific director. "Creating a protective layer that displayed any one of these attributes would be a significant leap forward, but what we've now discovered is a material that can do all of these things at once."

The team has shown that its nickel oxide film is compatible with many different kinds of semiconductor materials, including silicon, indium phosphide, and cadmium telluride. When applied to photoanodes, the nickel oxide film far exceeded the performance of other similar films—including one that Lewis's group created just last year. That film was more complicated—it consisted of two layers versus one and used as its main ingredient titanium dioxide (TiO2, also known as titania), a naturally occurring compound that is also used to make sunscreens, toothpastes, and white paint.

"After watching the photoanodes run at record performance without any noticeable degradation for 24 hours, and then 100 hours, and then 500 hours, I knew we had done what scientists had failed to do before," says Ke Sun, a postdoc in Lewis's lab and the first author of the new study.

Lewis's team developed a technique for creating the nickel oxide film that involves smashing atoms of argon into a pellet of nickel atoms at high speeds, in an oxygen-rich environment. "The nickel fragments that sputter off of the pellet react with the oxygen atoms to produce an oxidized form of nickel that gets deposited onto the semiconductor," Lewis says.

Crucially, the team's nickel oxide film works well in conjunction with the membrane that separates the photoanode from the photocathode and staggers the production of hydrogen and oxygen gases.

"Without a membrane, the photoanode and photocathode are close enough to each other to conduct electricity, and if you also have bubbles of highly reactive hydrogen and oxygen gases being produced in the same place at the same time, that is a recipe for disaster," Lewis says. "With our film, you can build a safe device that will not explode, and that lasts and is efficient, all at once."

Lewis cautions that scientists are still a long way off from developing a commercial product that can convert sunlight into fuel. Other components of the system, such as the photocathode, will also need to be perfected.

"Our team is also working on a photocathode," Lewis says. "What we have to do is combine both of these elements together and show that the entire system works. That will not be easy, but we now have one of the missing key pieces that has eluded the field for the past half-century."

Along with Lewis and Sun, additional authors on the paper, "Stable solar-driven oxidation of water by semiconducting photoanodes protected by transparent catalytic nickel oxide films," include Caltech graduate students Fadl Saadi, Michael Lichterman, Xinghao Zhou, Noah Plymale, and Stefan Omelchenko; William Hale, from the University of Southampton; Hsin-Ping Wang and Jr-Hau He, from King Abdullah University of Science and Technology in Saudi Arabia; Kimberly Papadantonakis, a scientific research manager at Caltech; and Bruce Brunschwig, the director of the Molecular Materials Research Center at Caltech. Funding was provided by the Office of Science at the U.S. Department of Energy, the National Science Foundation, the Beckman Institute, and the Gordon and Betty Moore Foundation.

Frontpage Title: 
Thin Film Clears Path to Solar Fuels
Listing Title: 
Thin Film Clears Path to Solar Fuels
Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

Research Suggests Brain's Melatonin May Trigger Sleep

If you walk into your local drug store and ask for a supplement to help you sleep, you might be directed to a bottle labeled "melatonin." The hormone supplement's use as a sleep aid is supported by anecdotal evidence and even some reputable research studies. However, our bodies also make melatonin naturally, and until a recent Caltech study using zebrafish, no one knew how—or even if—this melatonin contributed to our natural sleep. The new work suggests that even in the absence of a supplement, naturally occurring melatonin may help us fall and stay asleep.

The study was published online in the March 5 issue of the journal Neuron.

"When we first tell people that we're testing whether melatonin is involved in sleep, the response is often, 'Don't we already know that?'" says Assistant Professor of Biology David Prober. "This is a reasonable response based on articles in newspapers and melatonin products available on the Internet. However, while some scientific studies show that supplemental melatonin can help to promote sleep, many studies failed to observe this, so the effectiveness of melatonin supplements is controversial. More importantly, these studies don't tell you anything about what naturally occurring melatonin normally does in the body."

There are several factors at play when you are starting to feel tired. Sleep is thought to be regulated by two mechanisms: a homeostatic mechanism, which responds to the body's internal cues for sleep, and a circadian mechanism that responds to external cues such as darkness and light, signaling appropriate times for sleep and wakefulness.

For years, researchers have known that melatonin production is regulated by the circadian clock, and that animals produce more of the hormone at night than they do during the day. However, this fact alone is not enough to prove that melatonin promotes sleep. For example, although nocturnal animals sleep during the day and are active at night, they also produce the most melatonin at night.

In the hopes of determining, once and for all, what role the hormone actually plays in sleep, Prober and his team at Caltech designed an experiment using the larvae of zebrafish, an organism commonly used in research studies because of its small size and well-characterized genome. Like humans, zebrafish are diurnal—awake during the day and asleep at night—and produce melatonin at night.

But how exactly can you tell if a young zebrafish has fallen asleep? There are behavioral criteria—including how long a zebrafish takes to respond to a stimulus, like a knock on the tank, for example. "Based on these criteria, we found that if the zebrafish larvae don't move for one or more minutes, they are in a sleep-like state," Prober says.
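
A minimal sketch of how such a criterion might be applied to movement data, assuming one activity sample per second (a hypothetical illustration, not the Prober lab's actual analysis code):

    # Hypothetical illustration: flag "sleep-like" bouts as runs of at least
    # 60 seconds with no detected movement, given one activity sample per second.
    def sleep_bouts(activity, min_quiet_s=60):
        bouts = []
        start = None
        for t, moved in enumerate(activity):
            if not moved and start is None:
                start = t                        # a quiet run begins
            elif moved and start is not None:
                if t - start >= min_quiet_s:     # long enough to count as sleep
                    bouts.append((start, t))
                start = None
        if start is not None and len(activity) - start >= min_quiet_s:
            bouts.append((start, len(activity)))
        return bouts

    # Example: 90 quiet seconds flanked by activity yields one bout from t=2 to t=92.
    record = [1, 1] + [0] * 90 + [1, 1]
    print(sleep_bouts(record))                   # [(2, 92)]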

To test the effect of naturally occurring melatonin on sleep, the researchers first compared the sleep patterns of normal, or "wild-type," zebrafish larvae to those of zebrafish larvae that are unable to produce the hormone because of a mutation in a gene called aanat2. They found that fish with the mutation slept only half as long as normal fish. And although a normal zebrafish begins to fall asleep about 10 minutes after "lights out"—about the same amount of time it takes a human to fall asleep—it took the aanat2 mutant fish about twice as long.

"This result was surprising because it suggests that almost half of the sleep that the larvae are getting at night is due to the effects of melatonin," Prober says. "That suggests that melatonin normally plays an important role in sleep and that you need this natural melatonin both to fall asleep and to stay asleep."

In both humans and zebrafish, melatonin is produced in a part of the brain called the pineal gland. To confirm that the mutation-induced reduction in sleep was actually due to a lack of melatonin, the researchers next used a drug to specifically kill the cells of the pineal gland, thus halting the hormone's production. The drug-treated fish showed the same reduction in sleep as fish with mutated aanat2. When the drug treatment stopped, allowing pineal gland cells to regenerate, the fish returned to a normal sleep pattern.

Sleep patterns, like many other biological and behavioral processes, are known to be regulated by the circadian clock. In an organism, the circadian clock aligns these processes with daily changes in the environment, such as daylight and darkness at night. However, while a great deal is known about how the circadian clock works, it was not known how the clock regulates sleep. Because the researchers had determined that melatonin is involved in promoting natural sleep, they next asked whether melatonin mediates the circadian regulation of sleep.

They first raised both wild-type and aanat2 mutant zebrafish larvae in a normal light/dark cycle—14 hours of light followed by 10 hours of darkness—to entrain their circadian clocks. Then, when the larvae were 5 days old, they switched both populations to an environment of constant darkness. In this "free running" condition, the circadian clock continues to function in the absence of daily light and dark signals from the environment. As expected, the wild-type fish maintained their regular circadian sleep cycle. The melatonin-lacking aanat2 mutants, however, showed no cyclical sleep patterns.  

"This was really surprising," says Prober. "For years, people have been looking in rodents for a factor that's required for the circadian regulation of sleep and have found a few other candidate molecules that, like melatonin, are regulated by the circadian clock and can induce sleep when given as supplements. However, mutants that lack these factors had normal circadian sleep cycles," says Prober. "One thought was that maybe all of these molecules work together and that you'd have to make mutations in multiple genes to see an effect. But we found that eliminating one molecule, melatonin, is the whole show. It's one of those rare and surprisingly clear results."

After finding that melatonin is necessary for the circadian regulation of sleep, Prober next wanted to ask how it does this. To find out, Prober and his colleagues looked to a neuromodulator called adenosine—part of the homeostatic mechanism that promotes sleep. As an animal expends energy throughout the day, adenosine accumulates in the brain, causing the animal to feel more and more tired—a pressure that is relieved through sleep.

The researchers treated both wild-type and melatonin-deficient aanat2 mutant fish with drugs that activate adenosine signaling. They found that although the drugs had no effect on the wild-type fish, they restored normal sleep amounts in aanat2 mutants. This result suggests that melatonin may be promoting sleep, in part, by turning on adenosine—providing a long sought-after link between the homeostatic and circadian processes that regulate sleep.

Prober and his colleagues hypothesize that the circadian clock drives the production of melatonin, which then promotes sleep through yet-to-be-determined mechanisms while also stimulating adenosine production, thus promoting sleep through the homeostatic pathway. Although more experiments are needed to confirm this model, Prober says that the preliminary results may offer insights about human sleep as well.

"Zebrafish are vertebrates and their brain is structurally similar to ours. All of the markers that we and others have tested are expressed in the same regions of the zebrafish brain as in the mammalian brain," he says. "Zebrafish sleep and human sleep are likely different in some ways, but all of our drug and genetic data indicate that the same factors—working through the same mechanisms—have similar effects on sleep in zebrafish and mammals. "

Prober's work with the circadian regulation of sleep follows in the conceptual—and physical—footsteps of the late Caltech geneticist Seymour Benzer, who founded genetic studies of the circadian clock. In experiments in fruit flies, Benzer and his graduate student, the late Ronald Konopka (PhD '72), discovered the first circadian-rhythm mutants. Benzer passed away in 2007, and when Prober came to Caltech in 2009, he was offered Benzer's former office and lab space. "Seymour Benzer's work in fruit flies launched the beginning of our understanding of the molecular circadian clock," Prober says, "so it's really special to be in this space, and it's gratifying that we're taking the next step based on his work."

The results of Prober's study are published in the journal Neuron in an article titled, "Melatonin is required for the circadian regulation of sleep." Other Caltech coauthors on the paper are graduate student Avni Gandhi and postdoctoral scholars Eric Mosser and Grigorios Oikonomou. This work was funded by grants from the National Institutes of Health, the Mallinckrodt Foundation, the Rita Allen Foundation, the Brain and Behavior Research Foundation as well as a Della Martin Postdoctoral Fellowship to Mosser.

Frontpage Title: 
Feeling Sleepy? Might be the Melatonin
Listing Title: 
Feeling Sleepy? Might be the Melatonin
Writer: 
Exclude from News Hub: 
No
Short Title: 
Feeling Sleepy? Might be the Melatonin
News Type: 
Research News

Fighting a Worm with Its Own Genome

Tiny parasitic hookworms infect nearly half a billion people worldwide—almost exclusively in developing countries—causing health problems ranging from gastrointestinal issues to cognitive impairment and stunted growth in children. By sequencing and analyzing the genome of one particular hookworm species, Caltech researchers have uncovered new information that could aid the fight against these parasites.  

The results of their work were published online in the March 2 issue of the journal Nature Genetics.

"Hookworms infect a huge percentage of the human population. Getting clean water and sanitation to the most affected regions would help to ameliorate hookworms and a number of other parasites, but since these are big, complicated challenges that are difficult to address, we need to also be working on drugs to treat them," says study lead Paul Sternberg, the Thomas Hunt Morgan Professor of Biology at Caltech and a Howard Hughes Medical Institute investigator.

Medicines have been developed to treat hookworm infections, but the parasites have begun to develop resistance to these drugs. As part of the search for effective new drugs, Sternberg and his colleagues investigated the genome of a hookworm species known as Ancylostoma ceylanicum. Other hookworm species cause more disease among humans, but A. ceylanicum piqued the interest of the researchers because it also infects some species of rodents that are commonly used for research. This means that the researchers can easily study the parasite's entire infection process inside the laboratory.

The team began by sequencing all 313 million nucleotides of the A. ceylanicum genome using the next-generation sequencing capabilities of the Millard and Muriel Jacobs Genetics and Genomics Laboratory at Caltech. In next-generation sequencing, a large amount of DNA—such as a genome—is first reproduced as many very short sequences. Then, computer programs match up common sequences in the short strands to piece them into much longer strands.

"Assembling the short sequences correctly can be a relatively difficult analysis to carry out, but we have experience sequencing worm genomes in this way, so we are quite successful," says Igor Antoshechkin, director of the Jacobs Laboratory. 

Their sequencing results revealed that although the A. ceylanicum genome is only about 10 percent of the size of the human genome, it actually encodes at least 30 percent more genes—about 30,000 in total, compared to approximately 20,000-23,000 in the human genome. However, of these 30,000 genes, the essential genes that are turned on specifically when the parasite is wreaking havoc on its host are the most relevant to the development of potential drugs to fight the worm.

Sternberg and his colleagues wanted to learn more about those active genes, so they looked not to DNA but to RNA—the genetic material that is generated (or transcribed) from the DNA template of active genes and from which proteins are made. Specifically, they examined the RNA generated in an A. ceylanicum worm during infection. Using this RNA, the team found more than 900 genes that are turned on only when the worm infects its host—including 90 genes that belong to a never-before-characterized family of proteins called activation-associated secreted protein related genes, or ASPRs.

"If you go back and look at other parasitic worms, you notice that they have these ASPRs as well," Sternberg says. "So basically we found this new family of proteins that are unique to parasitic worms, and they are related to this early infection process." Since the worm secretes these ASPR proteins early in the infection, the researchers think that these proteins might block the host's initial immune response—preventing the host's blood from clotting and ensuring a free-flowing food source for the blood-sucking parasite.

If ASPRs are necessary for this parasite to invade the host, then a drug that targets and destroys the proteins could one day be used to fight the parasite. Unfortunately, however, it is probably not that simple, Sternberg says.

"If we have 90 of these ASPRs, it might be that a drug would get rid of just a few of them and stop the infection, but maybe you'd have to get rid of all 90 of them for it to work. And that's a problem," he says. "It's going to take a lot more careful study to understand the functions of these ASPRs so we can target the ones that are key regulatory molecules."

Drugs that target ASPRs might one day be used to treat these parasitic infections, but these proteins also hold the potential for anti-A. ceylanicum vaccines—which would prevent these parasites from infecting a host in the first place, Sternberg adds. For example, if a person were injected with an ASPR protein vaccine before traveling to an infection-prone region, their immune system might be more prepared to successfully fend off an infection.

"A parasitic infection is a balance between the parasites trying to suppress the immune system and the host trying to attack the parasite," says Sternberg. "And we hope that by analyzing the genome, we can uncover clues that might help us alter that balance in favor of the host."

These findings were published in a paper titled, "The genome and transcriptome of the zoonotic hookworm Ancylostoma ceylanicum identify infection-specific gene families." In addition to Sternberg and Antoshechkin, other coauthors include Erich M. Schwarz of Cornell University; and Yan Hu, Melanie Miller, and Raffi V. Aroian from UC San Diego. Sternberg's work was funded by the National Institutes of Health and the Howard Hughes Medical Institute.

Frontpage Title: 
Knocking Out Parasites with Their Own Genetic Code
Listing Title: 
Knocking Out Parasites with Their Own Genetic Code
Writer: 
Exclude from News Hub: 
No
Short Title: 
Fighting a Worm with Its Own Genome
News Type: 
Research News

Caltech Biochemist Sheds Light on Structure of Key Cellular 'Gatekeeper'

Facing a challenge akin to solving a 1,000-piece jigsaw puzzle while blindfolded—and without touching the pieces—many structural biochemists thought it would be impossible to determine the atomic structure of a massive cellular machine called the nuclear pore complex (NPC), which is vital for cell survival.

But after 10 years of attacking the problem, a team led by André Hoelz, assistant professor of chemistry, recently solved almost a third of the puzzle. The approach his team developed to do so also promises to speed completion of the remainder.

In an article published online February 12 by Science Express, Hoelz and his colleagues describe the structure of a significant portion of the NPC, which is made up of many copies of about 34 different proteins, perhaps 1,000 proteins in all and a total of 10 million atoms. In eukaryotic cells (those with a membrane-bound nucleus), the NPC forms a transport channel in the nuclear membrane. The NPC serves as a gatekeeper, essentially deciding which proteins and other molecules are permitted to pass into and out of the nucleus. The survival of cells is dependent upon the accuracy of these decisions.

Understanding the structure of the NPC could lead to new classes of cancer drugs as well as antiviral medicines. "The NPC is a huge target of viruses," Hoelz says. Indeed, pathogens such as HIV and Ebola subvert the NPC as a way to take control of cells, rendering them incapable of functioning normally. Figuring out just how the NPC works might enable the design of new drugs to block such intruders.

"This is an incredibly important structure to study," he says, "but because it is so large and complex, people thought it was crazy to work on it. But 10 years ago, we hypothesized that we could solve the atomic structure with a divide-and-conquer approach—basically breaking the task into manageable parts—and we've shown that for a major section of the NPC, this actually worked."

To map the structure of the NPC, Hoelz relied primarily on X-ray crystallography, which involves shining X-rays on a crystallized sample and using detectors to analyze the pattern of rays reflected off the atoms in the crystal.
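
The geometric relation underlying the technique is Bragg's law (standard crystallography, not something specific to this study): a bright diffraction spot appears when

\[
n\lambda = 2d\sin\theta ,
\]

where \lambda is the X-ray wavelength, d is the spacing between planes of atoms in the crystal, \theta is the angle between the incoming X-rays and those planes, and n is a whole number. Measuring the angles and intensities of many such spots lets researchers work backward to the arrangement of atoms that produced them.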

It is particularly challenging to obtain X-ray diffraction images of the intact NPC for several reasons, including that the NPC is both enormous (about 30 times larger than the ribosome, a large cellular component whose structure wasn't solved until the year 2000) and complex (with as many as 1,000 individual pieces, each composed of several smaller sections). In addition, the NPC is flexible, with many moving parts, making it difficult to capture in individual snapshots at the atomic level, as X-ray crystallography aims to do. Finally, despite being enormous compared to other cellular components, the NPC is still vanishingly small (only 120 nanometers wide, or about 1/900th the thickness of a dollar bill), and that combination of size and flexibility puts structure determination of the intact complex beyond the reach of current X-ray crystallography methods.

To overcome those obstacles, Hoelz and his team chose to determine the structure of the coat nucleoporin complex (CNC)—one of the two main complexes that make up the NPC—rather than tackling the whole structure at once (in total, the NPC is composed of six subcomplexes: two major ones and four smaller ones). He enlisted the support of study coauthor Anthony Kossiakoff of the University of Chicago, who helped to develop the engineered antibodies needed to essentially "superglue" the samples into place to form an ordered crystalline lattice so they could be properly imaged. The X-ray diffraction data used for structure determination were collected at the General Medical Sciences and National Cancer Institutes Structural Biology Beamline at Argonne National Laboratory.

With the help of Caltech's Molecular Observatory—a facility, developed with support from the Gordon and Betty Moore Foundation, that includes a completely automated X-ray beamline at the Stanford Synchrotron Radiation Lightsource that can be controlled remotely from Caltech—Hoelz's team refined the antibody adhesives required to generate the best crystalline samples. This process alone took two years to get exactly right.

Hoelz and his team were able to determine the precise size, shape, and position of all of the atoms of the CNC, as well as its location within the entire NPC.

The CNC is not the first component of the NPC to be fully characterized, but it is by far the largest. Hoelz says that once the other major component—known as the adaptor–channel nucleoporin complex—and the four smaller subcomplexes are mapped, the NPC's structure will be fully understood.

The CNC that Hoelz and his team evaluated comes from baker's yeast—a commonly used research organism—but the CNC structure is the right size and shape to dock with the NPC of a human cell. "It fits inside like a hand in a glove," Hoelz says. "That's significant because it is a very strong indication that the architecture of the NPC is probably the same in both organisms and that the machinery is so important that evolution has not changed it in a billion years."

Being able to successfully determine the structure of the CNC makes mapping the remainder of the NPC an easier proposition. "It's like climbing Mount Everest. Knowing you can do it lowers the bar, so you know you can now climb K2 and all these other mountains," says Hoelz, who is convinced that the entire NPC will be characterized soon. "It will happen. I don't know if it will be in five or 10 or 20 years, but I'm sure it will happen in my lifetime. We will have an atomic model of the entire nuclear pore."

Still, he adds, "My dream actually goes much farther. I don't really want to have a static image of the pore. What I really would like—and this is where people look at me with a bit of a smile on their face, like they're laughing a little bit—is to get an image of how the pore is moving, how the machine actually works. The pore is not a static hole, it can open up like the iris of a camera to let something through that's much bigger. How does it do it?"

To understand that machine in motion, he adds, "you don't just need one snapshot, you need multiple snapshots. But once you have one, you can infer the other ones much quicker, so that's the ultimate goal. That's the dream."

Along with Hoelz, additional Caltech authors on the paper, "Architecture of the Nuclear Pore Complex Coat," include postdoctoral scholars Tobias Stuwe and Ana R. Correia, and graduate student Daniel H. Lin. Coauthors from the University of Chicago Department of Biochemistry and Molecular Biology include Anthony Kossiakoff, Marcin Paduch and Vincent Lu. The work was supported by Caltech startup funds, the Albert Wyrick V Scholar Award of the V Foundation for Cancer Research, the 54th Mallinckrodt Scholar Award of the Edward Mallinckrodt, Jr. Foundation, and a Kimmel Scholar Award of the Sidney Kimmel Foundation for Cancer Research.

Frontpage Title: 
Chemists Solve Key Cellular Puzzle
Listing Title: 
Chemists Solve Key Cellular Puzzle
Writer: 
Exclude from News Hub: 
No
Short Title: 
Chemists Solve Key Cellular Puzzle
News Type: 
Research News

How Iron Feels the Heat

As you heat up a piece of iron, the arrangement of the iron atoms changes several times before melting. This unusual behavior is one reason why steel, in which iron plays a starring role, is so sturdy and ubiquitous in everything from teapots to skyscrapers. But the details of just how and why iron takes on so many different forms have remained a mystery. Recent work at Caltech in the Division of Engineering and Applied Science, however, provides evidence for how iron's magnetism plays a role in this curious property—an understanding that could help researchers develop better and stronger steel.

"Humans have been working with regular old iron for thousands of years, but this is a piece about its thermodynamics that no one has ever really understood," says Brent Fultz, the Barbara and Stanley R. Rawn, Jr., Professor of Materials Science and Applied Physics.

The laws of thermodynamics govern the natural behavior of materials, such as the temperature at which water boils and the timing of chemical reactions. These same principles also determine how atoms in solids are arranged, and in the case of iron, nature changes its mind several times at high temperatures. At room temperature, the iron atoms sit in an unusually open, loosely packed arrangement; as iron is heated past 912 degrees Celsius, the atoms become more closely packed before loosening again at 1,394 degrees Celsius and ultimately melting at 1,538 degrees Celsius.

Iron is magnetic at room temperature, and previous work predicted that iron's magnetism favors its open structure at low temperatures, but at 770 degrees Celsius iron loses its magnetism. However, iron maintains its open structure for more than a hundred degrees beyond this magnetic transition. This led the researchers to believe that there must be something else contributing to iron's unusual thermodynamic properties.

For this missing link, graduate student Lisa Mauger and her colleagues needed to turn up the heat. Solids store heat as small atomic vibrations—vibrations that create disorder, or entropy. At high temperatures, entropy dominates thermodynamics, and atomic vibrations are the largest source of entropy in iron. By studying how these vibrations change as the temperature goes up and magnetism is lost, the researchers hoped to learn more about what is driving these structural rearrangements.

To do this, the team took its samples of iron to the High Pressure Collaborative Access Team beamline of the Advanced Photon Source at Argonne National Laboratory in Argonne, Illinois. This synchrotron facility produces intense flashes of x-rays that can be tuned to detect the quantum particles of atomic vibration—called phonon excitations—in iron.

By combining these vibrational measurements with previously known data about the magnetic behavior of iron at these temperatures, the researchers found that iron's vibrational entropy was much larger than originally suspected. In fact, the excess was similar in size to the entropy contribution from magnetism—suggesting that magnetism and atomic vibrations interact synergistically at moderate temperatures. This excess entropy increases the stability of iron's open structure even as the sample is heated past the magnetic transition.

The technique allowed the researchers to conclude, experimentally and for the first time, that magnons—the quantum particles of electron spin (magnetism)—and phonons interact to increase iron's stability at high temperatures.

Because the Caltech group's measurements matched up with the theoretical calculations that were simultaneously being developed by collaborators in the laboratory of Jörg Neugebauer at the Max-Planck-Institut für Eisenforschung GmbH (MPIE), Mauger's results also contributed to the validation of a new computational model.

"It has long been speculated that the structural stability of iron is strongly related to an inherent coupling between magnetism and atomic motion," says Fritz Körmann, postdoctoral fellow at MPIE and the first author on the computational paper. "Actually finding this coupling, and that the data of our experimental colleagues and our own computational results are in such an excellent agreement, was indeed an exciting moment."

"Only by combining methods and expertise from various scientific fields such as quantum mechanics, statistical mechanics, and thermodynamics, and by using incredibly powerful supercomputers, it became possible to describe the complex dynamic phenomena taking place inside one of the technologically most used structural materials," says Neugebauer. "The newly gained insight of how thermodynamic stability is realized in iron will help to make the design of new steels more systematic."

For thousands of years, metallurgists have been working to make stronger steels in much the same way that you'd try to develop a recipe for the world's best cookie: guess and check. Steel begins with a base of standard ingredients—iron and carbon—much like a basic cookie batter begins with flour and butter. And just as you'd customize a cookie recipe by varying the amounts of other ingredients like spices and nuts, the properties of steel can be tuned by adding varying amounts of other elements, such as chromium and nickel.

With a better computational model for the thermodynamics of iron at different temperatures—one that takes into account the effects of both magnetism and atomic vibrations—metallurgists will now be able to more accurately predict the thermodynamic properties of iron alloys as they alter their recipes. 

The experimental work was published in a paper titled "Nonharmonic Phonons in α-Iron at High Temperatures," in the journal Physical Review B. In addition to Fultz and first author Mauger, other Caltech coauthors include Jorge Alberto Muñoz (PhD '13) and graduate student Sally June Tracy. The computational paper, "Temperature Dependent Magnon-Phonon Coupling in bcc Fe from Theory and Experiment," was coauthored by Fultz and Mauger, led by researchers at the Max Planck Institute, and published in the journal Physical Review Letters. Fultz's and Mauger's work was supported by funding from the U.S. Department of Energy.

Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

Potassium Salt Outperforms Precious Metals As a Catalyst

A team of Caltech chemists has discovered a method for producing a group of silicon-containing organic chemicals without relying on expensive precious metal catalysts. Instead, the new technique uses as a catalyst a cheap, abundant chemical that is commonly found in chemistry labs around the world—potassium tert-butoxide—to help create a host of products ranging from new medicines to advanced materials. And it turns out that the potassium salt is more effective than state-of-the-art precious metal complexes at running very challenging chemical reactions.

"We have shown for the first time that you can efficiently make carbon–silicon bonds with a safe and inexpensive catalyst based on potassium rather than ultrarare precious metals like platinum, palladium, and iridium," says Anton Toutov, a graduate student working in the laboratory of Bob Grubbs, Caltech's Victor and Elizabeth Atkins Professor of Chemistry. "We're very excited because this new method is not only 'greener' and more efficient, but it is also thousands of times less expensive than what's currently out there for making useful chemical building blocks. This is a technology that the chemical industry could readily adopt."

The finding marks one of the first cases in which catalysis—the use of catalysts to make certain reactions occur faster, more readily, or at all—moves away from its reliance on precious metals, a practice that is fundamentally unsustainable. While the precious metals in most catalysts are rare and could eventually run out, potassium is an abundant element on Earth.

The team describes its new "green" chemistry technique in the February 5 issue of the journal Nature. The lead authors on the paper are Toutov and Wen-bo (Boger) Liu, a postdoctoral scholar at Caltech. Toutov recently won the Dow Sustainability Innovation Student Challenge Award (SISCA) grand prize for this work, in a competition held at Caltech's Resnick Sustainability Institute.

"The first time I spoke about this at a conference, people were stunned," says Grubbs, corecipient of the 2005 Nobel Prize in Chemistry. "I added three slides about this chemistry to the end of my talk, and afterward it was all anyone wanted to talk about."

Coauthor Brian Stoltz, professor of chemistry at Caltech, says the reason for this strong response is that while the chemistry the catalyst drives is challenging, potassium tert-butoxide is so seemingly simple. The white, free-flowing powder—similar to common table salt in appearance—provides a straightforward and environmentally friendly way to run a reaction that involves replacing a carbon–hydrogen bond with a carbon–silicon bond to produce molecules known as organosilanes.

These organic molecules are of particular interest because they serve as powerful chemical building blocks for medicinal chemists to use in the creation of new pharmaceuticals. They also hold promise in the development of new materials for use in products such as LCD screens and organic solar cells, could be important in the development of new pesticides, and are being incorporated into novel medical imaging tools.

"To be able to do this type of reaction, which is one of the most-studied problems in the world of chemistry, with potassium tert-butoxide—a material that's not precious-metal based but still catalytically active—was a total shocker," Stoltz says.

The current project got its start a couple of years ago when coauthor Alexey Fedorov—then a postdoctoral scholar in the Grubbs lab (now at ETH Zürich)—was working on a completely different problem. He was trying to break carbon–oxygen bonds in biomass using simple silicon-containing compounds, metals, and potassium tert-butoxide, which is a common additive. During that process, he ran a control experiment—one without a metal catalyst—leaving only potassium tert-butoxide as the reagent. Remarkably, the reaction still worked. And when Toutov—who was working with Fedorov—analyzed the reaction further, he realized that in addition to the expected products, the reaction was making small amounts of organosilanes. This was unexpected since organosilanes are very challenging to produce.

"I thought that was impossible, so I went back and checked it many times," Toutov says. "Sure enough, it checked out!"

Bolstered by the finding, Toutov refined the reaction so that it would create only a single desired organosilane in high yield under mild conditions, with hydrogen gas as the only byproduct. Then he expanded the scope of the reaction to produce industrially useful chemicals such as molecules needed for new materials and derivatives of pharmaceutical substances.
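To make "hydrogen gas as the only byproduct" concrete, the short calculation below estimates the atom economy of a generic C–H silylation of the form Ar–H + R3Si–H → Ar–SiR3 + H2. The specific molecules (indole as the heteroarene, triethylsilane as the silicon source) are illustrative assumptions chosen only to make the arithmetic concrete, not a claim about the substrates in the paper.

import re

ATOMIC_MASS = {"H": 1.008, "C": 12.011, "N": 14.007, "Si": 28.085}  # g/mol

def molar_mass(formula: str) -> float:
    """Molar mass from a simple molecular formula such as 'C8H7N'."""
    return sum(ATOMIC_MASS[element] * int(count or "1")
               for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula))

# Hypothetical, illustrative reaction:
#   indole (C8H7N) + triethylsilane (C6H16Si) -> 2-(triethylsilyl)indole (C14H21NSi) + H2
reactant_mass  = molar_mass("C8H7N") + molar_mass("C6H16Si")
product_mass   = molar_mass("C14H21NSi")
byproduct_mass = molar_mass("H2")

print(f"reactants       : {reactant_mass:7.2f} g/mol")
print(f"desired product : {product_mass:7.2f} g/mol")
print(f"byproduct (H2)  : {byproduct_mass:7.2f} g/mol")
print(f"atom economy    : {100 * product_mass / reactant_mass:.1f}% of the reactant mass ends up in the product")

In this illustrative case, nearly all of the starting material ends up in the desired product; only the two hydrogen atoms leave as gas, which is what makes such a reaction attractive from a waste standpoint.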

Having demonstrated the broad applicability of the reaction, Toutov teamed up with Liu from Stoltz's group to further develop the chemistry for the synthesis of building blocks relevant to the preparation of new human medicines, a field in which Stoltz has been active for over a decade.

But before delving too deeply into additional applications, the chemists sought the assistance of Nathan Dalleska, director of the Environmental Analysis Center in the Ronald and Maxine Linde Center for Global Environmental Science at Caltech, to perform one more test with a mass spectrometer that geologists use to detect extremely minute quantities of metals. They wanted to rule out the possibility that trace amounts of precious metals were contaminating their experiments—something that might explain why they were getting these seemingly impossible results from potassium tert-butoxide alone.

"But there was nothing there," says Stoltz. "We made our own potassium tert-butoxide and also bought it from various vendors, and yet the chemistry continued to work just the same. We had to really convince ourselves that it was true, that there were no precious metals in there. Eventually, we had to just decide to believe it."

So far, the chemists do not know why the simple catalyst is able to drive these complex reactions. But Stoltz's lab is part of the Center for Selective C–H Functionalization, a National Science Foundation–funded Center for Chemical Innovation that involves 23 research groups from around the country. Through that center, the Caltech team has started working with Ken Houk's computational chemistry group at UCLA to investigate how the chemistry works from a mechanistic standpoint.

"It's pretty clear that it's functioning by a mechanism that is totally different than the way a precious metal would behave," says Stoltz. "That's going to inspire some people, including ourselves hopefully, to think about how to use and harness that reactivity."

Toutov says that unlike some other catalysts that stop working or become sensitive to air or water when scaled up from the single-gram scale, this new catalyst seems to be robust enough to be used at large, industrial scales. To demonstrate the industrial viability of the process, the Caltech team used the method to synthesize nearly 150 grams of a valuable organosilane—the largest amount of this chemical product that has been produced by a single catalytic reaction. The reaction required no solvent, generated hydrogen gas as the only byproduct, and proceeded at 45 degrees Celsius—the lowest temperature at which this reaction has been reported to run successfully.
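As a back-of-the-envelope follow-up under the same illustrative assumption as the earlier sketch (the molar mass below belongs to the hypothetical silylated indole, not to the actual product, which the story does not identify), one can estimate roughly how much hydrogen gas a 150-gram batch would release:

R_GAS = 0.082057                      # ideal-gas constant, L*atm/(mol*K)

product_grams   = 150.0               # batch size reported in the text
product_molmass = 231.41              # g/mol for the illustrative product in the sketch above
moles_h2 = product_grams / product_molmass          # one H2 released per product molecule
volume_l = moles_h2 * R_GAS * (45 + 273.15) / 1.0   # ideal gas at 45 degrees Celsius, 1 atm

print(f"about {moles_h2:.2f} mol of H2, roughly {volume_l:.0f} liters at the reaction temperature")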

"This discovery just shows how little we in fact know about chemistry," says Stoltz. "People constantly try to tell us how mature our field is, but there is so much fundamental chemistry that we still don't understand."

Kerry Betz, an undergraduate student at Caltech, is a coauthor on the paper, "Silylation of C–H bonds in aromatic heterocycles by an Earth-abundant metal catalyst." The work was supported by the National Science Foundation. The Resnick Sustainability Institute at Caltech, Dow Chemical, the Natural Sciences and Engineering Research Council of Canada, and the Shanghai Institute of Organic Chemistry provided graduate and postdoctoral support. Fedorov's work on the original reaction was supported by BP. 

Writer: 
Kimm Fesenmaier
Frontpage Title: 
Abundant Salt Makes High-Performing Catalyst
Listing Title: 
Abundant Salt Makes High-Performing Catalyst
Contact: 
Writer: 
Exclude from News Hub: 
No
Short Title: 
A Greener Catalysis
News Type: 
Research News

Gravitational Waves from Early Universe Remain Elusive

A joint analysis of data from the Planck space mission and the ground-based experiment BICEP2 has found no conclusive evidence of gravitational waves from the birth of our universe, despite earlier reports of a possible detection. The collaboration between the teams has resulted in the most precise knowledge yet of what signals from the ancient gravitational waves should look like, aiding future searches.

Read the full story at JPL News

Exclude from News Hub: 
No
News Type: 
Research News
