An Earthquake Warning System in Our Pockets?

Researchers Test Smartphones for Advance-Notice System

While you are checking your email, scrolling through social-media feeds, or just going about your daily life with your trusty smartphone in your pocket, the sensors in that little computer could also be contributing to an earthquake early warning system. So says a new study led by researchers at Caltech and the United States Geological Survey (USGS). The study suggests that all of our phones and other personal electronic devices could function as a distributed network, detecting any ground movements caused by a large earthquake, and, ultimately, giving people crucial seconds to prepare for a temblor.

"Crowd-sourced alerting means that the community will benefit by data generated by the community," said Sarah Minson (PhD '10), a USGS geophysicist and lead author of the study, which appears in the April 10 issue of the new journal Science Advances. Minson completed the work while a postdoctoral scholar at Caltech in the laboratory of Thomas Heaton, professor of engineering seismology.

Earthquake early warning (EEW) systems detect the start of an earthquake and rapidly transmit warnings to people and automated systems before they experience shaking at their location. While much of the world's population is susceptible to damaging earthquakes, EEW systems are currently operating in only a few regions around the globe, including Japan and Mexico. "Most of the world does not receive earthquake warnings mainly due to the cost of building the necessary scientific monitoring networks," says USGS geophysicist and project lead Benjamin Brooks.

Despite being less accurate than scientific-grade equipment, the GPS receivers in smartphones are sufficient to detect the permanent ground movement, or displacement, caused by fault motion in earthquakes that are approximately magnitude 7 and larger. And, of course, they are already widely distributed. Once displacements are detected by participating users' phones, the collected information could be analyzed quickly in order to produce customized earthquake alerts that would then be transmitted back to users.
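
As a toy illustration of that detection step (illustrative code, not the study's algorithm), the sketch below averages noisy once-per-second GPS fixes recorded before and after a candidate event and asks whether a permanent offset stands out above the receiver noise. The noise level and offset are assumed values.

```python
import numpy as np

# Toy illustration (not the study's algorithm): noisy smartphone GPS fixes
# before and after a candidate event, tested for a permanent offset.
# The noise level and coseismic offset are assumed values.

rng = np.random.default_rng(42)
gps_noise_m = 1.0    # assumed per-fix horizontal noise (meters)
offset_m = 0.8       # assumed permanent displacement, plausible near a M7

pre = rng.normal(0.0, gps_noise_m, 120)        # 2 min of 1 Hz fixes before
post = rng.normal(offset_m, gps_noise_m, 120)  # 2 min of fixes after

# Averaging N fixes shrinks the noise by sqrt(N); compare the step between
# the two means against the standard error of that difference.
step = post.mean() - pre.mean()
stderr = np.sqrt(pre.var(ddof=1) / pre.size + post.var(ddof=1) / post.size)

if abs(step) > 3.0 * stderr:   # crude 3-sigma trigger
    print(f"candidate displacement: {step:.2f} m")
```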

"Thirty years ago it took months to assemble a crude picture of the deformations from an earthquake. This new technology promises to provide a near-instantaneous picture with much greater resolution," says Heaton, a coauthor of the new study.

In the study, the researchers tested the feasibility of crowd-sourced EEW with a simulation of a hypothetical magnitude 7 earthquake, and with real data from the 2011 magnitude 9 Tohoku-oki, Japan earthquake. The results show that crowd-sourced EEW could be achieved with only a tiny percentage of people in a given area contributing information from their smartphones. For example, if phones from fewer than 5,000 people in a large metropolitan area responded, the earthquake could be detected and analyzed fast enough to issue a warning to areas farther away before the onset of strong shaking.

The researchers note that the GPS receivers in smartphones and similar devices would not be sufficient to detect earthquakes smaller than magnitude 7, which can still be damaging. However, smartphones also have microelectromechanical systems (MEMS) accelerometers that are capable of recording any earthquake motions large enough to be felt; this means that smartphones may be useful in earthquakes as small as magnitude 5. In a separate project, Caltech's Community Seismic Network Project has been developing the framework to record and utilize data from an inexpensive array of such MEMS accelerometers.

Comprehensive EEW requires a dense network of scientific instruments. Scientific-grade EEW, such as the USGS's ShakeAlert system that is currently being implemented on the west coast of the United States, will be able to help minimize the impact of earthquakes over a wide range of magnitudes. However, in many parts of the world where there are insufficient resources to build and maintain scientific networks but consumer electronics are increasingly common, crowd-sourced EEW has significant potential.

"The U.S. earthquake early warning system is being built on our high-quality scientific earthquake networks, but crowd-sourced approaches can augment our system and have real potential to make warnings possible in places that don't have high-quality networks," says Douglas Given, USGS coordinator of the ShakeAlert Earthquake Early Warning System. The U.S. Agency for International Development has already agreed to fund a pilot project, in collaboration with the Chilean Centro Sismólogico Nacional, to test a pilot hybrid earthquake warning system comprising stand-alone smartphone sensors and scientific-grade sensors along the Chilean coast.

"Crowd-sourced data are less precise, but for larger earthquakes that cause large shifts in the ground surface, they contain enough information to detect that an earthquake has occurred, information necessary for early warning," says study coauthor Susan Owen of JPL.

Additional coauthors on the paper, "Crowdsourced earthquake early warning," are from the USGS, Carnegie Mellon University–Silicon Valley, and the University of Houston. The work was supported in part by the Gordon and Betty Moore Foundation, the USGS Innovation Center for Earth Sciences, and the U.S. Department of Transportation Office of the Assistant Secretary for Research and Technology.

Writer: Kimm Fesenmaier

Explaining Saturn’s Great White Spots

Every 20 to 30 years, Saturn's atmosphere roils with giant, planet-encircling thunderstorms that produce intense lightning and enormous cloud disturbances. The head of one of these storms—popularly called "great white spots," in analogy to the Great Red Spot of Jupiter—can be as large as Earth. Unlike Jupiter's spot, which is calm at the center and has no lightning, the Saturn spots are active in the center and have long tails that eventually wrap around the planet.

Six such storms have been observed on Saturn over the past 140 years, alternating between the equator and midlatitudes, with the most recent emerging in December 2010 and encircling the planet within six months. The storms usually occur when Saturn's northern hemisphere is most tilted toward the sun. Just what triggers them and why they occur so infrequently, however, has been unclear.

Now, a new study by two Caltech planetary scientists suggests a possible cause for these storms. The study was published April 13 in the advance online issue of the journal Nature Geoscience.

Using numerical modeling, Professor of Planetary Science Andrew Ingersoll and his graduate student Cheng Li simulated the formation of the storms and found that they may be caused by the weight of the water molecules in the planet's atmosphere. Because water molecules are heavy compared to the hydrogen and helium that make up most of the gas-giant planet's atmosphere, they leave the upper atmosphere lighter when they rain out, and that suppresses convection.

Over time, this leads to a cooling of the upper atmosphere. Eventually, though, the cooled upper layers become dense enough to overcome the suppression, and warm, moist air rapidly rises and triggers a thunderstorm. "The upper atmosphere is so cold and so massive that it takes 20 to 30 years for this cooling to trigger another storm," says Ingersoll.

Ingersoll and Li found that this mechanism matches observations of the great white spot of 2010 taken by NASA's Cassini spacecraft, which has been observing Saturn and its moons since 2004.

The researchers also propose that the absence of planet-encircling storms on Jupiter could be explained if Jupiter's atmosphere contains less water vapor than Saturn's atmosphere. That is because saturated gas (gas that contains the maximum amount of moisture that it can hold at a particular temperature) in a hydrogen-helium atmosphere goes through a density minimum as it cools. That is, it first becomes less dense as the water precipitates out, and then it becomes more dense as cooling proceeds further. "Going through that minimum is key to suppressing the convection, but there has to be enough water vapor to start with," says Li.
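
To see how such a density minimum can arise, here is a minimal numerical sketch (a toy calculation, not the authors' model): it tracks the density of a saturated hydrogen-helium-water parcel cooling at constant pressure, using a crude Clausius-Clapeyron vapor-pressure law and illustrative values for the pressure level and deep water abundance.

```python
import numpy as np

# Minimal sketch (not the authors' model): density of a saturated
# H2/He/H2O gas parcel cooling at constant pressure. The pressure level,
# deep water abundance, and vapor-pressure law are illustrative only.

R = 8.314          # gas constant, J/(mol K)
MU_DRY = 2.3e-3    # kg/mol, rough H2/He mixture
MU_H2O = 18.0e-3   # kg/mol, water
P = 10e5           # Pa, assumed pressure level
X_DEEP = 0.2       # assumed deep water mole fraction (exaggerated for clarity)

def p_sat(T):
    """Crude Clausius-Clapeyron fit anchored at water's boiling point."""
    L_MOL = 4.07e4  # J/mol, latent heat of vaporization
    return 101325.0 * np.exp(-L_MOL / R * (1.0 / T - 1.0 / 373.0))

T = np.linspace(420.0, 250.0, 500)      # cooling trajectory, K
x = np.minimum(X_DEEP, p_sat(T) / P)    # vapor left after rain-out
mu = x * MU_H2O + (1.0 - x) * MU_DRY    # mean molecular weight
rho = mu * P / (R * T)                  # ideal-gas density

# Cooling first makes the parcel LESS dense (water rains out, lowering mu),
# then MORE dense once little vapor is left: the density minimum.
print(f"density minimum near T = {T[np.argmin(rho)]:.0f} K")
```

In this toy calculation, reducing X_DEEP flattens the minimum away, which is the sketch's analogue of Li's point that there has to be enough water vapor to start with.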

Ingersoll and Li note that observations by the Galileo spacecraft and the Hubble Space Telescope indicate that Saturn does indeed have enough water to go through this density minimum, whereas Jupiter does not. In November 2016, NASA's Juno spacecraft, now en route to Jupiter, will start measuring the water abundance on that planet. "That should help us understand not only the meteorology but also the planet's formation, since water is expected to be the third most abundant molecule after hydrogen and helium in a giant planet atmosphere," Ingersoll says.

The work in the paper, "Moist convection in hydrogen atmospheres and the frequency of Saturn's giant storms," was supported by the National Science Foundation and the Cassini Project of NASA.

Writer: Kathy Svitil

Microbes Help Produce Serotonin in Gut

Although serotonin is well known as a brain neurotransmitter, it is estimated that 90 percent of the body's serotonin is made in the digestive tract. In fact, altered levels of this peripheral serotonin have been linked to diseases such as irritable bowel syndrome, cardiovascular disease, and osteoporosis. New research at Caltech, published in the April 9 issue of the journal Cell, shows that certain bacteria in the gut are important for the production of peripheral serotonin.

"More and more studies are showing that mice or other model organisms with changes in their gut microbes exhibit altered behaviors," explains Elaine Hsiao, research assistant professor of biology and biological engineering and senior author of the study. "We are interested in how microbes communicate with the nervous system. To start, we explored the idea that normal gut microbes could influence levels of neurotransmitters in their hosts."

Peripheral serotonin is produced in the digestive tract by enterochromaffin (EC) cells and also by particular types of immune cells and neurons. Hsiao and her colleagues first wanted to know if gut microbes have any effect on serotonin production in the gut and, if so, in which types of cells. They began by measuring peripheral serotonin levels in mice with normal populations of gut bacteria and also in germ-free mice that lack these resident microbes.

The researchers found that the EC cells from germ-free mice produced approximately 60 percent less serotonin than did their peers with conventional bacterial colonies. When these germ-free mice were recolonized with normal gut microbes, the serotonin levels went back up—showing that the deficit in serotonin can be reversed.

"EC cells are rich sources of serotonin in the gut. What we saw in this experiment is that they appear to depend on microbes to make serotonin—or at least a large portion of it," says Jessica Yano, first author on the paper and a research technician working with Hsiao.

The researchers next wanted to find out whether specific species of bacteria, out of the diverse pool of microbes that inhabit the gut, are interacting with EC cells to make serotonin.

After testing several different single species and groups of known gut microbes, Yano, Hsiao, and colleagues observed that one condition—the presence of a group of approximately 20 species of spore-forming bacteria—elevated serotonin levels in germ-free mice. The mice treated with this group also showed an increase in gastrointestinal motility compared to their germ-free counterparts, and changes in the activation of blood platelets, which are known to use serotonin to promote clotting.

Wanting to home in on mechanisms that could be involved in this interesting collaboration between microbe and host, the researchers began looking for molecules that might be key. They identified several particular metabolites—products of the microbes' metabolism—that were regulated by spore-forming bacteria and that elevated serotonin from EC cells in culture. Furthermore, increasing these metabolites in germ-free mice increased their serotonin levels.

Previous work in the field indicated that some bacteria can make serotonin all by themselves. However, this new study suggests that much of the body's serotonin relies on particular bacteria that interact with the host to produce serotonin, says Yano. "Our work demonstrates that microbes normally present in the gut stimulate host intestinal cells to produce serotonin," she explains.

"While the connections between the microbiome and the immune and metabolic systems are well appreciated, research into the role gut microbes play in shaping the nervous system is an exciting frontier in the biological sciences," says Sarkis K. Mazmanian, Luis B. and Nelly Soux Professor of Microbiology and a coauthor on the study. "This work elegantly extends previous seminal research from Caltech in this emerging field".

Additional coauthor Rustem Ismagilov, the Ethel Wilson Bowles and Robert Bowles Professor of Chemistry and Chemical Engineering, adds, "This work illustrates both the richness of chemical interactions between the hosts and their microbial communities, and Dr. Hsiao's scientific breadth and acumen in leading this work."

Serotonin is important for many aspects of human health, but Hsiao cautions that much more research is needed before any of these findings can be translated to the clinic.

"We identified a group of bacteria that, aside from increasing serotonin, likely has other effects yet to be explored," she says. "Also, there are conditions where an excess of peripheral serotonin appears to be detrimental."

Although this study was limited to serotonin in the gut, Hsiao and her team are now investigating how this mechanism might also be important for the developing brain. "Serotonin is an important neurotransmitter and hormone that is involved in a variety of biological processes. The finding that gut microbes modulate serotonin levels raises the interesting prospect of using them to drive changes in biology," says Hsiao.

The work was published in an article titled "Indigenous Bacteria from the Gut Microbiota Regulate Host Serotonin Biosynthesis." In addition to Hsiao, Yano, Mazmanian, and Ismagilov, other Caltech coauthors include undergraduates Kristie Yu, Gauri Shastri, and Phoebe Ann; graduate student Gregory Donaldson; and postdoctoral scholar Liang Ma. Additional coauthor Cathryn Nagler is from the University of Chicago.

This work was funded by an NIH Director's Early Independence Award and a Caltech Center for Environmental Microbial Interactions Award, both to Hsiao. The study was also supported by NSF, NIDDK, and NIMH grants to Mazmanian; NSF EFRI and NHGRI grants to Ismagilov; and grants from the NIAID, Food Allergy Research and Education, and the University of Chicago Digestive Diseases Center Core to Nagler.


New Camera Chip Provides Superfine 3-D Resolution

Imagine you need to have an almost exact copy of an object. Now imagine that you can just pull your smartphone out of your pocket, take a snapshot with its integrated 3-D imager, send it to your 3-D printer, and within minutes you have reproduced a replica accurate to within microns of the original object. This feat may soon be possible because of a new, tiny high-resolution 3-D imager developed at Caltech.

Any time you want to make an exact copy of an object with a 3-D printer, the first step is to produce a high-resolution scan of the object with a 3-D camera that measures its height, width, and depth. Such 3-D imaging has been around for decades, but the most sensitive systems generally are too large and expensive to be used in consumer applications.

A cheap, compact yet highly accurate new device known as a nanophotonic coherent imager (NCI) promises to change that. Using an inexpensive silicon chip less than a millimeter square in size, the NCI provides the highest depth-measurement accuracy of any such nanophotonic 3-D imaging device.

The work, done in the laboratory of Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering in the Division of Engineering and Applied Science, is described in the February 2015 issue of Optics Express.

In a regular camera, each pixel represents the intensity of the light received from a specific point in the image, which could be near or far from the camera—meaning that the pixels provide no information about the relative distance of the object from the camera. In contrast, each pixel in an image created by the Caltech team's NCI provides both the distance and intensity information. "Each pixel on the chip is an independent interferometer—an instrument that uses the interference of light waves to make precise measurements—which detects the phase and frequency of the signal in addition to the intensity," says Hajimiri.



Three-dimensional map of the hills and valleys on a U.S. penny, obtained with the nanophotonic coherent imager from a distance of 0.5 meters.

The new chip utilizes an established detection and ranging technology called LIDAR, in which a target object is illuminated with scanning laser beams. The light that reflects off of the object is then analyzed based on the wavelength of the laser light used, and the LIDAR can gather information about the object's size and its distance from the laser to create an image of its surroundings. "By having an array of tiny LIDARs on our coherent imager, we can simultaneously image different parts of an object or a scene without the need for any mechanical movements within the imager," Hajimiri says.

Such high-resolution images and information provided by the NCI are made possible because of an optical concept known as coherence. If two light waves are coherent, the waves have the same frequency, and the peaks and troughs of the light waves are exactly aligned with one another. In the NCI, the object is illuminated with this coherent light. The light that is reflected off of the object is then picked up by on-chip detectors, called grating couplers, that serve as "pixels," as the light detected from each coupler represents one pixel on the 3-D image. On the NCI chip, the phase, frequency, and intensity of the reflected light from different points on the object are detected and used to determine the exact distance of each target point.

Because the coherent light has a consistent frequency and wavelength, it is used as a reference with which to measure the differences in the reflected light. In this way, the NCI uses the coherent light as sort of a very precise ruler to measure the size of the object and the distance of each point on the object from the camera. The light is then converted into an electrical signal that contains intensity and distance information for each pixel—all of the information needed to create a 3-D image.
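
To make the "precise ruler" idea concrete, here is a generic coherent-ranging sketch (a frequency-modulated continuous-wave, or FMCW, toy example with made-up parameters, not the chip's specific implementation): sweeping the source frequency turns the round-trip delay to the target into a beat frequency that directly encodes distance.

```python
import numpy as np

# Generic coherent-ranging sketch (frequency-modulated continuous wave),
# not the chip's specific implementation: a frequency-swept source turns
# a round-trip delay into a beat frequency. All parameters are made up.

c = 3.0e8          # speed of light, m/s
B = 100e9          # chirp bandwidth, Hz
T_CHIRP = 1e-3     # chirp duration, s
d_true = 0.5       # target distance, m (a penny at half a meter)

fs = 10e6                             # detector sampling rate, Hz
t = np.arange(0.0, T_CHIRP, 1.0 / fs)
tau = 2.0 * d_true / c                # round-trip delay, s
f_beat_true = B * tau / T_CHIRP       # delay maps linearly to frequency

# Mixing the reflected chirp with the outgoing one leaves a tone at f_beat.
signal = np.cos(2.0 * np.pi * f_beat_true * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
f_beat = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

d_est = c * f_beat * T_CHIRP / (2.0 * B)
print(f"estimated distance: {d_est:.4f} m")   # ~0.5 m
```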

The incorporation of coherent light not only allows 3-D imaging with the highest level of depth-measurement accuracy ever achieved in silicon photonics, it also makes it possible for the device to fit in a very small size. "By coupling, confining, and processing the reflected light in small pipes on a silicon chip, we were able to scale each LIDAR element down to just a couple of hundred microns in size—small enough that we can form an array of 16 of these coherent detectors on an active area of 300 microns by 300 microns," Hajimiri says.

The first proof of concept of the NCI has only 16 coherent pixels, meaning that the 3-D images it produces can only be 16 pixels at any given instant. However, the researchers also developed a method for imaging larger objects by first imaging a four-pixel-by-four-pixel section, then moving the object in four-pixel increments to image the next section. With this method, the team used the device to scan and create a 3-D image of the "hills and valleys" on the front face of a U.S. penny—with micron-level resolution—from half a meter away.

In the future, Hajimiri says, the current array of 16 pixels could also be easily scaled up to hundreds of thousands. One day, by creating such vast arrays of these tiny LIDARs, the imager could be applied to a broad range of applications, from very precise 3-D scanning and printing, to helping driverless cars avoid collisions, to improving motion sensitivity in superfine human-machine interfaces, where the slightest movements of a patient's eyes and the most minute changes in a patient's heartbeat can be detected on the fly.

"The small size and high quality of this new chip-based imager will result in significant cost reductions, which will enable thousands of new uses for such systems by incorporating them into personal devices such as smartphones," he says.

The study was published in a paper titled "Nanophotonic coherent imager." In addition to Hajimiri, other Caltech coauthors include Firooz Aflatouni, a former postdoctoral scholar who is now an assistant professor at the University of Pennsylvania; graduate student Behrooz Abiri; and Angad Rekhi (BS '14). This work was partially funded by the Caltech Innovation Initiative.


New Research Suggests Solar System May Have Once Harbored Super-Earths

Caltech and UC Santa Cruz Researchers Say Earth Belongs to a Second Generation of Planets

Long before Mercury, Venus, Earth, and Mars formed, it seems that the inner solar system may have harbored a number of super-Earths—planets larger than Earth but smaller than Neptune. If so, those planets are long gone—broken up and fallen into the sun billions of years ago largely due to a great inward-and-then-outward journey that Jupiter made early in the solar system's history.

This possible scenario has been suggested by Konstantin Batygin, a Caltech planetary scientist, and Gregory Laughlin of UC Santa Cruz in a paper that appears the week of March 23 in the online edition of the Proceedings of the National Academy of Sciences (PNAS). The results of their calculations and simulations suggest the possibility of a new picture of the early solar system that would help to answer a number of outstanding questions about the current makeup of the solar system and of Earth itself. For example, the new work addresses why the terrestrial planets in our solar system have such relatively low masses compared to the planets orbiting other sun-like stars.

"Our work suggests that Jupiter's inward-outward migration could have destroyed a first generation of planets and set the stage for the formation of the mass-depleted terrestrial planets that our solar system has today," says Batygin, an assistant professor of planetary science. "All of this fits beautifully with other recent developments in understanding how the solar system evolved, while filling in some gaps."

Thanks to recent surveys of exoplanets—planets in solar systems other than our own—we know that about half of sun-like stars in our galactic neighborhood have orbiting planets. Yet those systems look nothing like our own. In our solar system, very little lies within Mercury's orbit; there is only a little debris—probably near-Earth asteroids that moved further inward—but certainly no planets. That is in sharp contrast with what astronomers see in most planetary systems. These systems typically have one or more planets that are substantially more massive than Earth orbiting closer to their suns than Mercury does, but very few objects at distances beyond.

"Indeed, it appears that the solar system today is not the common representative of the galactic planetary census. Instead we are something of an outlier," says Batygin. "But there is no reason to think that the dominant mode of planet formation throughout the galaxy should not have occurred here. It is more likely that subsequent changes have altered its original makeup."

According to Batygin and Laughlin, Jupiter is critical to understanding how the solar system came to be the way it is today. Their model incorporates something known as the Grand Tack scenario, which was first posed in 2001 by a group at Queen Mary University of London and subsequently revisited in 2011 by a team at the Nice Observatory. That scenario says that during the first few million years of the solar system's lifetime, when planetary bodies were still embedded in a disk of gas and dust around a relatively young sun, Jupiter became so massive and gravitationally influential that it was able to clear a gap in the disk. And as the sun pulled the disk's gas in toward itself, Jupiter also began drifting inward, as though carried on a giant conveyor belt.

"Jupiter would have continued on that belt, eventually being dumped onto the sun if not for Saturn," explains Batygin. Saturn formed after Jupiter but got pulled toward the sun at a faster rate, allowing it to catch up. Once the two massive planets got close enough, they locked into a special kind of relationship called an orbital resonance, where their orbital periods were rational—that is, expressible as a ratio of whole numbers. In a 2:1 orbital resonance, for example, Saturn would complete two orbits around the sun in the same amount of time that it took Jupiter to make a single orbit. In such a relationship, the two bodies would begin to exert a gravitational influence on one another.

"That resonance allowed the two planets to open up a mutual gap in the disk, and they started playing this game where they traded angular momentum and energy with one another, almost to a beat," says Batygin. Eventually, that back and forth would have caused all of the gas between the two worlds to be pushed out, a situation that would have reversed the planets' migration direction and sent them back outward in the solar system. (Hence, the "tack" part of the Grand Tack scenario: the planets migrate inward and then change course dramatically, something like a boat tacking around a buoy.)

In an earlier model developed by Bradley Hansen at UCLA, the terrestrial planets conveniently end up in their current orbits with their current masses under a particular set of circumstances—one in which all of the inner solar system's planetary building blocks, or planetesimals, happen to populate a narrow ring stretching from 0.7 to 1 astronomical unit (1 astronomical unit is the average distance from the sun to Earth), 10 million years after the sun's formation. According to the Grand Tack scenario, the outer edge of that ring would have been delineated by Jupiter as it moved toward the sun on its conveyor belt and cleared a gap in the disk all the way to Earth's current orbit.

But what about the inner edge? Why should the planetesimals be limited to the ring on the inside? "That point had not been addressed," says Batygin.

He says the answer could lie in primordial super-Earths. The empty hole of the inner solar system corresponds almost exactly to the orbital neighborhood where super-Earths are typically found around other stars. It is therefore reasonable to speculate that this region was cleared out in the primordial solar system by a group of first-generation planets that did not survive.

Batygin and Laughlin's calculations and simulations show that as Jupiter moved inward, it pulled all the planetesimals it encountered along the way into orbital resonances and carried them toward the sun. But as those planetesimals got closer to the sun, their orbits also became elliptical. "You cannot reduce the size of your orbit without paying a price, and that turns out to be increased ellipticity," explains Batygin. Those new, more elongated orbits caused the planetesimals, mostly on the order of 100 kilometers in radius, to sweep through previously unpenetrated regions of the disk, setting off a cascade of collisions among the debris. In fact, Batygin's calculations show that during this period, every planetesimal would have collided with another object at least once every 200 years, violently breaking them apart and sending them decaying into the sun at an increased rate.

The researchers did one final simulation to see what would happen to a population of super-Earths in the inner solar system if they were around when this cascade of collisions started. They ran the simulation on a well-known extrasolar system known as Kepler-11, which features six super-Earths with a combined mass 40 times that of Earth, orbiting a sun-like star. The result? The model predicts that the super-Earths would be shepherded into the sun by a decaying avalanche of planetesimals over a period of 20,000 years.

"It's a very effective physical process," says Batygin. "You only need a few Earth masses worth of material to drive tens of Earth masses worth of planets into the sun."

Batygin notes that when Jupiter tacked around, some fraction of the planetesimals it was carrying with it would have calmed back down into circular orbits. Only about 10 percent of the material Jupiter swept up would need to be left behind to account for the mass that now makes up Mercury, Venus, Earth, and Mars.

From that point, it would take millions of years for those planetesimals to clump together and eventually form the terrestrial planets—a scenario that fits nicely with measurements that suggest that Earth formed 100–200 million years after the birth of the sun. Since the primordial disk of hydrogen and helium gas would have been long gone by that time, this could also explain why Earth lacks a hydrogen atmosphere. "We formed from this volatile-depleted debris," says Batygin.

And that sets us apart in another way from the majority of exoplanets. Batygin expects that most exoplanets—which are mostly super-Earths—have substantial hydrogen atmospheres, because they formed at a point in the evolution of their planetary disk when the gas would have still been abundant. "Ultimately, what this means is that planets truly like Earth are intrinsically not very common," he says.

The paper also suggests that the formation of gas giant planets such as Jupiter and Saturn—a process that planetary scientists believe is relatively rare—plays a major role in determining whether a planetary system winds up looking something like our own or like the more typical systems with close-in super-Earths. As planet hunters identify additional systems that harbor gas giants, Batygin and Laughlin will have more data against which they can check their hypothesis—to see just how often other migrating giant planets set off collisional cascades in their planetary systems, sending primordial super-Earths into their host stars.

 The researchers describe their work in a paper titled "Jupiter's Decisive Role in the Inner Solar System's Early Evolution."

Writer: Kimm Fesenmaier

Caltech Scientists Develop Cool Process to Make Better Graphene

A new technique invented at Caltech to produce graphene—a material made up of an atom-thick layer of carbon—at room temperature could help pave the way for commercially feasible graphene-based solar cells and light-emitting diodes, large-panel displays, and flexible electronics.

"With this new technique, we can grow large sheets of electronic-grade graphene in much less time and at much lower temperatures," says Caltech staff scientist David Boyd, who developed the method.

Boyd is the first author of a new study, published in the March 18 issue of the journal Nature Communications, detailing the new manufacturing process and the novel properties of the graphene it produces.

Graphene could revolutionize a variety of engineering and scientific fields due to its unique properties, which include a tensile strength 200 times that of steel and an electrical mobility that is two to three orders of magnitude higher than that of silicon. The electrical mobility of a material is a measure of how easily electrons can travel across its surface.

However, achieving these properties on an industrially relevant scale has proven to be complicated. Existing techniques require temperatures of about 1,800 degrees Fahrenheit (1,000 degrees Celsius)—much too hot for integrating graphene fabrication into current electronics manufacturing. Additionally, high-temperature growth of graphene tends to induce large, uncontrollably distributed strain—deformation—in the material, which severely compromises its intrinsic properties.

"Previously, people were only able to grow a few square millimeters of high-mobility graphene at a time, and it required very high temperatures, long periods of time, and many steps," says Caltech physics professor Nai-Chang Yeh, the Fletcher Jones Foundation Co-Director of the Kavli Nanoscience Institute and the corresponding author of the new study. "Our new method can consistently produce high-mobility and nearly strain-free graphene in a single step in just a few minutes without high temperature. We have created sample sizes of a few square centimeters, and since we think that our method is scalable, we believe that we can grow sheets that are up to several square inches or larger, paving the way to realistic large-scale applications."

The new manufacturing process might not have been discovered at all if not for a fortunate turn of events. In 2012, Boyd, then working in the lab of the late David Goodwin, at that time a Caltech professor of mechanical engineering and applied physics, was trying to reproduce a graphene-manufacturing process he had read about in a scientific journal. In this process, heated copper is used to catalyze graphene growth. "I was playing around with it on my lunch hour," says Boyd, who now works with Yeh's research group. "But the recipe wasn't working. It seemed like a very simple process. I even had better equipment than what was used in the original experiment, so it should have been easier for me."

During one of his attempts to reproduce the experiment, the phone rang. While Boyd took the call, he unintentionally let a copper foil heat for longer than usual before exposing it to methane vapor, which provides the carbon atoms needed for graphene growth.

When Boyd later examined the copper plate using Raman spectroscopy, a technique used for detecting and identifying graphene, he saw evidence that a graphene layer had indeed formed. "It was an 'A-ha!' moment," Boyd says. "I realized then that the trick to growth is to have a very clean surface, one without the copper oxide."

As Boyd recalls, he then remembered that Robert Millikan, a Nobel Prize–winning physicist and the head of Caltech from 1921 to 1945, also had to contend with removing copper oxide when he performed his famous 1916 experiment to measure Planck's constant, which is important for calculating the amount of energy a single particle of light, or photon, contains. Boyd wondered if he, like Millikan, could devise a method for cleaning his copper while it was under vacuum conditions.



Schematic of the Caltech growth process for graphene.
(Courtesy of Nature Communications)

The solution Boyd hit upon was to use a system first developed in the 1960s to generate a hydrogen plasma—that is, hydrogen gas that has been electrified to separate the electrons from the protons—to remove the copper oxide at much lower temperatures. His initial experiments revealed not only that the technique worked to remove the copper oxide, but that it simultaneously produced graphene as well.

At first, Boyd could not figure out why the technique was so successful. He later discovered that two leaky valves were letting trace amounts of methane into the experimental chamber. "The valves were letting in just the right amount of methane for graphene to grow," he says.

The ability to produce graphene without the need for active heating not only reduces manufacturing costs, but also results in a better product because fewer defects—introduced as a result of thermal expansion and contraction processes—are generated. This in turn eliminates the need for multiple postproduction steps. "Typically, it takes about ten hours and nine to ten different steps to make a batch of high-mobility graphene using high-temperature growth methods," Yeh says. "Our process involves one step, and it takes five minutes."

Work by Yeh's group and international collaborators later revealed that graphene made using the new technique is of higher quality than graphene made using conventional methods: It is stronger because it contains fewer defects that could weaken its mechanical strength, and it has the highest electrical mobility yet measured for synthetic graphene.



Images of early-stage graphene growth on copper, shown at increasing magnification from left to right (scale bars: 10 μm, 1 μm, and 200 nm). The lines of hexagons are graphene nuclei, which grow together into a seamless sheet of graphene. (Courtesy of Nature Communications)

The team thinks one reason their technique is so efficient is that a chemical reaction between the hydrogen plasma and air molecules in the chamber's atmosphere generates cyano radicals—reactive carbon-nitrogen fragments with unpaired electrons. Like tiny superscrubbers, these reactive molecules effectively scour the copper of surface imperfections, providing a pristine surface on which to grow graphene.

The scientists also discovered that their graphene grows in a special way. Graphene produced using conventional thermal processes grows from a random patchwork of depositions. But graphene growth with the plasma technique is more orderly. The graphene deposits form lines that then grow into a seamless sheet, which contributes to its mechanical and electrical integrity.

A scaled-up version of their plasma technique could open the door for new kinds of electronics manufacturing, Yeh says. For example, graphene sheets with low concentrations of defects could be used to protect materials against degradation from exposure to the environment. Another possibility would be to grow large sheets of graphene that can be used as a transparent conducting electrode for solar cells and display panels. "In the future, you could have graphene-based cell-phone displays that generate their own power," Yeh says.



Atomically resolved scanning tunneling microscope images of graphene grown on a copper (111) single crystal, with increasing magnification from left to right. (Courtesy of Nature Communications)

Another possibility, she says, is to introduce intentional imperfections into graphene's lattice structure to create specific mechanical and electronic attributes. "If you can strain graphene by design at the nanoscale, you can artificially engineer its properties. But for this to work, you need to start with a perfectly smooth, strain-free sheet of graphene," Yeh says. "You can't do this if you have a sheet of graphene that has uncontrollable defects in different places."

Along with Yeh and Boyd, additional authors on the paper, "Single-Step Deposition of High-Mobility Graphene at Reduced Temperatures," include Caltech graduate students Wei Hsiang Lin, Chen Chih Hsu, and Chien-Chang Chen; Caltech staff scientist Marcus Teague; Yuan-Yen Lo, Tsung-Chih Cheng, and Chih-I Wu of National Taiwan University; and Wen-Yuan Chan, Wei-Bing Su, and Chia-Seng Chang of the Institute of Physics, Academia Sinica. Funding support for the study at Caltech was provided by the National Science Foundation, under the Institute of Quantum Information and Matter, and by the Gordon and Betty Moore Foundation and the Kavli Foundation through the Kavli Nanoscience Institute. The work in Taiwan was supported by the Taiwanese National Science Council.

Images reprinted from Nature Communications, "Single-Step Deposition of High-Mobility Graphene at Reduced Temperatures," March 18, 2015, with permission from Nature Communications.


Friction Means Antarctic Glaciers More Sensitive to Climate Change Than We Thought

One of the biggest unknowns in understanding the effects of climate change today is the melting rate of glacial ice in Antarctica. Scientists agree rising atmospheric and ocean temperatures could destabilize these ice sheets, but there is uncertainty about how fast they will lose ice.

The West Antarctic Ice Sheet is of particular concern to scientists because it contains enough ice to raise global sea level by up to 16 feet, and its physical configuration makes it susceptible to melting by warm ocean water. Recent studies have suggested that the collapse of certain parts of the ice sheet is inevitable. But will that process take several decades or centuries?

Research by Caltech scientists now suggests that estimates of future rates of melt for the West Antarctic Ice Sheet—and, by extension, of future sea-level rise—have been too conservative. In a new study, published online on March 9 in the Journal of Glaciology, a team led by Victor Tsai, an assistant professor of geophysics, found that properly accounting for Coulomb friction—a type of friction generated by solid surfaces sliding against one another—in computer models significantly increases estimates of how sensitive the ice sheet is to temperature perturbations driven by climate change.

Unlike other ice sheets that are moored to land above the ocean, most of West Antarctica's ice sheet is grounded on a sloping rock bed that lies below sea level. In the past decade or so, scientists have focused on the coastal part of the ice sheet where the land ice meets the ocean, called the "grounding line," as vital for accurately determining the melting rate of ice in the southern continent.

"Our results show that the stability of the whole ice sheet and our ability to predict its future melting is extremely sensitive to what happens in a very small region right at the grounding line. It is crucial to accurately represent the physics here in numerical models," says study coauthor Andrew Thompson, an assistant professor of environmental science and engineering at Caltech.

Part of the seafloor on which the West Antarctic Ice Sheet rests slopes upward toward the ocean in what scientists call a "reverse slope gradient." The end of the ice sheet also floats on the ocean surface so that ocean currents can deliver warm water to its base and melt the ice from below. Scientists think this "basal melting" could cause the grounding line to retreat inland, where the ice sheet is thicker. Because ice thickness is a key factor in controlling ice discharge near the coast, scientists worry that the retreat of the grounding line could accelerate the rate of interior ice flow into the oceans. Grounding line recession also contributes to the thinning and melting away of the region's ice shelves—thick, floating extensions of the ice sheet that help reduce the flow of ice into the sea.

According to Tsai, many earlier models of ice sheet dynamics tried to simplify calculations by assuming that ice loss is controlled solely by viscous stresses, that is, forces that apply to "sticky fluids" such as honey—or in this case, flowing ice. The conventional models thus accounted for the flow of ice around obstacles but ignored friction. "Accounting for frictional stresses at the ice sheet bottom in addition to the viscous stresses changes the physical picture dramatically," Tsai says.

In their new study, Tsai's team used computer simulations to show that even though Coulomb friction affects only a relatively small zone on an ice sheet, it can have a big impact on ice stream flow and overall ice sheet stability.

In most previous models, the ice sheet sits firmly on the bed and generates a downward stress that helps keep it attached to the seafloor. Furthermore, these models assume that this stress remains constant up to the grounding line, where the ice sheet floats, at which point the stress disappears.

Tsai and his team argue that their model provides a more realistic representation—in which the stress on the bottom of the ice sheet gradually weakens as one approaches the coasts and grounding line, because the weight of the ice sheet is increasingly counteracted by water pressure at the glacier base. "Because a strong basal shear stress cannot occur in the Coulomb model, it completely changes how the forces balance at the grounding line," Thompson says.
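
In a standard Coulomb description (written here generically; the paper's own notation may differ), the basal shear stress is capped by the effective pressure, which vanishes as the ice approaches flotation:

```latex
\tau_b \;\le\; f\,N, \qquad N \;=\; \rho_i g H \;-\; \rho_w g D
```

Here f is a friction coefficient, ρ_i and ρ_w are the densities of ice and ocean water, H is the ice thickness, and D is the depth of the bed below sea level. At flotation, ρ_i H = ρ_w D, so N goes to zero and the basal stress tapers smoothly to nothing at the grounding line, which is exactly the gradual weakening the model describes.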

Tsai says the idea of investigating the effects of Coulomb friction on ice sheet dynamics came to him after rereading a classic study on the topic by American metallurgist and glaciologist Johannes Weertman from Northwestern University. "I wondered how might the behavior of the ice sheet differ if one factored in this water-pressure effect from the ocean, which Weertman didn't know would be important when he published his paper in 1974," Tsai says.

Tsai thought about how this could be achieved and realized the answer might lie in another field in which he is actively involved: earthquake research. "In seismology, Coulomb friction is very important because earthquakes are thought to be the result of the edge of one tectonic plate sliding frictionally against the edge of another plate," Tsai says. "This ice sheet research came about partly because I'm working on both glaciology and earthquakes."

If the team's Coulomb model is correct, it could have important implications for predictions of ice loss in Antarctica as a result of climate change. Indeed, for any given increase in temperature, the model predicts a bigger change in the rate of ice loss than is forecasted in previous models. "We predict that the ice sheets are more sensitive to perturbations such as temperature," Tsai says.

Hilmar Gudmundsson, a glaciologist with the British Antarctic Survey in Cambridge, UK, called the team's results "highly significant." "Their work gives further weight to the idea that a marine ice sheet, such as the West Antarctic Ice Sheet, is indeed, or at least has the potential to become, unstable," says Gudmundsson, who was not involved in the study.

Glaciologist Richard Alley, of Pennsylvania State University, noted that historical studies have shown that ice sheets can remain stable for centuries or millennia and then switch to a different configuration suddenly.

"If another sudden switch happens in West Antarctica, sea level could rise a lot, so understanding what is going on at the grounding lines is essential," says Alley, who also did not participate in the research.

"Tsai and coauthors have taken another important step in solving this difficult problem," he says.

Along with Tsai and Thompson, Andrew Stewart, an assistant professor of atmospheric and oceanic sciences at UCLA, was also a coauthor on the paper, "Marine ice sheet profiles and stability under Coulomb basal conditions." Funding support for the study was provided by Caltech's President's and Director's Fund program and the Stanback Discovery Fund for Global Environmental Science.


One Step Closer to Artificial Photosynthesis and "Solar Fuels"

Caltech scientists, inspired by a chemical process found in leaves, have developed an electrically conductive film that could help pave the way for devices capable of harnessing sunlight to split water into hydrogen fuel.

When applied to semiconducting materials such as silicon, the nickel oxide film prevents rust buildup and facilitates an important chemical process in the solar-driven production of fuels such as methane or hydrogen.

"We have developed a new type of protective coating that enables a key process in the solar-driven production of fuels to be performed with record efficiency, stability, and effectiveness, and in a system that is intrinsically safe and does not produce explosive mixtures of hydrogen and oxygen," says Nate Lewis, the George L. Argyros Professor and professor of chemistry at Caltech and a coauthor of a new study, published the week of March 9 in the online issue of the journal the Proceedings of the National Academy of Sciences, that describes the film.

The development could help lead to safe, efficient artificial photosynthetic systems—also called solar-fuel generators or "artificial leaves"—that replicate the natural process of photosynthesis that plants use to convert sunlight, water, and carbon dioxide into oxygen and fuel in the form of carbohydrates, or sugars.

The artificial leaf that Lewis' team is developing in part at Caltech's Joint Center for Artificial Photosynthesis (JCAP) consists of three main components: two electrodes—a photoanode and a photocathode—and a membrane. The photoanode uses sunlight to oxidize water molecules to generate oxygen gas, protons, and electrons, while the photocathode recombines the protons and electrons to form hydrogen gas. The membrane, which is typically made of plastic, keeps the two gases separate in order to eliminate any possibility of an explosion, and lets the gas be collected under pressure to safely push it into a pipeline.
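
In textbook terms, the two electrodes carry out complementary half-reactions of water splitting, and the membrane keeps the gaseous products apart:

```latex
\begin{aligned}
\text{photoanode (oxidation):}\quad & 2\,\mathrm{H_2O} \longrightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{photocathode (reduction):}\quad & 4\,\mathrm{H^+} + 4\,e^- \longrightarrow 2\,\mathrm{H_2} \\
\text{net:}\quad & 2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
\end{aligned}
```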

Scientists have tried building the electrodes out of common semiconductors such as silicon or gallium arsenide—which absorb light and are also used in solar panels—but a major problem is that these materials develop an oxide layer (that is, rust) when exposed to water.

Lewis and other scientists have experimented with creating protective coatings for the electrodes, but all previous attempts have failed for various reasons. "You want the coating to be many things: chemically compatible with the semiconductor it's trying to protect, impermeable to water, electrically conductive, highly transparent to incoming light, and highly catalytic for the reaction to make oxygen and fuels," says Lewis, who is also JCAP's scientific director. "Creating a protective layer that displayed any one of these attributes would be a significant leap forward, but what we've now discovered is a material that can do all of these things at once."

The team has shown that its nickel oxide film is compatible with many different kinds of semiconductor materials, including silicon, indium phosphide, and cadmium telluride. When applied to photoanodes, the nickel oxide film far exceeded the performance of other similar films—including one that Lewis's group created just last year. That film was more complicated—it consisted of two layers versus one and used as its main ingredient titanium dioxide (TiO2, also known as titania), a naturally occurring compound that is also used to make sunscreens, toothpastes, and white paint.

"After watching the photoanodes run at record performance without any noticeable degradation for 24 hours, and then 100 hours, and then 500 hours, I knew we had done what scientists had failed to do before," says Ke Sun, a postdoc in Lewis's lab and the first author of the new study.

Lewis's team developed a technique for creating the nickel oxide film that involves smashing atoms of argon into a pellet of nickel atoms at high speeds, in an oxygen-rich environment. "The nickel fragments that sputter off of the pellet react with the oxygen atoms to produce an oxidized form of nickel that gets deposited onto the semiconductor," Lewis says.

Crucially, the team's nickel oxide film works well in conjunction with the membrane that separates the photoanode from the photocathode and keeps the hydrogen and oxygen gases apart as they are produced.

"Without a membrane, the photoanode and photocathode are close enough to each other to conduct electricity, and if you also have bubbles of highly reactive hydrogen and oxygen gases being produced in the same place at the same time, that is a recipe for disaster," Lewis says. "With our film, you can build a safe device that will not explode, and that lasts and is efficient, all at once."

Lewis cautions that scientists are still a long way off from developing a commercial product that can convert sunlight into fuel. Other components of the system, such as the photocathode, will also need to be perfected.

"Our team is also working on a photocathode," Lewis says. "What we have to do is combine both of these elements together and show that the entire system works. That will not be easy, but we now have one of the missing key pieces that has eluded the field for the past half-century."

Along with Lewis and Sun, additional authors on the paper, "Stable solar-driven oxidation of water by semiconducting photoanodes protected by transparent catalytic nickel oxide films," include Caltech graduate students Fadl Saadi, Michael Lichterman, Xinghao Zhou, Noah Plymale, and Stefan Omelchenko; William Hale, from the University of Southampton; Hsin-Ping Wang and Jr-Hau He, from King Abdullah University of Science and Technology in Saudi Arabia; Kimberly Papadantonakis, a scientific research manager at Caltech; and Bruce Brunschwig, the director of the Molecular Materials Research Center at Caltech. Funding was provided by the Office of Science at the U.S. Department of Energy, the National Science Foundation, the Beckman Institute, and the Gordon and Betty Moore Foundation.


Research Suggests Brain's Melatonin May Trigger Sleep

If you walk into your local drug store and ask for a supplement to help you sleep, you might be directed to a bottle labeled "melatonin." The hormone supplement's use as a sleep aid is supported by anecdotal evidence and even some reputable research studies. However, our bodies also make melatonin naturally, and until a recent Caltech study using zebrafish, no one knew how—or even if—this melatonin contributed to our natural sleep. The new work suggests that even in the absence of a supplement, naturally occurring melatonin may help us fall and stay asleep.

The study was published online March 5 in the journal Neuron.

"When we first tell people that we're testing whether melatonin is involved in sleep, the response is often, 'Don't we already know that?'" says Assistant Professor of Biology David Prober. "This is a reasonable response based on articles in newspapers and melatonin products available on the Internet. However, while some scientific studies show that supplemental melatonin can help to promote sleep, many studies failed to observe this, so the effectiveness of melatonin supplements is controversial. More importantly, these studies don't tell you anything about what naturally occurring melatonin normally does in the body."

There are several factors at play when you are starting to feel tired. Sleep is thought to be regulated by two mechanisms: a homeostatic mechanism, which responds to the body's internal cues for sleep, and a circadian mechanism that responds to external cues such as darkness and light, signaling appropriate times for sleep and wakefulness.

For years, researchers have known that melatonin production is regulated by the circadian clock, and that animals produce more of the hormone at night than they do during the day. However, this fact alone is not enough to prove that melatonin promotes sleep. For example, although nocturnal animals sleep during the day and are active at night, they also produce the most melatonin at night.

In the hopes of determining, once and for all, what role the hormone actually plays in sleep, Prober and his team at Caltech designed an experiment using the larvae of zebrafish, an organism commonly used in research studies because of its small size and well-characterized genome. Like humans, zebrafish are also diurnal—awake during the day and asleep at night—and produce melatonin at night.

But how exactly can you tell if a young zebrafish has fallen asleep? There are behavioral criteria—including how long a zebrafish takes to respond to a stimulus, like a knock on the tank, for example. "Based on these criteria, we found that if the zebrafish larvae don't move for one or more minutes, they are in a sleep-like state," Prober says.
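
As a minimal sketch of that rest-bout criterion (hypothetical code, not the study's tracking pipeline), one can scan a per-second movement trace and flag any stretch of a minute or more without movement:

```python
# Minimal sketch (hypothetical, not the study's pipeline): flag any
# stretch of >= 60 s without movement as a sleep-like bout, given a
# per-second boolean activity trace for one larva.

def sleep_bouts(active, dt=1.0, min_rest=60.0):
    """Return (start, end) sample indices of rest bouts lasting at
    least `min_rest` seconds."""
    bouts, start = [], None
    for i, moving in enumerate(active):
        if not moving and start is None:
            start = i                         # rest begins
        elif moving and start is not None:
            if (i - start) * dt >= min_rest:  # long enough to count
                bouts.append((start, i))
            start = None                      # rest ends
    if start is not None and (len(active) - start) * dt >= min_rest:
        bouts.append((start, len(active)))    # trace ends mid-bout
    return bouts

# Example: 5 minutes of tracking with one 90-second quiet stretch.
trace = [True] * 300
trace[100:190] = [False] * 90
print(sleep_bouts(trace))  # [(100, 190)]
```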

To test the effect of naturally occurring melatonin on sleep, the researchers first compared the sleep patterns of normal, or "wild-type," zebrafish larvae to those of zebrafish larvae that are unable to produce the hormone because of a mutation in a gene called aanat2. They found that fish with the mutation slept only half as long as normal fish. And although a normal zebrafish begins to fall asleep about 10 minutes after "lights out"—about the same amount of time it takes a human to fall asleep—the aanat2 mutant fish took about twice as long.

"This result was surprising because it suggests that almost half of the sleep that the larvae are getting at night is due to the effects of melatonin," Prober says. "That suggests that melatonin normally plays an important role in sleep and that you need this natural melatonin both to fall asleep and to stay asleep."

In both humans and zebrafish, melatonin is produced in a part of the brain called the pineal gland. To confirm that the mutation-induced reduction in sleep was actually due to a lack of melatonin, the researchers next used a drug to specifically kill the cells of the pineal gland, thus halting the hormone's production. The drug-treated fish showed the same reduction in sleep as fish with mutated aanat2. When the drug treatment stopped, allowing pineal gland cells to regenerate, the fish returned to a normal sleep pattern.

Sleep patterns, like many other biological and behavioral processes, are known to be regulated by the circadian clock. In an organism, the circadian clock aligns these processes with daily changes in the environment, such as light during the day and darkness at night. However, while a great deal is known about how the circadian clock works, it was not known how the clock regulates sleep. Because the researchers had determined that melatonin is involved in promoting natural sleep, they next asked whether melatonin mediates the circadian regulation of sleep.

They first raised both wild-type and aanat2 mutant zebrafish larvae in a normal light/dark cycle—14 hours of light followed by 10 hours of darkness—to entrain their circadian clocks. Then, when the larvae were 5 days old, they switched both populations to an environment of constant darkness. In this "free-running" condition, the circadian clock continues to function in the absence of daily light and dark signals from the environment. As expected, the wild-type fish maintained their regular circadian sleep cycle. The melatonin-lacking aanat2 mutants, however, showed no cyclical sleep patterns.

"This was really surprising," says Prober. "For years, people have been looking in rodents for a factor that's required for the circadian regulation of sleep and have found a few other candidate molecules that, like melatonin, are regulated by the circadian clock and can induce sleep when given as supplements. However, mutants that lack these factors had normal circadian sleep cycles," says Prober. "One thought was that maybe all of these molecules work together and that you'd have to make mutations in multiple genes to see an effect. But we found that eliminating one molecule, melatonin, is the whole show. It's one of those rare and surprisingly clear results."

After finding that melatonin is necessary for the circadian regulation of sleep, Prober next wanted to ask how it does this. To find out, Prober and his colleagues looked to a neuromodulator called adenosine—part of the homeostatic mechanism that promotes sleep. As an animal expends energy throughout the day, adenosine accumulates in the brain, causing the animal to feel more and more tired—a pressure that is relieved through sleep.

The researchers treated both wild-type and melatonin-deficient aanat2 mutant fish with drugs that activate adenosine signaling. They found that although the drugs had no effect on the wild-type fish, they restored normal sleep amounts in aanat2 mutants. This result suggests that melatonin may be promoting sleep, in part, by turning on adenosine—providing a long sought-after link between the homeostatic and circadian processes that regulate sleep.

Prober and his colleagues hypothesize that the circadian clock drives the production of melatonin, which then promotes sleep through yet-to-be-determined mechanisms while also stimulating adenosine production, thus promoting sleep through the homeostatic pathway. Although more experiments are needed to confirm this model, Prober says that the preliminary results may offer insights about human sleep as well.

"Zebrafish are vertebrates and their brain is structurally similar to ours. All of the markers that we and others have tested are expressed in the same regions of the zebrafish brain as in the mammalian brain," he says. "Zebrafish sleep and human sleep are likely different in some ways, but all of our drug and genetic data indicate that the same factors—working through the same mechanisms—have similar effects on sleep in zebrafish and mammals. "

Prober's work with the circadian regulation of sleep follows in the conceptual—and physical—footsteps of the late Caltech geneticist Seymour Benzer, who founded genetic studies of the circadian clock. In experiments in fruit flies, Benzer and his graduate student, the late Ronald Konopka (PhD '72), discovered the first circadian-rhythm mutants. Benzer passed away in 2007, and when Prober came to Caltech in 2009, he was offered Benzer's former office and lab space. "Seymour Benzer's work in fruit flies launched the beginning of our understanding of the molecular circadian clock," Prober says, "so it's really special to be in this space, and it's gratifying that we're taking the next step based on his work."

The results of Prober's study are published in the journal Neuron in an article titled, "Melatonin is required for the circadian regulation of sleep." Other Caltech coauthors on the paper are graduate student Avni Gandhi and postdoctoral scholars Eric Mosser and Grigorios Oikonomou. This work was funded by grants from the National Institutes of Health, the Mallinckrodt Foundation, the Rita Allen Foundation, and the Brain and Behavior Research Foundation, as well as by a Della Martin Postdoctoral Fellowship to Mosser.

Frontpage Title: 
Feeling Sleepy? Might be the Melatonin
Listing Title: 
Feeling Sleepy? Might be the Melatonin
Writer: 
Exclude from News Hub: 
No
Short Title: 
Feeling Sleepy? Might be the Melatonin
News Type: 
Research News

Fighting a Worm with Its Own Genome

Tiny parasitic hookworms infect nearly half a billion people worldwide—almost exclusively in developing countries—causing health problems ranging from gastrointestinal issues to cognitive impairment and stunted growth in children. By sequencing and analyzing the genome of one particular hookworm species, Caltech researchers have uncovered new information that could aid the fight against these parasites.  

The results of their work were published online in the March 2 issue of the journal Nature Genetics.

"Hookworms infect a huge percentage of the human population. Getting clean water and sanitation to the most affected regions would help to ameliorate hookworms and a number of other parasites, but since these are big, complicated challenges that are difficult to address, we need to also be working on drugs to treat them," says study lead Paul Sternberg, the Thomas Hunt Morgan Professor of Biology at Caltech and a Howard Hughes Medical Institute investigator.

Medicines have been developed to treat hookworm infections, but the parasites have begun to develop resistance to these drugs. As part of the search for effective new drugs, Sternberg and his colleagues investigated the genome of a hookworm species known as Ancylostoma ceylanicum. Other hookworm species cause more disease among humans, but A. ceylanicum piqued the interest of the researchers because it also infects some species of rodents that are commonly used for research. This means that the researchers can easily study the parasite's entire infection process inside the laboratory.

The team began by sequencing all 313 million nucleotides of the A. ceylanicum genome using the next-generation sequencing capabilities of the Millard and Muriel Jacobs Genetics and Genomics Laboratory at Caltech. In next-generation sequencing, a large amount of DNA—such as a genome—is first read out as millions of very short sequences. Computer programs then look for sequence that the short strands share and use those overlaps to piece the strands together into much longer ones.

"Assembling the short sequences correctly can be a relatively difficult analysis to carry out, but we have experience sequencing worm genomes in this way, so we are quite successful," says Igor Antoshechkin, director of the Jacobs Laboratory. 

Their sequencing results revealed that although the A. ceylanicum genome is only about 10 percent of the size of the human genome, it actually encodes at least 30 percent more genes—about 30,000 in total, compared to approximately 20,000-23,000 in the human genome. Of these 30,000 genes, however, the most relevant to drug development are the ones that are switched on specifically while the parasite is wreaking havoc on its host.
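
Those ratios are easy to check with round numbers. The human-genome figures below (roughly 3.2 billion bases and 20,000-23,000 genes) are widely cited estimates, not numbers from the paper itself.

```python
# Rough check of the genome-size and gene-count comparison above.
worm_bases, human_bases = 313e6, 3.2e9          # ~3.2 Gb is a common human-genome estimate
print(f"{worm_bases / human_bases:.0%}")        # 10% -> "about 10 percent of the size"
worm_genes, human_genes = 30_000, 23_000        # conservative (high) human gene count
print(f"{worm_genes / human_genes - 1:.0%}")    # 30% -> "at least 30 percent more genes"
```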

Sternberg and his colleagues wanted to learn more about those active genes, so they looked not to DNA but to RNA—the genetic material that is generated (or transcribed) from the DNA template of active genes and from which proteins are made. Specifically, they examined the RNA generated in an A. ceylanicum worm during infection. Using this RNA, the team found more than 900 genes that are turned on only when the worm infects its host—including 90 genes that belong to a never-before-characterized gene family called activation-associated secreted protein related genes, or ASPRs.
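
Conceptually, that screen is a filter over per-condition expression levels: keep the genes whose transcripts appear during infection but not outside it. The Python sketch below is purely illustrative; the gene names, values, and thresholds are invented, and the study's actual RNA analysis was more involved.

```python
# Hypothetical expression levels (e.g., normalized read counts) per gene.
expression = {
    "aspr-1": {"infection": 152.0, "free_living": 0.0},
    "aspr-2": {"infection": 87.5, "free_living": 0.2},
    "act-1":  {"infection": 40.0, "free_living": 38.0},  # housekeeping gene
}

def infection_specific(expr, on=10.0, off=1.0):
    """Genes expressed during infection (>= on) but essentially silent otherwise (< off)."""
    return sorted(
        gene for gene, levels in expr.items()
        if levels["infection"] >= on and levels["free_living"] < off
    )

print(infection_specific(expression))  # ['aspr-1', 'aspr-2']
```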

"If you go back and look at other parasitic worms, you notice that they have these ASPRs as well," Sternberg says. "So basically we found this new family of proteins that are unique to parasitic worms, and they are related to this early infection process." Since the worm secretes these ASPR proteins early in the infection, the researchers think that these proteins might block the host's initial immune response—preventing the host's blood from clotting and ensuring a free-flowing food source for the blood-sucking parasite.

If ASPRs are necessary for this parasite to invade the host, then a drug that targets and destroys the proteins could one day be used to fight the parasite. Unfortunately, however, it is probably not that simple, Sternberg says.

"If we have 90 of these ASPRs, it might be that a drug would get rid of just a few of them and stop the infection, but maybe you'd have to get rid of all 90 of them for it to work. And that's a problem," he says. "It's going to take a lot more careful study to understand the functions of these ASPRs so we can target the ones that are key regulatory molecules."

Drugs that target ASPRs might one day be used to treat these parasitic infections, but these proteins also hold potential for anti-A. ceylanicum vaccines—which would prevent these parasites from infecting a host in the first place, Sternberg adds. For example, if a person were injected with an ASPR protein vaccine before traveling to an infection-prone region, their immune system might be more prepared to successfully fend off an infection.

"A parasitic infection is a balance between the parasites trying to suppress the immune system and the host trying to attack the parasite," says Sternberg. "And we hope that by analyzing the genome, we can uncover clues that might help us alter that balance in favor of the host."

These findings were published in a paper titled, "The genome and transcriptome of the zoonotic hookworm Ancylostoma ceylanicum identify infection-specific gene families." In addition to Sternberg and Antoshechkin, other coauthors include Erich M. Schwarz of Cornell University and Yan Hu, Melanie Miller, and Raffi V. Aroian of UC San Diego. Sternberg's work was funded by the National Institutes of Health and the Howard Hughes Medical Institute.

Frontpage Title: 
Knocking Out Parasites with Their Own Genetic Code
Listing Title: 
Knocking Out Parasites with Their Own Genetic Code
Writer: 
Exclude from News Hub: 
No
Short Title: 
Fighting a Worm with Its Own Genome
News Type: 
Research News
