Caltech-Led Team Looks in Detail at the April 2015 Earthquake in Nepal

For more than 20 years, Caltech geologist Jean-Philippe Avouac has collaborated with the Department of Mines and Geology of Nepal to study the Himalayas—the most active above-water mountain range on Earth—to learn more about the processes that build mountains and trigger earthquakes. Over that period, he and his colleagues have installed a network of GPS stations in Nepal that allows them to monitor the way Earth's crust moves during and in between earthquakes. So when he heard on April 25 that a magnitude 7.8 earthquake had struck near Gorkha, Nepal, not far from Kathmandu, he thought he knew what to expect—utter devastation throughout Kathmandu and a death toll in the hundreds of thousands.

"At first when I saw the news trickling in from Kathmandu, I thought there was a problem of communication, that we weren't hearing the full extent of the damage," says Avouac, Caltech's Earle C. Anthony Professor of Geology. "As it turns out, there was little damage to the regular dwellings, and thankfully, as a result, there were far fewer deaths than I originally anticipated."

Using data from the GPS stations and from an accelerometer that measures ground motion in Kathmandu, along with records from seismological stations around the world and radar images collected by orbiting satellites, an international team of scientists led by Caltech has pieced together the first complete account of what physically happened during the Gorkha earthquake—a picture that explains how the large earthquake wound up leaving the majority of low-story buildings unscathed while devastating some treasured taller structures.

The findings are described in two papers that now appear online. The first, in the journal Nature Geoscience, is based on an analysis of seismological records collected more than 1,000 kilometers from the epicenter and places the event in the context of what scientists knew of the seismic setting near Gorkha before the earthquake. The second paper, appearing in Science Express, goes into finer detail about the rupture process during the April 25 earthquake and how it shook the ground in Kathmandu.


Video: Build Up and Release of Strain on Himalaya Megathrust

In the first study, the researchers show that the earthquake occurred on the Main Himalayan Thrust (MHT), the main megathrust fault along which northern India is pushing beneath Eurasia at a rate of about two centimeters per year, driving the Himalayas upward. Based on GPS measurements, scientists know that a large portion of this fault is "locked." Large earthquakes typically release stress on such locked faults—as the lower tectonic plate (here, the Indian plate) pulls the upper plate (here, the Eurasian plate) downward, strain builds in these locked sections until the upper plate breaks free, releasing strain and producing an earthquake. There are areas along the fault in western Nepal that are known to be locked and have not experienced a major earthquake since a big one (larger than magnitude 8.5) in 1505. But the Gorkha earthquake ruptured only a small fraction of the locked zone, so there is still the potential for the locked portion to produce a large earthquake.
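
For a rough sense of scale, the numbers quoted above imply a sizable slip deficit on the locked section. A back-of-the-envelope sketch (it assumes, as an idealization, that all of the two-centimeter-per-year convergence has accumulated on the locked fault since the 1505 event):

    # Back-of-the-envelope slip-deficit estimate for the locked section of the
    # Main Himalayan Thrust in western Nepal. Idealization: the full convergence
    # rate accumulates as slip deficit on the locked fault.
    convergence_rate_m_per_yr = 0.02          # ~2 cm/yr, from GPS (quoted above)
    years_since_last_rupture = 2015 - 1505    # last great rupture in 1505

    slip_deficit_m = convergence_rate_m_per_yr * years_since_last_rupture
    print(f"Accumulated slip deficit: ~{slip_deficit_m:.0f} m")  # ~10 m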

"The Gorkha earthquake didn't do the job of transferring deformation all the way to the front of the Himalaya," says Avouac. "So the Himalaya could certainly generate larger earthquakes in the future, but we have no idea when."

The April 25 rupture began in the Gorkha District of Nepal, at an epicenter 75 kilometers west-northwest of Kathmandu, and propagated eastward at a rate of about 2.8 kilometers per second, causing slip in the north-south direction—a progression that the researchers describe as "unzipping" a section of the locked fault.

"With the geological context in Nepal, this is a place where we expect big earthquakes. We also knew, based on GPS measurements of the way the plates have moved over the last two decades, how 'stuck' this particular fault was, so this earthquake was not a surprise," says Jean Paul Ampuero, assistant professor of seismology at Caltech and coauthor on the Nature Geoscience paper. "But with every earthquake there are always surprises."


Video: Propagation of April 2015 Mw 7.8 Gorkha Earthquake

In this case, one of the surprises was that the quake did not rupture all the way to the surface. Records of past earthquakes on the same fault—including a powerful one (possibly as strong as magnitude 8.4) that shook Kathmandu in 1934—indicate that ruptures have previously reached the surface. But Avouac, Ampuero, and their colleagues used satellite Synthetic Aperture Radar data and a technique called back projection that takes advantage of the dense arrays of seismic stations in the United States, Europe, and Australia to track the progression of the earthquake, and found that it was quite contained at depth. The high-frequency waves were largely produced in the lower section of the rupture, at a depth of about 15 kilometers.
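
Back projection itself is conceptually simple: records from a dense seismic array are time-shifted for each trial source location and summed, so that energy radiated from the true source adds up coherently. A minimal delay-and-sum sketch (the function names and the travel-time model are placeholders of my own, not the authors' code):

    import numpy as np

    def back_project(traces, station_coords, grid_points, travel_time, t, dt):
        """Delay-and-sum back projection.

        traces:       (n_stations, n_samples) array of high-frequency seismograms
        travel_time:  function(grid_point, station) -> predicted travel time (s);
                      a placeholder for a real 1-D or 3-D velocity model
        Returns the stacked beam power at each trial grid point for origin time t.
        """
        power = np.zeros(len(grid_points))
        for i, gp in enumerate(grid_points):
            stack = 0.0
            for trace, sta in zip(traces, station_coords):
                shift = int(round((t + travel_time(gp, sta)) / dt))
                if 0 <= shift < trace.size:
                    stack += trace[shift]   # aligned arrivals add coherently
            power[i] = stack ** 2           # energy peaks at the true source
        return power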

"That was good news for Kathmandu," says Ampuero. "If the earthquake had broken all the way to the surface, it could have been much, much worse."

The researchers note, however, that the Gorkha earthquake did increase the stress on the adjacent portion of the fault that remains locked, closer to Kathmandu. It is unclear whether this additional stress will eventually trigger another earthquake or if that portion of the fault will "creep," a process that allows the two plates to move slowly past one another, dissipating stress. The researchers are building computer models and monitoring post-earthquake deformation of the crust to try to determine which scenario is more likely.

Another surprise from the earthquake, one that explains why many of the homes and other buildings in Kathmandu were spared, is described in the Science Express paper. Avouac and his colleagues found that for such a large-magnitude earthquake, high-frequency shaking in Kathmandu was actually relatively mild. And it is high-frequency waves, with short periods of vibration of less than one second, that tend to affect low-story buildings. The Nature Geoscience paper showed that the high-frequency waves that the quake produced came from the deeper edge of the rupture, on the northern end away from Kathmandu.

The GPS records described in the Science Express paper show that within the zone that experienced the greatest amount of slip during the earthquake—a region south of the sources of high-frequency waves and closer to Kathmandu—the onset of slip on the fault was actually very smooth. It took nearly two seconds for the slip rate to reach its maximum value of one meter per second. In general, the more abrupt the onset of slip during an earthquake, the more energetic the radiated high-frequency seismic waves. So the relatively gradual onset of slip in the Gorkha event explains why this patch, which experienced a large amount of slip, did not generate many high-frequency waves.
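
The connection between onset abruptness and high-frequency radiation can be illustrated numerically: the more gently the slip rate ramps up, the faster its spectrum falls off at high frequency. A toy comparison (illustrative source time functions, not the slip model inverted in the paper):

    import numpy as np

    dt = 0.01
    t = np.arange(0.0, 20.0, dt)   # seconds
    peak = 1.0                     # peak slip rate, ~1 m/s (from the paper)

    def slip_rate(rise_s):
        """Toy source time function: ramp to peak over rise_s, hold, decay smoothly."""
        up = 0.5 * (1 - np.cos(np.pi * np.clip(t / rise_s, 0.0, 1.0)))
        down = 0.5 * (1 + np.cos(np.pi * np.clip((t - 6.0) / 4.0, 0.0, 1.0)))
        return peak * up * down

    for name, rise in [("smooth ~2 s onset", 2.0), ("abrupt onset", 0.02)]:
        spec = np.abs(np.fft.rfft(slip_rate(rise)))
        freq = np.fft.rfftfreq(t.size, dt)
        hf = spec[freq > 1.0].sum() / spec.sum()  # share of amplitude above 1 Hz
        print(f"{name}: high-frequency share = {hf:.4f}")  # abrupt is far larger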

"It would be good news if the smooth onset of slip, and hence the limited induced shaking, were a systematic property of the Himalayan megathrust fault, or of megathrust faults in general." says Avouac. "Based on observations from this and other megathrust earthquakes, this is a possibility."

In contrast to what they saw with high-frequency waves, the researchers found that the earthquake produced an unexpectedly large amount of low-frequency waves with longer periods of about five seconds. This longer-period shaking was responsible for the collapse of taller structures in Kathmandu, such as the Dharahara Tower, a 60-meter-high tower that survived larger earthquakes in 1833 and 1934 but collapsed completely during the Gorkha quake.

To understand this, consider plucking the strings of a guitar. Each string resonates at a certain natural frequency, or pitch, depending on the length, composition, and tension of the string. Likewise, buildings and other structures have a natural pitch or frequency of shaking at which they resonate; in general, the taller the building, the longer the period at which it resonates. If a strong earthquake causes the ground to shake with a frequency that matches a building's pitch, the shaking will be amplified within the building, and the structure will likely collapse.
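
A common engineering rule of thumb, not cited in the article but useful for intuition, puts a building's fundamental period at roughly a tenth of a second per story, which makes the divide between low-rise and high-rise vulnerability easy to see:

    # Rough fundamental periods using the ~0.1 s-per-story rule of thumb
    # (a standard approximation for framed buildings; illustrative only).
    for stories in (2, 5, 20, 50):
        period_s = 0.1 * stories
        print(f"{stories:2d}-story building: natural period ~{period_s:.1f} s")
    # A 2-story house (~0.2 s) responds to the high-frequency shaking, which was
    # mild at Gorkha; a structure with a period near 5 s matches the long-period
    # waves described below.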

Turning to the GPS records from two of Avouac's stations in the Kathmandu Valley, the researchers found that the effect of the low-frequency waves was amplified by the geological context of the Kathmandu basin. The basin is an ancient lakebed that is now filled with relatively soft sediment. For about 40 seconds, seismic waves from the quake remained trapped within the basin and continued to reverberate, ringing like a bell with a period of about five seconds.

"That's just the right frequency to damage tall buildings like the Dharahara Tower because it's close to their natural period," Avouac explains.

In follow-up work, Domniki Asimaki, professor of mechanical and civil engineering at Caltech, is examining the details of the shaking experienced throughout the basin. On a recent trip to Kathmandu, she documented very little damage to low-story buildings throughout much of the city but identified a pattern of intense shaking experienced at the edges of the basin, on hilltops or in the foothills where sediment meets the mountains. This was largely due to the resonance of seismic waves within the basin.

Asimaki notes that Los Angeles, too, is built atop sedimentary deposits and is surrounded by hills and mountain ranges that would be prone to this type of increased shaking intensity during a major earthquake.

"In fact," she says, "the buildings in downtown Los Angeles are much taller than those in Kathmandu and therefore resonate with a much lower frequency. So if the same shaking had happened in L.A., a lot of the really tall buildings would have been challenged."

That points to one of the reasons it is important to understand how the land responded to the Gorkha earthquake, Avouac says. "Such studies of the site effects in Nepal provide an important opportunity to validate the codes and methods we use to predict the kind of shaking and damage that would be expected as a result of earthquakes elsewhere, such as in the Los Angeles Basin."

Additional authors on the Nature Geoscience paper, "Lower edge of locked Main Himalayan Thrust unzipped by the 2015 Gorkha earthquake," are Lingsen Meng (PhD '12) of UC Los Angeles, Shengji Wei of Nanyang Technological University in Singapore, and Teng Wang of Southern Methodist University. The lead author on the Science paper, "Slip pulse and resonance of Kathmandu basin during the 2015 Mw 7.8 Gorkha earthquake, Nepal imaged with geodesy," is John Galetzka, formerly an associate staff geodesist at Caltech and now a project manager at UNAVCO in Boulder, Colorado. Caltech research geodesist Joachim Genrich is also a coauthor, as are Susan Owen and Angelyn Moore of JPL. For a full list of authors, please see the paper.

The Nepal Geodetic Array was funded by Caltech, the Gordon and Betty Moore Foundation, and the National Science Foundation. Additional funding for the Science study came from NASA, the Department for International Development (UK), the Royal Society (UK), the United Nations Development Programme, and the Nepal Academy of Science and Technology.

Writer:
Kimm Fesenmaier

Caltech Astronomers Unveil a Distant Protogalaxy Connected to the Cosmic Web

A team of astronomers led by Caltech has discovered a giant swirling disk of gas 10 billion light-years away—a galaxy-in-the-making that is actively being fed cool primordial gas tracing back to the Big Bang. Using the Caltech-designed and -built Cosmic Web Imager (CWI) at Palomar Observatory, the researchers were able to image the protogalaxy and found that it is connected to a filament of the intergalactic medium, the cosmic web made of diffuse gas that crisscrosses between galaxies and extends throughout the universe.

The finding provides the strongest observational support yet for what is known as the cold-flow model of galaxy formation. That model holds that in the early universe, relatively cool gas funneled down from the cosmic web directly into galaxies, fueling rapid star formation.

A paper describing the finding and how CWI made it possible currently appears online and will be published in the August 13 print issue of the journal Nature.

"This is the first smoking-gun evidence for how galaxies form," says Christopher Martin, professor of physics at Caltech, principal investigator on CWI, and lead author of the new paper. "Even as simulations and theoretical work have increasingly stressed the importance of cold flows, observational evidence of their role in galaxy formation has been lacking."


Caltech Astronomers Discuss Findings on Galaxy Formation

The protogalactic disk the team has identified is about 400,000 light-years across—about four times larger in diameter than our Milky Way. It is situated in a system dominated by two quasars, the closest of which, UM287, is positioned so that its emission is beamed like a flashlight, helping to illuminate the cosmic web filament feeding gas into the spiraling protogalaxy.

Last year, Sebastiano Cantalupo, then of UC Santa Cruz and now of ETH Zurich, and his colleagues published a paper, also in Nature, announcing the discovery of what they thought was a large filament next to UM287. The feature they observed was brighter than it should have been if indeed it was only a filament. It seemed that there must be something else there.

In September 2014, Martin and his colleagues, including Cantalupo, decided to follow up, observing the system with CWI. As an integral field spectrograph, CWI allowed the team to collect images around UM287 at hundreds of different wavelengths simultaneously, revealing details of the system's composition, mass distribution, and velocity.

Martin and his colleagues focused on a range of wavelengths around an emission line in the ultraviolet known as the Lyman-alpha line. That line, a fingerprint of atomic hydrogen gas, is commonly used by astronomers as a tracer of primordial matter.
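
To see how a rest-frame ultraviolet line becomes observable from a ground-based telescope, consider the redshift arithmetic. The Lyman-alpha rest wavelength is 121.6 nanometers; the redshift value below (z of roughly 2.3) is my assumption for a system about 10 billion light-years away, since the article does not quote one:

    # Redshifted Lyman-alpha wavelength (illustrative; z ~ 2.3 is assumed here,
    # not quoted in the article).
    rest_wavelength_nm = 121.6   # Lyman-alpha rest wavelength
    z = 2.3                      # assumed redshift for this system
    observed_nm = rest_wavelength_nm * (1 + z)
    print(f"Observed wavelength: ~{observed_nm:.0f} nm")  # ~401 nm, violet light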

The researchers collected a series of spectral images that combined to form a multiwavelength map of a patch of sky around the two quasars. This data delineated areas where gas is emitting in the Lyman-alpha line, and indicated the velocities with which this gas is moving with respect to the center of the system.

"The images plainly show that there is a rotating disk—you can see that one side is moving closer to us and the other is moving away. And you can also see that there's a filament that extends beyond the disk," Martin says. Their measurements indicate that the disk is rotating at a rate of about 400 kilometers per second, somewhat faster than the Milky Way's own rate of rotation.

"The filament has a more or less constant velocity. It is basically funneling gas into the disk at a fixed rate," says Matt Matuszewski (PhD '12), an instrument scientist in Martin's group and coauthor on the paper. "Once the gas merges with the disk inside the dark-matter halo, it is pulled around by the rotating gas and dark matter in the halo." Dark matter is a form of matter that we cannot see that is believed to make up about 27 percent of the universe. Galaxies are thought to form within extended halos of dark matter.

The new observations and measurements provide the first direct confirmation of the so-called cold-flow model of galaxy formation.

Hotly debated since 2003, that model stands in contrast to the standard, older view of galaxy formation. The standard model said that when dark-matter halos collapse, they pull a great deal of normal matter in the form of gas along with them, heating it to extremely high temperatures. The gas then cools very slowly, providing a steady but slow supply of cold gas that can form stars in growing galaxies.

That model seemed fine until 1996, when Chuck Steidel, Caltech's Lee A. DuBridge Professor of Astronomy, discovered a distant population of galaxies producing stars at a very high rate only two billion years after the Big Bang. The standard model cannot provide the prodigious fuel supply for these rapidly forming galaxies.

The cold-flow model provided a potential solution. Theorists suggested that relatively cool gas, delivered by filaments of the cosmic web, streams directly into protogalaxies. There, it can quickly condense to form stars. Simulations show that as the gas falls in, it contains tremendous amounts of angular momentum, or spin, and forms extended rotating disks.

"That's a direct prediction of the cold-flow model, and this is exactly what we see—an extended disk with lots of angular momentum that we can measure," says Martin.

Phil Hopkins, assistant professor of theoretical astrophysics at Caltech, who was not involved in the study, finds the new discovery "very compelling."

"As a proof that a protogalaxy connected to the cosmic web exists and that we can detect it, this is really exciting," he says. "Of course, now you want to know a million things about what the gas falling into galaxies is actually doing, so I'm sure there is going to be more follow up."

Martin notes that the team has already identified two additional disks that appear to be receiving gas directly from filaments of the cosmic web in the same way.

Additional Caltech authors on the paper, "A giant protogalactic disk linked to the cosmic web," are principal research scientist Patrick Morrissey, research scientist James D. Neill, and instrument scientist Anna Moore from the Caltech Optical Observatories. J. Xavier Prochaska of UC Santa Cruz and former Caltech graduate student Daphne Chang, who is deceased, are also coauthors. The Cosmic Web Imager was funded by grants from the National Science Foundation and Caltech.

Writer:
Kimm Fesenmaier

"Failed Stars" Host Powerful Auroral Displays

Caltech astronomers say brown dwarfs behave more like planets than stars

Brown dwarfs are relatively cool, dim objects that are difficult to detect and hard to classify. They are too massive to be planets, yet possess some planetlike characteristics; they are too small to sustain hydrogen fusion reactions at their cores, a defining characteristic of stars, yet they have starlike attributes.

By observing a brown dwarf 20 light-years away using both radio and optical telescopes, a team led by Gregg Hallinan, assistant professor of astronomy at Caltech, has found another feature that makes these so-called failed stars more like supersized planets—they host powerful auroras near their magnetic poles.

The findings appear in the July 30 issue of the journal Nature.

"We're finding that brown dwarfs are not like small stars in terms of their magnetic activity; they're like giant planets with hugely powerful auroras," says Hallinan. "If you were able to stand on the surface of the brown dwarf we observed—something you could never do because of its extremely hot temperatures and crushing surface gravity—you would sometimes be treated to a fantastic light show courtesy of auroras hundreds of thousands of times more powerful than any detected in our solar system."

In the early 2000s, astronomers began finding that brown dwarfs emit radio waves. At first, everyone assumed that the brown dwarfs were creating the radio waves in basically the same way that stars do—through the action of an extremely hot atmosphere, or corona, heated by magnetic activity near the object's surface. But brown dwarfs do not generate large flares and charged-particle emissions in the way that our sun and other stars do, so the radio emissions were surprising.

While in graduate school, in 2006, Hallinan discovered that brown dwarfs can actually pulse at radio frequencies. "We see a similar pulsing phenomenon from planets in our solar system," says Hallinan, "and that radio emission is actually due to auroras." Since then he has wondered if the radio emissions seen on brown dwarfs might be caused by auroras.

Auroral displays result when charged particles, carried by the stellar wind for example, manage to enter a planet's magnetosphere, the region where such charged particles are influenced by the planet's magnetic field. Once within the magnetosphere, those particles get accelerated along the planet's magnetic field lines to the planet's poles, where they collide with gas atoms in the atmosphere and produce the bright emissions associated with auroras.

Following his hunch, Hallinan and his colleagues conducted an extensive observation campaign of a brown dwarf called LSRJ 1835+3259, using the National Radio Astronomy Observatory's Very Large Array (VLA), the most powerful radio telescope in the world, as well as optical instruments that included Palomar's Hale Telescope and the W. M. Keck Observatory's telescopes.


This movie shows the brown dwarf, LSRJ 1835+3259, as seen with the National Radio Astronomy Observatory's Very Large Array, pulsing as a result of the process that creates powerful auroras.
Credit: Stephen Bourke/Caltech

Using the VLA, they detected a bright pulse of radio waves that appeared once per rotation of the brown dwarf. The object rotates every 2.84 hours, so the researchers were able to watch nearly three full rotations over the course of a single night.
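
With the 2.84-hour period known, a standard way to see such pulsing is to phase-fold the time series so that pulses from successive rotations stack at the same rotational phase. A minimal sketch (hypothetical input arrays; real VLA data would be calibrated first):

    import numpy as np

    PERIOD_HR = 2.84  # rotation period of LSRJ 1835+3259 (from the article)

    def phase_fold(times_hr, fluxes, period_hr=PERIOD_HR, n_bins=50):
        """Fold a light curve on the rotation period and bin it by phase."""
        times_hr = np.asarray(times_hr, dtype=float)
        fluxes = np.asarray(fluxes, dtype=float)
        phase = (times_hr % period_hr) / period_hr        # 0..1
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        idx = np.digitize(phase, bins) - 1
        binned = [fluxes[idx == b].mean() if np.any(idx == b) else np.nan
                  for b in range(n_bins)]
        return bins[:-1], np.array(binned)
    # A pulse that recurs once per rotation shows up as a single peak in phase.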

Next, the astronomers used the Hale Telescope to observe that the brown dwarf varied optically with the same period as the radio pulses. Focusing on one of the spectral lines associated with excited hydrogen—the H-alpha emission line—they found that the object's brightness varied periodically.

Finally, Hallinan and his colleagues used the Keck telescopes to measure precisely the brightness of the brown dwarf over time—no simple feat given that these objects are many thousands of times fainter than our own sun. Hallinan and his team were able to establish that this hydrogen emission is a signature of auroras near the surface of the brown dwarf.

"As the electrons spiral down toward the atmosphere, they produce radio emissions, and then when they hit the atmosphere, they excite hydrogen in a process that occurs at Earth and other planets, albeit tens of thousands of times more intense," explains Hallinan. "We now know that this kind of auroral behavior is extending all the way from planets up to brown dwarfs."

In the case of brown dwarfs, charged particles cannot be driven into their magnetosphere by a stellar wind, as there is no stellar wind to do so. Hallinan says that some other source, such as an orbiting planet moving through the brown dwarf's magnetosphere, may be generating a current and producing the auroras. "But until we map the aurora accurately, we won't be able to say where it's coming from," he says.

He notes that brown dwarfs offer a convenient stepping stone to studying exoplanets, planets orbiting stars other than our own sun. "For the coolest brown dwarfs we've discovered, their atmosphere is pretty similar to what we would expect for many exoplanets, and you can actually look at a brown dwarf and study its atmosphere without having a star nearby that's a factor of a million times brighter obscuring your observations," says Hallinan.

Just as he has used measurements of radio waves to determine the strength of magnetic fields around brown dwarfs, he hopes to use the low-frequency radio observations of the newly built Owens Valley Long Wavelength Array to measure the magnetic fields of exoplanets. "That could be particularly interesting because whether or not a planet has a magnetic field may be an important factor in habitability," he says. "I'm trying to build a picture of magnetic field strength and topology and the role that magnetic fields play as we go from stars to brown dwarfs and eventually right down into the planetary regime."

The work, "Magnetospherically driven optical and radio aurorae at the end of the main sequence," was supported by funding from the National Science Foundation. Additional authors on the paper include Caltech senior postdoctoral scholar Stephen Bourke, Caltech graduate students Sebastian Pineda and Melodie Kao, Leon Harding of JPL, Stuart Littlefair of the University of Sheffield, Garret Cotter of the University of Oxford, Ray Butler of National University of Ireland, Galway, Aaron Golden of Yeshiva University, Gibor Basri of UC Berkeley, Gerry Doyle of Armagh Observatory, Svetlana Berdyugina of the Kiepenheuer Institute for Solar Physics, Alexey Kuznetsov of the Institute of Solar-Terrestrial Physics in Irkutsk, Russia, Michael Rupen of the National Radio Astronomy Observatory, and Antoaneta Antonova of Sofia University.

 

 

Writer:
Kimm Fesenmaier

Mosquitoes Use Smell to See Their Hosts

On summer evenings, we try our best to avoid mosquito bites by dousing our skin with bug repellents and lighting citronella candles. These efforts may keep the mosquitoes at bay for a while, but no solution is perfect because the pests have evolved to use a triple threat of visual, olfactory, and thermal cues to home in on their human targets, a new Caltech study suggests.

The study, published by researchers in the laboratory of Michael Dickinson, the Esther M. and Abe M. Zarem Professor of Bioengineering, appears in the July 17 online version of the journal Current Biology.

When an adult female mosquito needs a blood meal to feed her young, she searches for a host—often a human. Many insects, mosquitoes included, are attracted by the odor of the carbon dioxide (CO2) gas that humans and other animals naturally exhale. However, mosquitoes can also pick up other cues that signal a human is nearby. They use their vision to spot a host and thermal sensory information to detect body heat.

But how do the mosquitoes combine this information to map out the path to their next meal?

To find out how and when the mosquitoes use each type of sensory information, the researchers released hungry, mated female mosquitoes into a wind tunnel in which different sensory cues could be independently controlled. In one set of experiments, a high-concentration CO2 plume was injected into the tunnel, mimicking the signal created by the breath of a human. In control experiments, the researchers introduced a plume consisting of background air with a low concentration of CO2. For each experiment, researchers released 20 mosquitoes into the wind tunnel and used video cameras and 3-D tracking software to follow their paths.

When a concentrated CO2 plume was present, the mosquitoes followed it within the tunnel as expected, whereas they showed no interest in a control plume consisting of background air.

"In a previous experiment with fruit flies, we found that exposure to an attractive odor led the animals to be more attracted to visual features," says Floris van Breugel, a postdoctoral scholar in Dickinson's lab and first author of the study. "This was a new finding for flies, and we suspected that mosquitoes would exhibit a similar behavior. That is, we predicted that when the mosquitoes were exposed to CO2, which is an indicator of a nearby host, they would also spend a lot of time hovering near high-contrast objects, such as a black object on a neutral background."

To test this hypothesis, van Breugel and his colleagues did the same CO2 plume experiment, but this time they provided a dark object on the floor of the wind tunnel. They found that in the presence of the carbon dioxide plumes, the mosquitoes were attracted to the dark high-contrast object. In the wind tunnel with no CO2 plume, the insects ignored the dark object entirely.

While it was no surprise to see the mosquitoes tracking a CO2 plume, "the new part that we found is that the CO2 plume increases the likelihood that they'll fly toward an object. This is particularly interesting because there's no CO2 down near that object—it's about 10 centimeters away," van Breugel says. "That means that they smell the CO2, then they leave the plume, and several seconds later they continue flying toward this little object. So you could think of it as a type of memory or lasting effect."

Next, the researchers wanted to see how a mosquito factors thermal information into its flight path. It is difficult to test, van Breugel says. "Obviously, we know that if you have an object in the presence of a CO2 plume—warm or cold—they will fly toward it because they see it," he says. "So we had to find a way to separate the visual attraction from the thermal attraction."

To do this, the researchers constructed two glass objects that were coated with a clear chemical substance that made it possible to heat them to any desired temperature. They heated one object to 37 degrees Celsius (approximately human body temperature) and allowed the other to remain at room temperature, then placed both on the floor of the wind tunnel, with and without CO2 plumes, and observed the mosquitoes' behavior. They found that the mosquitoes showed a preference for the warm object. But contrary to the mosquitoes' visual attraction to objects, the preference for warmth was not dependent on the presence of CO2.

"These experiments show that the attraction to a visual feature and the attraction to a warm object are separate. They are independent, and they don't have to happen in order, but they do often happen in this particular order because of the spatial arrangement of the stimuli: a mosquito can see a visual feature from much further away, so that happens first. Only when the mosquito gets closer does it detect an object's thermal signature," van Breugel says.

Information gathered from all of these experiments enabled the researchers to create a model of how the mosquito finds its host over different distances. They hypothesize that from 10 to 50 meters away, a mosquito smells a host's CO2 plume. As it flies closer—to within 5 to 15 meters—it begins to see the host. Then, guided by visual cues that draw it even closer, the mosquito can sense the host's body heat. This occurs at a distance of less than a meter.
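
That staged model can be summarized as a simple distance-gated cascade. The sketch below encodes the hypothesis as described above, using the quoted cue ranges; the function and its thresholds are illustrative, not from the paper:

    def active_cues(distance_m, co2_detected=True):
        """Which host cues plausibly guide a mosquito at a given range,
        per the staged model described above (illustrative thresholds)."""
        cues = []
        if co2_detected and distance_m <= 50:
            cues.append("CO2 plume")        # ~10-50 m
        if co2_detected and distance_m <= 15:
            cues.append("visual contrast")  # ~5-15 m; gated by prior CO2 detection
        if distance_m <= 1:
            cues.append("body heat")        # <1 m; independent of CO2
        return cues

    for d in (40, 10, 0.5):
        print(f"{d:>5} m: {active_cues(d)}")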

"Understanding how brains combine information from different senses to make appropriate decisions is one of the central challenges in neuroscience," says Dickinson, the principal investigator of the study. "Our experiments suggest that female mosquitoes do this in a rather elegant way when searching for food. They only pay attention to visual features after they detect an odor that indicates the presence of a host nearby. This helps ensure that they don't waste their time investigating false targets like rocks and vegetation. Our next challenge is to uncover the circuits in the brain that allow an odor to so profoundly change the way they respond to a visual image."

The work provides researchers with exciting new information about insect behavior and may even help companies design better mosquito traps in the future. But it also paints a bleak picture for those hoping to avoid mosquito bites.

"Even if it were possible to hold one's breath indefinitely," the authors note toward the end of the paper, "another human breathing nearby, or several meters upwind, would create a CO2 plume that could lead mosquitoes close enough to you that they may lock on to your visual signature. The strongest defense is therefore to become invisible, or at least visually camouflaged. Even in this case, however, mosquitoes could still locate you by tracking the heat signature of your body . . . The independent and iterative nature of the sensory-motor reflexes renders mosquitoes' host seeking strategy annoyingly robust."

These results were published in a paper titled "Mosquitoes use vision to associate odor plumes with thermal targets." In addition to Dickinson and van Breugel, the other authors are Jeff Riffell and Adrienne Fairhall from the University of Washington. The work was funded by a grant from the National Institutes of Health.


Alone in the Darkness: Mariner 4 to Mars, 50 Years Later

July 14 marks 50 years of visual reconnaissance of the solar system by NASA's Jet Propulsion Laboratory (JPL), beginning with Mariner 4's flyby of Mars in 1965.

Among JPL's first planetary efforts, Mariners 3 and 4 (known collectively as "Mariner Mars") were planned and executed by a group of pioneering scientists at Caltech in partnership with JPL. NASA was only 4 years old when the first Mars flyby was approved in 1962, but the core science team had been working together at Caltech for many years. The team included Caltech faculty Robert Sharp (after whom Mount Sharp, the main target of the Mars rover Curiosity, is named) and Gerry Neugebauer, professors of geology and of physics, respectively; Robert Leighton and H. Victor Neher, professors of physics; and Bill Pickering, professor of electrical engineering, who was the director of JPL from 1954 to 1976. Rounding out the Caltech contingent was a young Bruce Murray, a new addition to the geology faculty, who would follow Pickering as JPL director in 1976.

"The Mariner missions marked the beginning of planetary geology, led by researchers at Caltech including Bruce Murray and Robert Sharp," said John Grotzinger, the Fletcher Jones Professor of Geology and chair of the Division of Geological and Planetary Sciences. "These early flyby missions showed the enormous potential of Mars to provide insight into the evolution of a close cousin to Earth and stimulated the creation of a program dedicated to iterative exploration involving orbiters, landers, and rovers."

By today's standards, Mariner Mars was a virtual leap into the unknown. NASA and JPL had little spaceflight experience to guide them. There had been just one successful planetary mission—Mariner 2's journey past Venus in 1962—to build upon. Sending spacecraft to other planets was still a new endeavor.  

The Mariner Mars spacecraft were originally designed without cameras. Neugebauer, Murray, and Leighton felt that a lot of science questions could be answered via images from this close encounter with Mars. As it turned out, sending back photos of the planet that had so long captured the imaginations of millions had the added benefit of making the Mars flyby more accessible to the public.

Mariner 3 launched on November 5, 1964. The Atlas rocket that boosted it clear of the atmosphere functioned perfectly (not always the case in the early years of spaceflight), but the shroud enclosing the payload failed to fully open, and the spacecraft, unable to collect sunlight on its solar panels, ceased to function after about nine hours of flight.

Mariner 4 launched three weeks later, on November 28, with a redesigned shroud. The probe deployed as planned and began its journey to Mars. But there was still drama in store for the mission. Within the first hour of the flight, the rocket's upper stage pushed the spacecraft out of Earth orbit, and the solar panels deployed. The guidance system then acquired a lock on the sun, but a second reference object was needed to fully orient the spacecraft. That task depended on a photocell finding the bright star Canopus, an acquisition first attempted about 15 hours into the flight. During these first attempts, however, the primitive onboard electronics erroneously locked onto other stars of similar brightness.

Controllers managed to solve this problem but over the next few weeks realized that a small cloud of dust and paint flecks, ejected when Mariner 4 deployed, was traveling along with the spacecraft and interfering with the tracking of Canopus. A tiny paint chip, if close enough to the star tracker, could mimic the star. After more corrective action, Canopus was reacquired and Mariner's journey continued largely without incident. This star-tracking technology, along with many other design features of the spacecraft, has been used in every interplanetary mission JPL has flown since.

At the time, what was known about Mars had been learned from Earth-based telescopes. The images were fuzzy and indistinct—at its closest, Mars is still about 35 million miles distant. Scientific measurements derived from visual observations of the planet were inexact. While ideas about the true nature of Mars evolved throughout the first half of the 20th century, in 1965 nobody could say with any confidence how dense the martian atmosphere was or determine its exact composition. Telescopic surveys had recorded a visual event called the "wave of darkening," which some scientists theorized could be plant life blooming and perishing as the harsh martian seasons changed. A few of them still thought of Mars as a place capable of supporting advanced life, although most thought it unlikely. However, there was no conclusive evidence for either scenario.

So, as Mariner 4 flew past Mars, much was at stake, both for the scientific community and a curious general public. Were there canals or channels on the surface, as some astronomers had reported? Would we find advanced life forms or vast collections of plant life? Would there be liquid water on the surface?

Just over seven months after launch, the encounter with Mars was imminent. On July 14, 1965, Mariner's science instruments were activated. These included a magnetometer to measure magnetic fields, a Geiger counter to measure radiation, a cosmic ray telescope, a cosmic dust detector, and the television camera.

About seven hours before the encounter, the TV camera began acquiring images. After the probe passed Mars, an onboard data recorder—which used a 330-foot endless loop of magnetic tape to store still pictures—initiated playback of the raw images to Earth, transmitting them twice for certainty. Each image took 10 hours to transmit.
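
The glacial pace is consistent with the downlink rates of the era. A rough check, using figures that are commonly cited for Mariner 4 but are not stated in this article (200-by-200-pixel images at 6 bits per pixel, transmitted at about 8.33 bits per second):

    # Rough check of Mariner 4 image transmission time (figures assumed,
    # not from the article: 200x200 pixels, 6 bits/pixel, ~8.33 bps downlink).
    bits_per_image = 200 * 200 * 6          # = 240,000 bits
    downlink_bps = 8.33
    hours = bits_per_image / downlink_bps / 3600
    print(f"~{hours:.0f} hours per image")  # ~8 h of raw data; ~10 h with overhead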

The 22 images sent by Mariner 4 appeared conclusive. Although they were low-resolution and black-and-white, they indicated that Mars was not a place likely to be friendly to life. It was a cold, dry desert, covered with so many craters as to strongly resemble Earth's moon. The atmospheric density was about one-thousandth that of Earth, and no liquid water was apparent on the surface.

When discussing the mission during an interview at Caltech in 1977, Leighton recalled viewing the first images at JPL. "If someone had asked 'What do you expect to see?' we would have said 'craters'…[yet] the fact that craters were there, and a predominant land form, was somehow surprising."

Leighton also recalled a letter he received from, of all people, a dairy farmer. It read, "I'm not very close to your world, but I really appreciate what you are doing. Keep it going." Leighton said of the sentiment, "A letter from a milkman…I thought that was kind of nice."

After its voyage past Mars, Mariner 4 maintained intermittent communication with JPL and returned data about the interplanetary environment for two more years. But by the end of 1967, the spacecraft had suffered tens of thousands of micrometeoroid impacts and was out of the nitrogen gas it used for maneuvering. The mission officially ended on December 21.

"Mariner 4 defined and pioneered the systems and technologies needed for a truly interplanetary spacecraft," says Rob Manning (BS '81), JPL's chief engineer for the Low-Density Supersonic Decelerator and formerly chief engineer for the Mars Science Laboratory. "All U.S. interplanetary missions that have followed were directly derived from the architecture and innovations that engineers behind Mariner invented. We stand on the shoulders of giants."


Distant Black Hole Wave Twists Like Giant Whip

Fast-moving magnetic waves emanating from a distant supermassive black hole undulate like a whip whose handle is being shaken by a giant hand, according to a new study involving Caltech scientists. The study used data from the National Radio Astronomy Observatory's Very Long Baseline Array (VLBA) to explore, in high resolution, the galaxy-black hole system known as BL Lacertae (BL Lac).

The team's findings, detailed in the April 10 issue of the Astrophysical Journal, mark the first time so-called Alfvén (pronounced Alf-vain) waves have been identified in a black hole system.

Alfvén waves are generated when magnetic field lines, such as those coming from the sun or the disk around a black hole, interact with charged particles, or ions, and become twisted; in the case of BL Lac, and sometimes for the sun, the field lines are coiled into a helix. In BL Lac, the ions take the form of particle jets that are flung from opposite sides of the black hole at near light speed.

"Imagine running a water hose through a slinky that has been stretched taut," says first author Marshall Cohen, professor emeritus of astronomy at Caltech. "A sideways disturbance at one end of the slinky will create a wave that travels to the other end, and if the slinky sways to and fro, the hose running through its center has no choice but to move with it."

A similar thing is happening in BL Lac, Cohen says. The Alfvén waves are analogous to the propagating transverse motions of the slinky, and as the waves propagate along the magnetic field lines, they can cause the field lines—and the particle jets encompassed by the field lines—to move as well.

It's common for black hole particle jets to bend—and some even swing back and forth. But those movements typically take place on timescales of thousands or millions of years. "What we see is happening on a timescale of weeks," Cohen says. "We're taking pictures once a month, and the position of the waves is different each month."

Interestingly, from the vantage of astronomers on Earth, the Alfvén waves emanating from BL Lac appear to be traveling about five times faster than the speed of light. "The waves only appear to be superluminal, or moving faster than light," Cohen says. "The high speed is an optical illusion resulting from the fact that the waves are traveling very close to, but below, the speed of light, and are passing just to the side of our line of sight."
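
The geometry behind the illusion is standard: a source moving at a speed beta (in units of the speed of light) at a small angle theta to the line of sight nearly keeps pace with its own light, compressing the apparent time between successive images. A quick numerical check (the particular beta and theta are illustrative, not fitted values from the paper):

    import math

    def apparent_speed(beta, theta_deg):
        """Apparent transverse speed (in units of c) of a source moving at
        beta*c at angle theta to the line of sight:
        beta_app = beta*sin(theta) / (1 - beta*cos(theta))."""
        th = math.radians(theta_deg)
        return beta * math.sin(th) / (1 - beta * math.cos(th))

    print(f"{apparent_speed(0.98, 10):.1f}c")  # ~4.9c: about five times light speed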

Co-author David Meier, a visiting associate in astronomy and a now-retired astrophysicist from JPL, adds, "By analyzing these waves, we are able to determine the internal properties of the jet, and this will help us ultimately understand how jets are produced by black holes."

Other authors on the paper, "Studies of the Jet in BL Lacertae II Superluminal Alfvén Waves," include Talvikki Hovatta, a former Caltech postdoctoral scholar; as well as scientists from the University of Cologne and the Max Planck Institute for Radio Astronomy in Germany; the Isaac Newton Institute of Chile; Aalto University in Finland; the Astro Space Center of Lebedev Physical Institute, the Pulkovo Observatory, and the Crimean Astrophysical Observatory in Russia. Purdue University, Denison University, and the Jet Propulsion Laboratory were also involved in the study.


JPL News: Searing Sun Seen in X-rays

X-rays light up the surface of our sun in a bouquet of colors in this new image containing data from NASA's Nuclear Spectroscopic Telescope Array, or NuSTAR. The high-energy X-rays seen by NuSTAR are shown in blue, while green represents lower-energy X-rays from the X-ray Telescope instrument on the Hinode spacecraft, named after the Japanese word for sunrise. The yellow and red colors show ultraviolet light from NASA's Solar Dynamics Observatory.

NuSTAR usually spends its time investigating the mysteries of black holes, supernovae, and other high-energy objects in space. But it can also look closer to home to study our sun.

"What's great about NuSTAR is that the telescope is so versatile that we can hunt black holes millions of light-years away and we can also learn something fundamental about the star in our own backyard," said Brian Grefenstette, a Caltech research scientist and an astronomer on the NuSTAR team.

NuSTAR is a Small Explorer mission led by Caltech and managed by NASA's Jet Propulsion Laboratory in Pasadena, California, for NASA's Science Mission Directorate in Washington. JPL is managed by Caltech for NASA.

Read the full story from JPL News


Sniffing Out Answers: A Conversation with Markus Meister

Blindfolded and asked to distinguish between a rose and, say, smoke from a burning candle, most people would find the task easy. Even differentiating between two rose varieties can be a snap because the human olfactory system—made up of the nerve cells in our noses and everything that allows the brain to process smell—is quite adept. But just how sensitive is it to different smells?

In 2014, a team of scientists from the Rockefeller University published a paper in the journal Science, arguing that humans can discriminate at least 1 trillion odors. Now Markus Meister, the Anne P. and Benjamin F. Biaggini Professor of Biological Sciences at Caltech, has published a paper in the open-access journal eLife, in which he disputes the 2014 claim, saying that the science is not yet in a place where such a number can be determined.

We recently spoke with Meister about his new paper and what it says about the claim that we can distinguish a trillion smells.

 

What was the goal of the 2014 paper, and why do you take issue with it?

The overt question the authors asked was: How many different smells can humans distinguish? That is a naturally interesting question, in part because in other fields of sensory biology, similar questions have already been answered. People quibble about the exact numbers, but in general scientists agree that humans can distinguish about 1 to 2 million colors and something on the order of 100,000 pure tones.

But as interesting as the question is, I argue that we, as a field, are not yet prepared to address it. First we need to know how many dimensions span the perceptual space of odors. And by that I mean: how many olfactory variables are needed to fully describe all of the odors that humans can experience?

In the case of human vision, we say that the perceptual space for colors has three dimensions, which means that every physical light can be described by three numbers—how it activates the red, green, and blue cone photoreceptors in the retina.

As long as we don't know the dimensionality of odor space, we don't know how to even start interpreting measurements. Once we know the dimensionality, we can start probing the space systematically and ask how many different odors fit into it in the same way that we've looked at how many different colors fit into the three-dimensional space of colors.

The fundamental conceptual mistake that the authors of the Science paper made was to assume that the space of odor perception has 128 dimensions or more and then interpret the data as though that was the case . . . even though there is absolutely no evidence to suggest that the odor space has such high dimensionality.

 

What makes it so hard to determine the dimensionality of odor?

Well, there are a couple of things. First, there is no natural coordinate system in which olfactory stimuli exist. This stands in contrast with visual and auditory stimuli. For example, pure (monochromatic) lights or tones can be represented nicely as sinusoidal waves with just two variables, the frequency and the amplitude of the wave. We can easily control those two variables, and they correspond nicely to things we perceive. For pure tones, the amplitude of the sine wave corresponds to loudness and the frequency corresponds to perceived pitch. For a pure light, the frequency determines your perception of the color; if you change the intensity of the light, that alters your perception of the brightness. These simple physical parameters of the stimulus allow us to explore those spaces more easily.

In the case of odors, there are probably several hundred thousand substances that have a smell that can be perceived. But they all have different structures. There is no intuitive way to organize the stimuli. There has been some recent progress in this area, but in general we have not been successful in isolating a few physical variables that can account for a lot of what we smell.

Another aspect of olfaction that has complicated people's thinking is that humans have about 400 types of primary smell receptors. These are the actual neurons in the lining of the nasal cavity that detect odorants. So at the very input to the nervous system, every smell is characterized by the action it has on those 400 different sensors. Based on that, you might assume that smell lives in a much larger space than color vision—one with as many as 400 dimensions.

But can we perceive all of those 400 dimensions? Just because two odors cause a different pattern of activation of nerve cells in the nose doesn't mean you can actually tell them apart. Think about our sense of touch. Every one of our hairs has at its root several mechanoreceptors. If you run a comb through the hair on your head, you activate a hundred thousand mechanoreceptors in a particular pattern. If you repeat the action, you activate a different pattern of receptors, but you will be unable to perceive a difference. Similarly, I argue, there's no reason to think that we can perceive a difference between all the different patterns of activation of nerve cells in the nasal cavity. So the number of dimensions could, in fact, be much lower than 400. In fact, some recent studies have suggested that odor lives in a space with 10 or fewer perceptual dimensions.

 

In your work you describe a couple of basic experimental design failures of the 2014 paper. Can you walk us through those?

Basically, two scientific errors were made in the original study. They have to do with the concept of a positive-control experiment and the concept of testing alternative hypotheses.

In science, when we come up with a new way of analyzing things, we need to perform a test—called a positive control—that gives us confidence that the new analysis can find the right answer in a case where we already know what the answer is. So, for example, if you have devised a new way of weighing things, you will want to test it by weighing something whose weight you already know very well based on some accepted procedure. If the new procedure gives a different answer, we say it failed the positive control.

The 2014 paper did not include a positive-control test. In my paper, I provide two: applying the system that the authors propose to a very simple model microbe and to the human color-vision system. In both cases, the answers come out wrong by huge factors.

The other failure of the 2014 paper is a failure to consider alternate hypotheses. When scientists interpret the outcome of an experiment, we need to seriously analyze alternate hypotheses to the ones we believe are most likely and show why they are not reasonable explanations for what we are seeing.

In my paper, I show that an alternate model that is clearly absurd—that humans can only discriminate 10 odors—explains the data just as well as the very complicated explanation that the authors propose, which involves 400 dimensions and 1 trillion odor percepts. What this really means is that the experiment was poorly designed, in the sense that it didn't constrain the answer to the question.

By the way, there is an accompanying paper by Gerkin and Castro in the same issue of eLife that critiques the experimental design from an entirely different angle, regarding the use of statistics. I found this article very instructive, and have used it already in teaching.

 

How do you suggest scientists go about determining the dimensionality of the odor space?

One concrete idea is to try to figure out what the number of dimensions is in the vicinity of a particular point in that space. If you did that with color, you would arrive at the number three from the vast majority of points. So I suggest we start at some arbitrary point in odor space—say a 50 percent mixture of 30 different odors—and systematically go in each of the directions from there and ask: can humans actually distinguish the odor when you change the concentration a little bit up or down from there? If you do that in 30 different dimensions you might find that maybe only five of those dimensions contribute to changing the perceived odor and that along the other dimensions there is very little change. So let's figure out the dimensionality that comes out of a study like that. Is it two? Probably not. I would guess for something like 10 or 20.
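
In effect, the proposal is to estimate the local dimensionality of odor space: perturb each component of a mixture in turn and count how many directions produce a perceptible change. A schematic of that logic (the discrimination test is a hypothetical placeholder; in a real experiment it would be a panel of human subjects):

    import numpy as np

    def local_odor_dimensions(base_mixture, can_discriminate, step=0.05):
        """Count perceptually effective directions around a point in odor space.

        base_mixture:     concentrations of the component odorants (e.g., 30 values)
        can_discriminate: callable(mix_a, mix_b) -> bool; stands in for a human
                          discrimination test (hypothetical placeholder)
        """
        effective = 0
        for i in range(len(base_mixture)):
            probe = np.array(base_mixture, dtype=float)
            probe[i] += step  # nudge one component's concentration up
            if can_discriminate(base_mixture, probe):
                effective += 1
        return effective  # Meister's guess above: ~10-20 dimensions, not 400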

Once we know that, we can start to ask how many odors fit into that space.

 

Why does all of this matter? Why do we need to know how many odors we can smell?

The question of how many smells we can discriminate has fascinated people for at least a century, and the whole industry of flavors and fragrances has been very interested in finding out whether there is a systematic set of rules by which one could mix together some small number of primary odors in order to produce any target smell.

In the field of color vision, that problem has been solved. As a result, we all use color monitors that only have three types of lights—red, green, and blue. And yet by mixing them together, they can make just about every color impression that you might care about. So there's a real technological incentive to figuring out how you can mix together primary stimuli to make any kind of perceived smell.

 

What is the big lesson you would like people to take away from this scientific exchange?

One lesson I try to convey to my students is the value of a simple simulation—to ask, "Could this idea work even in principle? Let's try it in the simplest case we can imagine." That sort of triage can often keep you from walking down an unproductive path.

On a more general note, people should remain skeptical of spectacular claims. This is particularly important when we referee for the high-glamour journals, where the editors have a predilection for unexpected results. As a community we should let things simmer a bit before allowing a spectacular claim to become the conventional wisdom. Maybe we all need to stop and smell the roses.

Writer:
Kimm Fesenmaier

Better Memory with Faster Lasers

DVDs and Blu-ray disks contain so-called phase-change materials that morph from one atomic state to another after being struck with pulses of laser light, with data "recorded" in those two atomic states. Caltech researchers used ultrafast laser pulses, which speed up the data-recording process, together with a novel technique, ultrafast electron crystallography (UEC), to visualize directly in four dimensions the changing atomic configurations of materials undergoing these phase changes. In doing so, they discovered a previously unknown intermediate atomic state—one that may represent an unavoidable limit to data recording speeds.

By shedding light on the fundamental physical processes involved in data storage, the work may lead to better, faster computer memory systems with larger storage capacity. The research, done in the laboratory of Ahmed Zewail, Linus Pauling Professor of Chemistry and professor of physics, will be published in the July 28 print issue of the journal ACS Nano.

When the laser light interacts with a phase-change material, its atomic structure changes from an ordered crystalline arrangement to a more disordered, or amorphous, configuration. These two states represent 0s and 1s of digital data.

"Today, nanosecond lasers—lasers that pulse light at one-billionth of a second—are used to record information on DVDs and Blu-ray disks, by driving the material from one state to another," explains Giovanni Vanacore, a postdoctoral scholar and an author on the study. The speed with which data can be recorded is determined both by the speed of the laser—that is, by the duration of each "pulse" of light—and by how fast the material itself can shift from one state to the other.

Thus, with a nanosecond laser, "the fastest you can record information is one information unit, one 0 or 1, every nanosecond," says Jianbo Hu, a postdoctoral scholar and the first author of the paper. "To go even faster, people have started to use femtosecond lasers, which can potentially record one unit every one millionth of a billionth of a second. We wanted to know what actually happens to the material at this speed and if there is a limit to how fast you can go from one structural phase to another."
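
A back-of-the-envelope sketch puts rough numbers on those two regimes, under the idealized assumptions that each pulse records exactly one bit and that pulses arrive back to back:

```python
nanosecond = 1e-9     # duration of one nanosecond pulse, in seconds
femtosecond = 1e-15   # duration of one femtosecond pulse, in seconds

# Idealized ceiling: one bit per pulse, pulses back to back.
rate_ns = 1 / nanosecond    # ~1e9 bits per second with a nanosecond laser
rate_fs = 1 / femtosecond   # ~1e15 bits per second with a femtosecond laser
print(f"potential speedup: {rate_fs / rate_ns:.0e}x")  # ~1e6x
```

As the rest of the article explains, the material itself, not the laser, turns out to set the real limit.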

To study this, the researchers turned to ultrafast electron crystallography. The technique, a newer development—distinct from Zewail's Nobel Prize–winning work in femtochemistry, the visual study of chemical processes occurring at femtosecond scales—allowed the researchers to observe directly the changing atomic configuration of a prototypical phase-change material, germanium telluride (GeTe), as it is hit by a femtosecond laser pulse.

In UEC, a sample of crystalline GeTe is bombarded with a femtosecond laser pulse, followed by a pulse of electrons. The laser pulse drives the atomic structure out of the crystalline arrangement, through other structures, and ultimately into the amorphous state. When the electron pulse then hits the sample, its electrons scatter in a pattern that provides a picture of the sample's atomic configuration as a function of time.
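
Schematically, a UEC delay scan looks like the sketch below. Every function and number is a stand-in (a real experiment drives laser, delay-stage, and detector hardware), but it shows where the four dimensions come from:

```python
import numpy as np

rng = np.random.default_rng(0)

def diffraction_pattern(delay_ps):
    """Stand-in for one pump-probe shot: fire the laser pulse, wait
    delay_ps picoseconds, fire the electron pulse, and record how the
    electrons scatter off the sample."""
    return rng.poisson(lam=100, size=(64, 64))  # fake detector image

# Each pattern encodes the atomic arrangement (three spatial dimensions);
# stepping the laser-to-electron delay supplies the fourth dimension, time.
delays_ps = np.linspace(0, 50, 11)
movie = [diffraction_pattern(t) for t in delays_ps]
```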

With this technique, the researchers could see directly, for the first time, the structural shift in GeTe caused by the laser pulses. However, they also saw something more: a previously unknown intermediate phase that appears during the transition from the crystalline to the amorphous configuration. Because moving through the intermediate phase takes additional time, the researchers believe that it represents a physical limit to how quickly the overall transition can occur—and to how fast data can be recorded, regardless of the laser speeds used.
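
The article does not quote the measured lifetime of the intermediate phase, but the shape of the argument can be sketched with an invented dwell time: once passing through the intermediate state takes the material a fixed amount of time, that dwell, not the pulse duration, sets the ceiling.

```python
pulse_s = 1e-15   # femtosecond write pulse
dwell_s = 1e-11   # hypothetical time spent in the intermediate phase

# One bit cannot be written faster than the full structural transition.
max_rate = 1 / (pulse_s + dwell_s)
print(f"{max_rate:.1e} bits/s")  # dominated by the dwell, not the laser
```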

"Even if there is a laser faster than a femtosecond laser, there will be a limit as to how fast this transition can occur and information can be recorded, just because of the physics of these phase-change materials," Vanacore says. "It's something that cannot be solved technologically—it's fundamental."

Despite revealing such limits, the research could one day aid the development of better data storage for computers, the researchers say. Right now, computers generally store information in several ways, among them the well-known random-access memory (RAM) and read-only memory (ROM). RAM, which is used to run the programs on your computer, can record and rewrite information very quickly via an electrical current. However, the information is lost whenever the computer is powered down. ROM storage, including CDs and DVDs, uses phase-change materials and lasers to store information. Although ROM records and reads data more slowly, the information can be stored for decades.

Finding ways to speed up the recording process of phase-change materials and understanding the limits to this speed could lead to a new type of memory that harnesses the best of both worlds.

The researchers say that their next step will be to use UEC to study the transition of the amorphous atomic structure of GeTe back into the crystalline phase—comparable to the phenomenon that occurs when you erase and then rewrite a DVD.

Although these applications could mean exciting changes for future computer technologies, this work is also very important from a fundamental point of view, Zewail says.

"Understanding the fundamental behavior of materials transformation is what we are after, and these new techniques developed at Caltech have made it possible to visualize such behavior in both space and time," Zewail says.

The work is published in a paper titled "Transient Structures and Possible Limits of Data Recording in Phase-Change Materials." In addition to Hu, Vanacore, and Zewail, Xiangshui Miao and Zhe Yang are also coauthors on the paper. The work was supported by the National Science Foundation and the Air Force Office of Scientific Research and was carried out in Caltech's Physical Biology Center for Ultrafast Science and Technology, which is funded by the Gordon and Betty Moore Foundation.


New Approach Holds Promise for Earlier, Easier Detection of Colorectal Cancer

Caltech chemists develop a technique that could one day lead to early detection of tumors

Chemists at Caltech have developed a sensitive new technique capable of detecting colorectal cancer in tissue samples—a method that could one day be used in clinical settings for the early diagnosis of colorectal cancer.

Colorectal cancer is the third most prevalent cancer worldwide and is estimated to cause about 700,000 deaths every year. Metastasis due to late detection is one of the major causes of mortality from this disease; therefore, a sensitive and early indicator could be a critical tool for physicians and patients.

A paper describing the new detection technique currently appears online in Chemistry & Biology and will be published in the July 23 issue of the journal's print edition. Caltech graduate student Ariel Furst (PhD '15) and her adviser, Jacqueline K. Barton, the Arthur and Marian Hanisch Memorial Professor of Chemistry, are the paper's authors.

"Currently, the average biopsy size required for a colorectal biopsy is about 300 milligrams," says Furst. "With our experimental setup, we require only about 500 micrograms of tissue, which could be taken with a syringe biopsy versus a punch biopsy. So it would be much less invasive." One microgram is one thousandth of a milligram.

The researchers zeroed in on the activity of a protein called DNMT1 as a possible indicator of a cancerous transformation. DNMT1 is a methyltransferase, an enzyme responsible for DNA methylation—the addition of a methyl group to one of DNA's bases. This essential and normal process is an epigenetic mechanism that primarily turns genes off; however, when the process goes awry, it has also recently been identified as an early indicator of cancer, especially of developing tumors.

When all is working well, DNMT1 maintains the normal methylation pattern set in the embryonic stages, copying that pattern from the parent DNA strand to the daughter strand. But sometimes DNMT1 goes haywire, and methylation goes into overdrive, causing what is called hypermethylation. Hypermethylation can lead to the repression of genes that typically do beneficial things, like suppress the growth of tumors or express proteins that repair damaged DNA, and that, in turn, can lead to cancer.

Building on previous work in Barton's group, Furst and Barton devised an electrochemical platform to measure the activity of DNMT1 in crude tissue samples—those that contain all of the material from a tissue, not just DNA or RNA. Fundamentally, the design of this platform is based on the concept of DNA-mediated charge transport—the idea that DNA can behave like a wire, allowing electrons to flow through it, and that the conductivity of that DNA wire is extremely sensitive to mistakes in the DNA itself. Barton received the 2010 National Medal of Science for her work establishing this field of research and has demonstrated that it can be used not only to locate DNA mutations but also to detect the presence of proteins such as DNMT1 that bind to DNA.

In the present study, Furst and Barton started with two arrays of gold electrodes—one atop the other—embedded in Teflon blocks and separated by a thin spacer that formed a well for solution. They attached strands of DNA to the lower electrodes, then added the broken-down contents of a tissue sample to the solution well. After allowing time for any DNMT1 in the tissue sample to methylate the DNA, they added a restriction enzyme that severed the DNA if no methylation had occurred—i.e., if DNMT1 was inactive. When they applied a current to the lower electrodes, the samples with DNMT1 activity passed the current clear through to the upper electrodes, where the activity could be measured. 

"No methylation means cutting, which means the signal turns off," explains Furst. "If the DNMT1 is active, the signal remains on. So we call this a signal-on assay for methylation activity. But beyond on or off, it also allows us to measure the amount of activity." This assay for DNMT1 activity was first developed in Barton's group by Natalie Muren (PhD '13).

Using the new setup, the researchers measured DNMT1 activity in 10 pairs of human tissue samples, each composed of a colorectal tumor sample and adjacent healthy tissue from the same patient. When they compared the samples within each pair, they consistently found significantly higher DNMT1 activity, and thus hypermethylation, in the tumorous tissue. Notably, they found little correlation between the amount of DNMT1 in the samples and the presence of cancer; the correlation was with activity.
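
To illustrate what such a paired, within-patient comparison looks like in practice (the activity values below are invented, not the study's data), a paired test such as the Wilcoxon signed-rank test asks whether the tumor member of each pair is consistently higher:

```python
from scipy import stats  # assumes SciPy is available

# Invented DNMT1 activities for 10 tumor/healthy pairs (arbitrary units).
tumor   = [2.1, 1.8, 2.6, 3.0, 1.9, 2.4, 2.8, 2.2, 1.7, 2.5]
healthy = [0.8, 0.6, 1.1, 0.9, 0.7, 1.0, 1.2, 0.8, 0.5, 0.9]

stat, p = stats.wilcoxon(tumor, healthy)  # paired, within-patient test
print(p)  # a small p-value means activity is consistently higher in tumors
```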

"The assay provides a reliable and sensitive measure of hypermethylation," says Barton, also the chair of the Division of Chemistry and Chemical Engineering.  "It looks like hypermethylation is good indicator of tumorigenesis, so this technique could provide a useful route to early detection of cancer when hypermethylation is involved."

Looking to the future, Barton's group hopes to use the same general approach in devising assays for other DNA-binding proteins and possibly using the sensitivity of their electrochemical devices to measure protein activities in single cells. Such a platform might even open up the possibility of inexpensive, portable tests that could be used in the home to catch colorectal cancer in its earliest, most treatable stages.

The work described in the paper, "DNA Electrochemistry Shows DNMT1 Methyltransferase Hyperactivity in Colorectal Tumors," was supported by the National Institutes of Health.

Writer: Kimm Fesenmaier
