Research Update: Wordy Worms and Their Eavesdropping Predators

For over 25 years, Paul Sternberg has been studying worms—how they develop, why they sleep, and, more recently, how they communicate. Now, he has flipped the script a bit by taking a closer look at how predatory fungi may be tapping into worm conversations to gain clues about their whereabouts.

Nematodes, Sternberg's primary worm interest, are found in nearly every corner of the world and are one of the most abundant animals on the planet. Unsurprisingly, they have natural enemies, including numerous types of carnivorous fungi that build traps to catch their prey. Curious to see how nematophagous fungi might sense that a meal is present without the sensory organs—like eyes or noses—that most predators use, Sternberg and Yen-Ping Hsueh, a postdoctoral scholar in biology at Caltech, started with a familiar tool: ascarosides. These are the chemical cues that nematodes use to "talk" to one another.

"If we think about it from an evolutionary perspective, whatever the worms are making that can be sensed by the nematophagous fungi must be very important to the worm—otherwise, it's not worth the risk," explains Hsueh. "I thought that ascarosides perfectly fit this hypothesis."

In order to test their idea, the team first evaluated whether different ascarosides caused one of the most common species of nematode-trapping fungi to start making a trap. Indeed, it responded by building sticky, web-like nets called adhesive networks, but only when it was nutrient-deprived. It takes a lot of energy for the fungus to build a trap, so it will only do so if it is hungry and senses that prey is nearby. Moreover, this ascaroside-induced response is conserved in three other closely related species. But, the researchers say, each of the four fungal species responded to different sets of ascarosides.

"This fits with the idea that different types of predators might encounter different types of prey in nature, and also raises the possibility that fungi could 'read' the different dialects of each worm type," says Sternberg. "What's cool is that we've shown the ability for a predator to eavesdrop on essential prey communication. The worms have to talk to each other using these chemicals, and the predator is listening in on it—that's how it knows the worms are there."

Sternberg and Hsueh also tested a second type of fungus that uses a constricting ring to trap the worms, but it did not respond to the ascarosides. However, the team says that because they only tested a handful of the chemical cues, it's possible that they simply did not test the right ones for that type of fungus.

"Next, the focus is to really study the molecular mechanism in the fungi—how does a fungus sense the ascarosides, and what are the downstream pathways that induce the trap formation," says Hsueh. "We are also interested in the evolutionary question of why we see this ascaroside sensing in some types of fungi but not others."

In the long run, their findings may help improve methods for pest management. Some of these fungi are used for biocontrol to try and keep nematodes away from certain plant roots. Knowing more about what stimulates the organisms to make traps might allow for the development of better biocontrol preparations, says Sternberg.

The full results of Sternberg and Hsueh's study can be found in the paper, "Nematode-trapping fungi eavesdrop on nematode pheromones," published in the journal Current Biology.

Writer: 
Katie Neith
Exclude from News Hub: 
No
News Type: 
Research News

Caltech-Led Astronomers Discover Galaxies Near Cosmic Dawn

Researchers conduct first census of the most primitive and distant galaxies seen

PASADENA, Calif.—A team of astronomers led by the California Institute of Technology (Caltech) has used NASA's Hubble Space Telescope to discover seven of the most primitive and distant galaxies ever seen.

One of the galaxies, the astronomers say, might be the all-time record holder—the galaxy as observed existed when the universe was merely 380 million years old. All of the newly discovered galaxies formed more than 13 billion years ago, when the universe was just about 4 percent of its present age, a period astronomers call the "cosmic dawn," when the first galaxies were born. The universe is now 13.7 billion years old.

The new observations span a period between 350 million and 600 million years after the Big Bang and represent the first reliable census of galaxies at such an early time in cosmic history, the team says. The astronomers found that the number of galaxies steadily increased as time went on, supporting the idea that the first galaxies didn't form in a sudden burst but gradually assembled their stars.

Because it takes light billions of years to travel such vast distances, astronomical images show how the universe looked during the period, billions of years ago, when that light first embarked on its journey. The farther away astronomers peer into space, the further back in time they are looking.

In the new study, which was recently accepted for publication in the Astrophysical Journal Letters, the team has explored the deepest reaches of the cosmos—and therefore the most distant past—that has ever been studied with Hubble.

"We've made the longest exposure that Hubble has ever taken, capturing some of the faintest and most distant galaxies," says Richard Ellis, the Steele Family Professor of Astronomy at Caltech and the first author of the paper. "The added depth and our carefully designed observing strategy have been the key features of our campaign to reliably probe this early period of cosmic history."

The results are the first from a new Hubble survey that focused on a small patch of sky known as the Hubble Ultra Deep Field (HUDF), which was first studied nine years ago. The astronomers used Hubble's Wide Field Camera 3 (WFC3) to observe the HUDF in near-infrared light over a period of six weeks during August and September 2012.

To determine the distances to these galaxies, the team measured their colors using four filters that allow Hubble to capture near-infrared light at specific wavelengths. "We employed a filter that has not been used in deep imaging before, and undertook much deeper exposures in some filters than in earlier work, in order to convincingly reject the possibility that some of our galaxies might be foreground objects," says team member James Dunlop of the Institute for Astronomy at the University of Edinburgh.

The carefully chosen filters allowed the astronomers to measure the light that was absorbed by neutral hydrogen, which filled the universe beginning about 400,000 years after the Big Bang. Stars and galaxies started to form roughly 200 million years after the Big Bang. As they did, they bathed the cosmos with ultraviolet light, which ionized the neutral hydrogen by stripping an electron from each hydrogen atom. This so-called "epoch of reionization" lasted until the universe was about a billion years old.

If everything in the universe were stationary, astronomers would see that only a specific wavelength of light was absorbed by neutral hydrogen. But the universe is expanding, and this stretches the wavelengths of light coming from galaxies. The amount that the light is stretched—called the redshift—depends on distance: the farther away a galaxy is, the greater the redshift.

As a result of this cosmic expansion, astronomers observe that the absorption of light by neutral hydrogen occurs at longer wavelengths for more distant galaxies. The filters enabled the researchers to determine at which wavelength the light was absorbed; this revealed the distance to the galaxy—and therefore the period in cosmic history when it was forming. Using this technique to penetrate further and further back in time, the team found a steadily decreasing number of galaxies.
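To make the arithmetic concrete: an observed wavelength is simply the rest wavelength multiplied by (1 + z), where z is the redshift. The sketch below is a back-of-the-envelope illustration, not part of the study's analysis; it uses hydrogen's Lyman-alpha line at 121.6 nanometers and the survey's redshift range of 8.5 to 12.

```python
# Cosmic expansion stretches a rest-frame wavelength by a factor of (1 + z).
LYMAN_ALPHA_NM = 121.6  # hydrogen Lyman-alpha rest wavelength, in nanometers

def observed_wavelength_nm(rest_nm: float, z: float) -> float:
    """Observed wavelength of a line emitted at rest_nm by a galaxy at redshift z."""
    return rest_nm * (1.0 + z)

for z in (8.5, 10.0, 12.0):
    obs = observed_wavelength_nm(LYMAN_ALPHA_NM, z)
    print(f"z = {z:>4}: {obs:7.1f} nm (~{obs / 1000:.2f} microns)")
```

At z = 8.5 the line lands near 1.16 microns, and at z = 12 near 1.58 microns, both in the near-infrared, which is why the team observed with Hubble's near-infrared camera.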

"Our data confirms that reionization is a drawn-out process occurring over several hundred million years with galaxies slowly building up their stars and chemical elements," says coauthor Brant Robertson of the University of Arizona in Tucson. "There wasn't a single dramatic moment when galaxies formed; it's a gradual process."

The new observations—which pushed Hubble to its technical limits—hint at what is to come with next-generation infrared space telescopes, the researchers say. To probe even further back in time to see ever more primitive galaxies, astronomers will need to observe in wavelengths longer than those that can be detected by Hubble. That's because cosmic expansion has stretched the light from the most distant galaxies so much that they glow predominantly in the infrared. The upcoming James Webb Space Telescope, slated for launch in a few years, will target those galaxies.

"Although we may have reached back as far as Hubble will see, Hubble has, in a sense, set the stage for Webb," says team member Anton Koekemoer of the Space Telescope Science Institute in Baltimore. "Our work indicates there is a rich field of even earlier galaxies that Webb will be able to study."

The title of the Astrophysical Journal Letters paper is, "The Abundance of Star-Forming Galaxies in the Redshift Range 8.5 to 12: New Results from the 2012 Hubble Ultra Deep Field Campaign." In addition to Ellis, Dunlop, Robertson, and Koekemoer, the other authors on the Astrophysical Journal Letters paper are Matthew Schenker of Caltech; Ross McLure, Rebecca Bowler, Alexander Rogers, Emma Curtis-Lake, and Michele Cirasuolo of the Institute for Astronomy at the University of Edinburgh; Yoshiaki Ono and Masami Ouchi of the University of Tokyo; Evan Schneider of the University of Arizona; Daniel Stark of the University of Cambridge; Stéphane Charlot of the Institut d'Astrophysique de Paris; and Steven Furlanetto of UCLA. The research was supported by the Space Telescope Science Institute, the European Research Council, the Royal Society, and the Leverhulme Trust.

Science Contacts:

Richard Ellis, Steele Professor of Astronomy
rse@astro.caltech.edu
(626) 676-5530

Matt Schenker, graduate student
schenker@astro.caltech.edu
(516) 428-0587

Writer: 
Marcus Woo

Social Synchronicity

New Caltech-led research finds a connection between bonding and matched movements

PASADENA, Calif.—Humans have a tendency to spontaneously synchronize their movements. For example, the footsteps of two friends walking together may synchronize, although neither individual is consciously aware that it is happening. Similarly, the clapping hands of an audience will naturally fall into sync. Although this type of synchronous body movement has been observed widely, its neurological mechanism and its role in social interactions remain obscure. In a new study, led by cognitive neuroscientists at the California Institute of Technology (Caltech), researchers found that body-movement synchronization between two participants increases following a short session of cooperative training, suggesting that our ability to synchronize body movements is a measurable indicator of social interaction.

"Our findings may provide a powerful tool for identifying the neural underpinnings of both normal social interactions and impaired social interactions, such as the deficits that are often associated with autism," says Shinsuke Shimojo, Gertrude Baltimore Professor of Experimental Psychology at Caltech and senior author of the study.

Shimojo, along with former postdoctoral scholar Kyongsik Yun, and Katsumi Watanabe, an associate professor at the University of Tokyo, presented their work in a paper published December 11 in Scientific Reports, an online and open-access journal from the Nature Publishing Group.

For their study, the team evaluated the hypothesis that synchronous body movement is the basis for more explicit social interaction by measuring the amount of fingertip movement between two participants who were instructed to extend their arms and point their index fingers toward one another—much like the famous scene in E.T. between the alien and Elliott. They were explicitly instructed to keep their own fingers as stationary as possible while keeping their eyes open. The researchers simultaneously recorded the neuronal activity of each participant using electroencephalography, or EEG, recordings. Their finger positions in space were recorded by a motion-capture system.

The participants repeated the task eight times; the first two rounds were called pretraining sessions and the last two were posttraining sessions. The four sessions in between were the cooperative training sessions, in which one person—a randomly chosen leader—made a sequence of large finger movements, and the other participant was instructed to follow the movements. In the posttraining sessions, finger-movement correlation between the two participants was significantly higher compared to that in the pretraining sessions. In addition, socially and sensorimotor-related brain areas were more synchronized between the brains, but not within the brain, in the posttraining sessions. According to the researchers, this experiment, while simple, is novel in that it allows two participants to interact subconsciously while the amount of movement that could potentially disrupt measurement of the neural signal is minimized.
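The paper's full analysis pipeline isn't described here, but the core quantity, how strongly two fingertip traces covary, can be sketched as a plain Pearson correlation. The function and toy traces below are illustrative assumptions, not the authors' actual code:

```python
import math

def pearson_correlation(x, y):
    """Pearson correlation between two equal-length movement traces."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy traces: participant B roughly tracks participant A's finger position.
a = [0.0, 0.2, 0.5, 0.4, 0.1, -0.2, -0.4, -0.1]
b = [0.1, 0.3, 0.4, 0.5, 0.0, -0.3, -0.3, -0.2]
print(f"movement correlation: {pearson_correlation(a, b):.3f}")
```

A value near +1 means the two traces rise and fall together; the study's finding was that this coupling was higher in the posttraining sessions than in the pretraining ones.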

"The most striking outcome of our study is that not only the body-body synchrony but also the brain-brain synchrony between the two participants increased after a short period of social interaction," says Yun. "This may open new vistas to study the brain-brain interface. It appears that when a cooperative relationship exists, two brains form a loose dynamic system."

The team says this information may be useful for romantic or business partner selection.

"Because we can quantify implicit social bonding between two people using our experimental paradigm, we may be able to suggest a more socially compatible partnership in order to maximize matchmaking success rates, by preexamining body synchrony and its increase during a short cooperative session," explains Yun.

As part of the study, the team also surveyed the subjects to rank certain social personality traits, which they then compared to individual rates of increased body synchrony. For example, they found that the participants who expressed the most social anxiety showed the smallest increase in synchrony after cooperative training, while those who reported low levels of anxiety had the highest increases in synchrony. The researchers plan to further evaluate the nature of the direct causal relationship between synchronous body movement and social bonding. Further studies may explore whether a more complex social interaction, such as singing together or being teamed up in a group game, increases synchronous body movements among the participants.

"We may also apply our experimental protocol to better understand the nature and the neural correlates of social impairment in disorders where social deficits are a common symptom, as in schizophrenia or autism," says Shimojo.

The title of the Scientific Reports paper is "Interpersonal body and neural synchronization as a marker of implicit social interaction." Funding for this research was provided by the Japan Science and Technology Agency's CREST and the Tamagawa-Caltech gCOE (global Center Of Excellence) programs.

Writer: 
Katie Neith

Top 12 in 2012

Credit: Benjamin Deverman/Caltech

Gene therapy for boosting nerve-cell repair

Caltech scientists have developed a gene therapy that helps the brain replace its nerve-cell-protecting myelin sheaths—and the cells that produce those sheaths—when they are destroyed by diseases like multiple sclerosis and by spinal-cord injuries. Myelin ensures that nerve cells can send signals quickly and efficiently.

Credit: L. Moser and P. M. Bellan, Caltech

Understanding solar flares

By studying jets of plasma in the lab, Caltech researchers discovered a surprising phenomenon that may be important for understanding how solar flares occur and for developing nuclear fusion as an energy source. Solar flares are bursts of energy from the sun that launch chunks of plasma that can damage orbiting satellites and cause the northern and southern lights on Earth.

Coincidence—or physics?

Caltech planetary scientists provided a new explanation for why the "man in the moon" faces Earth. Their research indicates that the "man"—an illusion caused by dark-colored volcanic plains—faces us because of the rate at which the moon's spin slowed before becoming locked in its current orientation, even though the odds favored the moon's other, more mountainous side.

Choking when the stakes are high

In studying brain activity and behavior, Caltech biologists and social scientists learned that the more someone is afraid of loss, the worse they will perform on a given task—and that the more loss-averse they are, the more likely it is that their performance will peak at a level far below their actual capacity.

Credit: NASA/JPL-Caltech

Eyeing the X-ray universe

NASA's NuSTAR telescope, a Caltech-led and -designed mission to explore the high-energy X-ray universe and to uncover the secrets of black holes, of remnants of dead stars, of energetic cosmic explosions, and even of the sun, was launched on June 13. The instrument is the most powerful high-energy X-ray telescope ever developed and will produce images that are 10 times sharper than any that have been taken before at these energies.

Credit: CERN

Uncovering the Higgs Boson

This summer's likely discovery of the long-sought and highly elusive Higgs boson, the fundamental particle that is thought to endow elementary particles with mass, was made possible in part by contributions from a large contingent of Caltech researchers. They have worked on this problem with colleagues around the globe for decades, building experiments, designing detectors to measure particles ever more precisely, and inventing communication systems and data storage and transfer networks to share information among thousands of physicists worldwide.

Credit: Peter Day

Amplifying research

Researchers at Caltech and NASA's Jet Propulsion Laboratory developed a new kind of amplifier that can be used for everything from exploring the cosmos to examining the quantum world. This new device operates at a frequency range more than 10 times wider than that of other similar kinds of devices, can amplify strong signals without distortion, and introduces the lowest amount of unavoidable noise.

Swims like a jellyfish

Caltech bioengineers partnered with researchers at Harvard University to build a freely moving artificial jellyfish from scratch. The researchers fashioned the jellyfish from silicone and muscle cells into what they've dubbed Medusoid; in the lab, the scientists were able to replicate some of the jellyfish's key mechanical functions, such as swimming and creating feeding currents. The work will help improve researchers' understanding of tissues and how they work, and may inform future efforts in tissue engineering and the design of pumps for the human heart.

Credit: NASA/JPL-Caltech

Touchdown confirmed

After more than eight years of planning, about 354 million miles of space travel, and seven minutes of terror, NASA's Mars Science Laboratory successfully landed on the Red Planet on August 5. The roving analytical laboratory, named Curiosity, is now using its 10 scientific instruments and 17 cameras to search Mars for environments that either were once—or are now—habitable.

Credit: Caltech/Michael Hoffmann

Powering toilets for the developing world

Caltech engineers built a solar-powered toilet that can safely dispose of human waste for just five cents per user per day. The toilet design, which won the Bill and Melinda Gates Foundation's Reinventing the Toilet Challenge, uses the sun to power a reactor that breaks down water and human waste into fertilizer and hydrogen. The hydrogen can be stored as energy in hydrogen fuel cells.

Credit: Caltech / Scott Kelberg and Michael Roukes

Weighing molecules

A Caltech-led team of physicists created the first-ever mechanical device that can measure the mass of an individual molecule. The tool could eventually help doctors to diagnose diseases, and will enable scientists to study viruses, examine the molecular machinery of cells, and better measure nanoparticles and air pollution.

Splitting water

This year, two separate Caltech research groups made key advances in the quest to extract hydrogen from water for energy use. In June, a team of chemical engineers devised a nontoxic, noncorrosive way to split water molecules at relatively low temperatures; this method may prove useful in the application of waste heat to hydrogen production. Then, in September, a group of Caltech chemists identified the mechanism by which some water-splitting catalysts work; their findings should light the way toward the development of cheaper and better catalysts.


In 2012, Caltech faculty and students pursued research into just about every aspect of our world and beyond—from understanding human behavior, to exploring other planets, to developing sustainable waste solutions for the developing world.

In other words, 2012 was another year of discovery at Caltech. Here are a dozen research stories, which were among the most widely read and shared articles from Caltech.edu.

Did we skip your favorite? Connect with Caltech on Facebook to share your pick.

A New Tool for Secret Agents—And the Rest of Us

Caltech engineers make tiny, low-cost, terahertz imager chip

PASADENA, Calif.—A secret agent is racing against time. He knows a bomb is nearby. He rounds a corner, spots a pile of suspicious boxes in the alleyway, and pulls out his cell phone. As he scans it over the packages, their contents appear onscreen. In the nick of time, his handy smartphone application reveals an explosive device, and the agent saves the day. 

Sound far-fetched? In fact it is a real possibility, thanks to tiny inexpensive silicon microchips developed by a pair of electrical engineers at the California Institute of Technology (Caltech). The chips generate and radiate high-frequency electromagnetic waves, called terahertz (THz) waves, that fall into a largely untapped region of the electromagnetic spectrum—between microwaves and far-infrared radiation—and that can penetrate a host of materials without the ionizing damage of X-rays. 

When incorporated into handheld devices, the new microchips could enable a broad range of applications in fields ranging from homeland security to wireless communications to health care, and even touchless gaming. In the future, the technology may lead to noninvasive cancer diagnosis, among other applications.

"Using the same low-cost, integrated-circuit technology that's used to make the microchips found in our cell phones and notepads today, we have made a silicon chip that can operate at nearly 300 times their speed," says Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering at Caltech. "These chips will enable a new generation of extremely versatile sensors." 

Hajimiri and postdoctoral scholar Kaushik Sengupta (PhD '12) describe the work in the December issue of the IEEE Journal of Solid-State Circuits.

Researchers have long touted the potential of the terahertz frequency range, from 0.3 to 3 THz, for scanning and imaging. Such electromagnetic waves can easily penetrate packaging materials and render image details in high resolution, and can also detect the chemical fingerprints of pharmaceutical drugs, biological weapons, or illegal drugs or explosives. However, most existing terahertz systems involve bulky and expensive laser setups that sometimes require exceptionally low temperatures. The potential of terahertz imaging and scanning has gone untapped because of the lack of compact, low-cost technology that can operate in the frequency range.

To finally realize the promise of terahertz waves, Hajimiri and Sengupta used complementary metal-oxide semiconductor, or CMOS, technology, which is commonly used to make the microchips in everyday electronic devices, to design silicon chips with fully integrated functionalities and that operate at terahertz frequencies—but fit on a fingertip.

"This extraordinary level of creativity, which has enabled imaging in the terahertz frequency range, is very much in line with Caltech's long tradition of innovation in the area of CMOS technology," says Ares Rosakis, chair of Caltech's Division of Engineering and Applied Science. "Caltech engineers, like Ali Hajimiri, truly work in an interdisciplinary way to push the boundaries of what is possible."

The new chips boast signals more than a thousand times stronger than existing approaches, and emanate terahertz signals that can be dynamically programmed to point in a specified direction, making them the world's first integrated terahertz scanning arrays.

Using the scanner, the researchers can reveal a razor blade hidden within a piece of plastic, for example, or determine the fat content of chicken tissue. "We are not just talking about a potential. We have actually demonstrated that this works," says Hajimiri. "The first time we saw the actual images, it took our breath away." 

Hajimiri and Sengupta had to overcome multiple hurdles to translate CMOS technology into workable terahertz chips—including the fact that silicon chips are simply not designed to operate at terahertz frequencies. In fact, every transistor has a frequency, known as the cut-off frequency, above which it fails to amplify a signal—and no standard transistors can amplify signals in the terahertz range. 

To work around the cut-off-frequency problem, the researchers harnessed the collective strength of many transistors operating in unison. If multiple elements are operated at the right times at the right frequencies, their power can be combined, boosting the strength of the collective signal. 
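A toy calculation (purely illustrative, and unrelated to the actual circuit design) shows why synchronization pays off: N unit-strength signals summed in phase give N times the amplitude, and hence N-squared times the power, of a single source, while mismatched phases cancel.

```python
import math

def phasor_sum_amplitude(phases):
    """Peak amplitude of unit-amplitude sinusoids summed with the given phases."""
    # Sum the cosine/sine components; the magnitude of the phasor sum
    # is the peak amplitude of the combined signal.
    re = sum(math.cos(p) for p in phases)
    im = sum(math.sin(p) for p in phases)
    return math.hypot(re, im)

n = 16
amp = phasor_sum_amplitude([0.0] * n)  # all elements firing in unison
print(f"{n} in-phase sources: amplitude x{amp:.0f}, power x{amp**2:.0f}")
```

Sixteen in-phase sources yield sixteen times the amplitude, and so 256 times the power, of a single source; two sources a half-cycle apart cancel almost completely. This is the "army of ants" idea Hajimiri describes below.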

"We came up with a way of operating transistors above their cut-off frequencies," explains Sengupta. "We are about 40 or 50 percent above the cut-off frequencies, and yet we are able to generate a lot of power and detect it because of our novel methodologies."

"Traditionally, people have tried to make these technologies work at very high frequencies, with large elements producing the power. Think of these as elephants," says Hajimiri. "Nowadays we can make a very large number of transistors that individually are not very powerful, but when combined and working in unison, can do a lot more. If these elements are synchronized—like an army of ants—they can do everything that the elephant does and then some."

The researchers also figured out how to radiate, or transmit, the terahertz signal once it has been produced. At such high frequencies, a wire cannot be used, and traditional antennas at the microchip scale are inefficient. What they came up with instead was a way to turn the whole silicon chip into an antenna. Again, they went with a distributed approach, incorporating many small metal segments onto the chip that can all be operated at a certain time and strength to radiate the signal en masse.

"We had to take a step back and ask, 'Can we do this in a different way?'" says Sengupta. "Our chips are an example of the kind of innovations that can be unearthed if we blur the partitions between traditional ways of thinking about integrated circuits, electromagnetics, antennae, and the applied sciences. It is a holistic solution."

 The paper is titled "A 0.28 THz Power-Generation and Beam-Steering Array in CMOS Based on Distributed Active Radiators." IBM helped with chip fabrication for this work.

Writer: 
Kimm Fesenmaier

Point of Light

Caltech engineers invent light-focusing device that may lead to applications in computing, communications, and imaging

PASADENA, Calif.—As technology advances, it tends to shrink. From cell phones to laptops—powered by increasingly faster and tinier processors—everything is getting thinner and sleeker. And now light beams are getting smaller, too.

Engineers at the California Institute of Technology (Caltech) have created a device that can focus light into a point just a few nanometers (billionths of a meter) across—an achievement they say may lead to next-generation applications in computing, communications, and imaging.

Because light can carry greater amounts of data more efficiently than electrical signals traveling through copper wires, today's technology is increasingly based on optics. The world is already connected by thousands of miles of optical-fiber cables that deliver email, images, and the latest video gone viral to your laptop.

As we all produce and consume more data, computers and communication networks must be able to handle the deluge of information. Focusing light into tinier spaces can squeeze more data through optical fibers and increase bandwidth. Moreover, by being able to control light at such small scales, optical devices can also be made more compact, requiring less energy to power them.

But focusing light to such minute scales is inherently difficult. Once you reach sizes smaller than the wavelength of light—a few hundred nanometers in the case of visible light—you reach what's called the diffraction limit, and it's physically impossible to focus the light any further.

But now the Caltech researchers, co-led by assistant professor of electrical engineering Hyuck Choo, have built a new kind of waveguide—a tunnellike device that channels light—that gets around this natural limit. The waveguide, which is described in a recent issue of the journal Nature Photonics, is made of amorphous silicon dioxide—which is similar to common glass—and is covered in a thin layer of gold. Just under two microns long, the device is a rectangular box that tapers to a point at one end.

As light is sent through the waveguide, the photons interact with electrons at the interface between the gold and the silicon dioxide. Those electrons oscillate, and the oscillations propagate along the device as waves—similarly to how vibrations of air molecules travel as sound waves. Because the electron oscillations are directly coupled with the light, they carry the same information and properties—and they therefore serve as a proxy for the light.

Instead of focusing the light alone—which is impossible due to the diffraction limit—the new device focuses these coupled electron oscillations, called surface plasmon polaritons (SPPs). The SPPs travel through the waveguide and are focused as they go through the pointy end.

Because the new device is built on a semiconductor chip with standard nanofabrication techniques, says Choo, the co-lead and co-corresponding author of the paper, it is easy to integrate with today's technology.

Previous on-chip nanofocusing devices were only able to focus light into a narrow line. They also were inefficient, typically focusing only a few percent of the incident photons, with the majority absorbed and scattered as they traveled through the devices.

With the new device, light can ultimately be focused in three dimensions, producing a point a few nanometers across, and using half of the light that's sent through, Choo says. (Focusing the light into a slightly bigger spot, 14 by 80 nanometers in size, boosts the efficiency to 70 percent.) The key feature behind the device's focusing ability and efficiency, he says, is its unique design and shape.

"Our new device is based on fundamental research, but we hope it's a good building block for many potentially revolutionary engineering applications," says Myung-Ki Kim, a postdoctoral scholar and the other lead author of the paper.

For example, one application is to turn this nanofocusing device into an efficient, high-resolution biological-imaging instrument, Kim says. A biologist can dye specific molecules in a cell with fluorescent proteins that glow when struck by light. Using the new device, a scientist can focus light into the cell, causing the fluorescent proteins to shine. Because the device concentrates light into such a small point, it can create a high-resolution map of those dyed molecules. Light can also travel in the reverse direction through the nanofocuser: by collecting light through the narrow point, the device turns into a high-resolution microscope. 

The device can also lead to computer hard drives that hold more memory via heat-assisted magnetic recording. Normal hard drives consist of rows of tiny magnets whose north and south poles lie end to end. Data is recorded by applying a magnetic field to switch the polarity of the magnets.

Smaller magnets would allow more memory to be squeezed into a disc of a given size. But the polarities of smaller magnets made of current materials are unstable at room temperature, causing the magnetic poles to spontaneously flip and data to be lost. More stable materials can be used instead—but those require heat to record data, because heat temporarily makes the magnets more susceptible to polarity reversals. Therefore, to write data, a laser is needed to heat the individual magnets, allowing a surrounding magnetic field to flip their polarities.

Today's technology, however, can't focus a laser into a beam that is narrow enough to individually heat such tiny magnets. Indeed, current lasers can only concentrate a beam to an area 300 nanometers wide, which would heat the target magnet as well as adjacent ones—possibly spoiling other recorded data.

Because the new device can focus light down to such small scales, it can heat smaller magnets individually, making it possible for hard drives to pack more magnets and therefore more memory. With current technology, discs can't hold more than 1 terabyte (1,000 gigabytes) per square inch. A nanofocusing device, Choo says, can bump that to 50 terabytes per square inch.
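The quoted areal densities translate into magnet sizes roughly as follows. This is a back-of-the-envelope sketch that treats each bit as one square magnet and ignores formatting overhead—assumptions that are ours, not the researchers':

```python
import math

INCH_IN_METERS = 0.0254

def bit_cell_side_nm(terabytes_per_sq_inch):
    """Side length of a square bit cell at a given areal density,
    assuming 8 bits per byte and one magnet per bit."""
    bits_per_sq_inch = terabytes_per_sq_inch * 1e12 * 8
    area_per_bit_m2 = INCH_IN_METERS**2 / bits_per_sq_inch
    return math.sqrt(area_per_bit_m2) * 1e9  # meters -> nanometers

print(round(bit_cell_side_nm(1), 1))   # ~9 nm per magnet at 1 TB/sq in
print(round(bit_cell_side_nm(50), 1))  # ~1.3 nm at 50 TB/sq in
```

Either way, the magnets are far smaller than the ~300-nanometer spot a conventional laser can heat, which is why a nanofocusing device is needed.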

Then there's the myriad of data-transfer and communication applications, the researchers say. As computing becomes increasingly reliant on optics, devices that concentrate and control data-carrying light at the nanoscale will be essential—and ubiquitous, says Choo, who is a member of the Kavli Nanoscience Institute at Caltech. "Don't be surprised if you see a similar kind of device inside a computer you may someday buy."

The next step is to optimize the design and to begin building imaging instruments and sensors, Choo says. The device is versatile enough that relatively simple modifications could allow it to be used for imaging, computing, or communication.

The title of the Nature Photonics paper is "Nanofocusing in a metal-insulator-metal gap plasmon waveguide with a three-dimensional linear taper." In addition to Choo and Kim, the other authors are Matteo Staffaroni, Tae Joon Seok, Jeffrey Bokor, Ming C. Wu, and Eli Yablonovitch of UC Berkeley and Stefano Cabrini and P. James Schuck of the Molecular Foundry at Lawrence Berkeley National Lab. The research was funded by the Defense Advanced Research Projects Agency (DARPA) Science and Technology Surface-Enhanced Raman Spectroscopy program, the Department of Energy, and the Division of Engineering and Applied Science at Caltech.


This video shows the final fabrication step of the nanofocusing device. A stream of high-energy gallium ions blasts away unwanted layers of gold and silicon dioxide to carve out the shape of the device.


Writer: Marcus Woo

3-D Dentistry

A Caltech imaging innovation will ease your trip to the dentist and may soon energize home entertainment systems too.

Although dentistry has come a long way since the time when decayed teeth were extracted by brute force, most dentists are still using the clumsy, time-consuming, and imperfect impression method when making crowns or bridges. But that process could soon go the way of general anesthesia in family dentistry thanks to a 3-D imaging device developed by Mory Gharib, Caltech vice provost and Hans W. Liepmann Professor of Aeronautics and professor of bioinspired engineering.

By the mid-2000s, complex dental imaging machines—also called dental scanners—began appearing on the market. The devices take pictures of teeth that can be used to create crowns and bridges via computer-aided design/computer-aided manufacturing (CAD/CAM) techniques, giving the patient a new tooth the same day. But efficiency doesn't come without cost—and at more than $100,000 for an entire system, few dentists can afford to invest in the equipment. Within that challenge, Gharib saw an opportunity.

An expert in biomedical engineering, Gharib had built a 3-D microscope in 2006 to help him design better artificial heart valves and other devices for medical applications. Since it's not very practical to view someone's mouth through a microscope, he thought that he could design and build an affordable and portable 3-D camera that would do the same job as the expensive dental scanners.

The system he came up with is surprisingly simple. The camera, which fits into a handheld device, has three apertures that take a picture of the tooth at the same time but from different angles. The three images are then blended together using a computer algorithm to construct a 3-D image. In 2009, Gharib formed a company called Arges Imaging to commercialize the product; last year, Arges was acquired by a multinational dental-technology manufacturer that has been testing the camera with dentists.
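The article does not give Gharib's actual reconstruction algorithm, but the underlying principle is classical triangulation: a nearby point shifts position between views from different apertures, and the size of that shift encodes depth. A minimal sketch with hypothetical camera numbers:

```python
def depth_from_disparity(focal_length_mm, baseline_mm, disparity_mm):
    """Triangulation: the image disparity between two apertures a known
    distance apart (the baseline) determines the distance to the point."""
    return focal_length_mm * baseline_mm / disparity_mm

# Hypothetical handheld-camera values: 10 mm focal length, apertures
# 2 mm apart, and a feature shifted by 0.05 mm between the two images:
print(depth_from_disparity(10, 2, 0.05))  # 400.0 mm from the camera
```

With three apertures instead of two, the same shift can be measured along multiple baselines at once, which is what lets a single snapshot yield a full 3-D surface.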

"Professor Gharib is as brilliant a scientist as he is an engineer and inventor," says Ares Rosakis, chair of Caltech's division of engineering and applied science. "I think that's what we have to do to look at humanity's big problems: we have to be ready to act as pure scientists when we observe and discover as well as act as practical engineers when we invent and apply. This continuous interplay happens at Caltech better than at other institutions."

Indeed, Gharib did not stop with dental applications for his 3-D scanner, but quickly realized that the technology had promise in other industries. For example, there are many potential applications in consumer electronics and other products, he says. While motion-sensing devices with facial and voice-recognition capabilities, like Microsoft's Kinect for the Xbox 360, allow players to feel like they are in the game—running, jumping, and flying over obstacles—"the gestures required are extreme," says Gharib. A more sophisticated imager could make players really feel like they are part of the action.

In robotic and reconstructive surgery, a 3-D imager could provide surgeons with a tool to help them achieve better accuracy and precision. "What if I could take a 3-D picture of your head and have a machine sculpt it into a bust?" says Gharib. "With CAD/CAM, you can take a computer design and turn that into a sculpture, but you need someone who is expert at programming. What if a camera could take a photo and give you 3-D perspective? We have expensive 3-D motion-picture cameras now and 3-D displays, but we don't have much media for them," says Gharib, who earlier this year formed a new company called Apertura Imaging to try to improve the 3-D imaging technology for these nondental applications. "Once we build this new camera, people will come up with all sorts of applications," he says.

Writer: Michael Rogers

More Evidence for an Ancient Grand Canyon

Caltech study supports theory that giant gorge dates back to Late Cretaceous period

For over 150 years, geologists have debated how and when one of the most dramatic features on our planet—the Grand Canyon—was formed. New data unearthed by researchers at the California Institute of Technology (Caltech) builds support for the idea that conventional models, which say the enormous ravine is 5 to 6 million years old, are way off.

In fact, the Caltech research points to a Grand Canyon that is many millions of years older than previously thought, says Kenneth A. Farley, Keck Foundation Professor of Geochemistry at Caltech and coauthor of the study. "Rather than being formed within the last few million years, our measurements suggest that a deep canyon existed more than 70 million years ago," he says.

Farley and Rebecca Flowers—a former postdoctoral scholar at Caltech who is now an assistant professor at the University of Colorado, Boulder—outlined their findings in a paper published in the November 29 issue of Science Express.

Building upon previous research by Farley's lab showing that parts of the eastern canyon are likely at least 55 million years old, the team used a new method to test ancient rocks found at the bottom of the canyon's western section. Past experiments used the amount of helium produced by radioactive decay in apatite—a mineral found in the canyon's walls—to date the samples. This time around, Farley and Flowers took a closer look at the apatite grains, analyzing not only the amount but also the spatial distribution of helium atoms trapped within the mineral's crystals as the host rocks moved closer to the surface of the earth during the massive erosion that formed the Grand Canyon.

Rocks buried in the earth are hot—with temperatures increasing by about 25 degrees Celsius for every kilometer of depth—but as a river canyon erodes the surface downwards towards a buried rock, that rock cools. The thermal history—shown by the helium distribution in the apatite grains—gives important clues about how much time has passed since there was significant erosion in the canyon.   
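The quoted gradient makes the thermal logic easy to check. A sketch assuming a nominal 20 °C surface temperature (the surface value is our assumption; the ~25 °C-per-kilometer gradient is from the article):

```python
def rock_temperature_c(depth_km, surface_temp_c=20.0, gradient_c_per_km=25.0):
    """Temperature of buried rock, using the ~25 C-per-km geothermal
    gradient cited in the article."""
    return surface_temp_c + gradient_c_per_km * depth_km

# Rock 2 km down sits near 70 C; as a canyon erodes toward it, the rock
# cools toward the surface temperature, and that cooling history is what
# the helium distribution in the apatite grains records.
print(rock_temperature_c(2.0))  # 70.0
```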

"If you can document cooling through temperatures only a few degrees warmer than the earth's surface, you can learn about canyon formation," says Farley, who is also chair of the Division of Geological and Planetary Sciences at Caltech.

The analysis of the spatial distribution of helium allowed for detection of variations in the thermal structure at shallow levels of Earth's crust, says Flowers. That gave the team dates that enabled them to fine-tune the timeframe when the Grand Canyon was incised, or cut.

"Our research implies that the Grand Canyon was directly carved to within a few hundred meters of its modern depth by about 70 million years ago," she says.

Now that they have narrowed down the "when" of the Grand Canyon's formation, the geologists plan to continue investigations into how it took shape. The genesis of the canyon has important implications for understanding the evolution of many geological features in the western United States, including their tectonics and topography, according to the team.

"Our major scientific objective is to understand the history of the Colorado Plateau—why does this large and unusual geographic feature exist, and when was it formed," says Farley. "A canyon cannot form without high elevation—you don't cut canyons in rocks below sea level. Also, the details of the canyon's incision seem to suggest large-scale changes in surface topography, possibly including large-scale tilting of the plateau."

"Apatite 4He/3He and (U-Th)/He evidence for an ancient Grand Canyon" appears in the November 29 issue of the journal Science Express. Funding for the research was provided by the National Science Foundation. 

Writer: Katie Neith

Reducing 20/20 Hindsight Bias

PASADENA, Calif.—You probably know it as Monday-morning quarterbacking or 20/20 hindsight: failures often look obvious and predictable after the fact—whether it's an interception thrown by a quarterback under pressure, a surgeon's mistake, a slow response to a natural disaster, or friendly fire in the fog of war.

In legal settings, this tendency to underestimate the challenges faced by someone else—called hindsight bias—can lead to unfair judgments, punishing people who made an honest, unavoidable mistake.

"Hindsight bias is fueled by the fact that you weren't there—you didn't see the fog and confusion," says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at the California Institute of Technology (Caltech). Furthermore, hindsight bias exists even if you were there. The bias is strong enough to alter your own memories, giving you an inflated sense that you saw the result coming. "We know a lot about the nature of these types of judgmental biases," he says. "But in the past, they weren't understood well enough to prevent them."

In a new study, recently published online in the journal Psychological Science, a team led by Camerer and Shinsuke Shimojo, the Gertrude Baltimore Professor of Experimental Psychology, not only found a way to predict the severity of the bias, but also identified a technique that successfully reduces it—a strategy that could help produce fairer assessments in situations such as medical malpractice suits and reviewing police or military actions.

Hindsight bias likely stems from the fact that when given new information, the brain tends to file away the old data and ignore it, Camerer explains. Once we know the outcome of a decision or event, we can't easily retrieve those old files, so we can't accurately evaluate something after the fact. The wide-ranging influence of hindsight bias has been observed in many previous studies, but research into the underlying mechanisms is difficult because these kinds of judgment are complex.

But by using experimental techniques from behavioral economics and visual psychophysics—the study of how visual stimuli affect perception—the Caltech researchers say they were able to probe more deeply into how hindsight emerges during decision making.

In the study, the researchers gave volunteers a basic visual task: to look for humans in blurry pictures. The visual system is among the most heavily studied parts of the brain, and researchers have developed many techniques and tools to understand it. In particular, the Caltech experiment used eye-tracking methods to monitor where the subjects were looking as they evaluated the photos, giving the researchers a window into the subjects' thought processes.

Subjects were divided into those who would do the task—the "performers"—and those who would judge the performers after the fact—the "evaluators." The performers saw a series of blurry photos and were told to guess which ones had humans in them. The evaluators' job was to estimate how many performers guessed correctly for each picture. To examine hindsight bias, some evaluators were shown clear versions of the photos before they saw the blurry photos—a situation analogous to how a jury in a medical malpractice case would already know the correct diagnosis before seeing the X-ray evidence.

The experiment found clear hindsight bias. Evaluators who had been primed by a clear photo greatly overestimated the percentage of people who would correctly identify the human. In other words, because the evaluators already knew the answer, they thought the task was easier than it really was. Furthermore, the measurements were similar to those from the first study of hindsight bias in 1975, which examined how people evaluated the probabilities of various geopolitical events before and after President Nixon's trip to China and the USSR. The fact that the results between such disparate kinds of studies are so consistent shows that the high-level thinking involved in the earlier study and the low-level processes of visual perception in the new study are connected, the researchers say.
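In a design like this, the size of the bias can be expressed as the gap between the evaluators' mean estimate and the performers' actual success rate. A toy illustration with made-up numbers (the paper's actual data are not reproduced here):

```python
def hindsight_bias(evaluator_estimates, actual_success_rate):
    """Positive result: evaluators who already knew the answer
    overestimated how easy the task was."""
    mean_estimate = sum(evaluator_estimates) / len(evaluator_estimates)
    return mean_estimate - actual_success_rate

# Suppose primed evaluators guess that 80-90% of performers spotted the
# human, while performers actually succeeded 55% of the time:
print(round(hindsight_bias([0.80, 0.85, 0.90], 0.55), 2))  # 0.3
```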

In the second part of the study, the researchers tracked the subjects' eye movements and found that hindsight bias depended on how the performers and evaluators inspected the photos. Evaluators were often looking at different parts of the photos compared to the performers, and when that happened there was more hindsight bias. But when both groups' gazes fell on similar locations on the photos, the evaluators were less biased. Seeing the wandering gazes of the first group as they tried to make sense of the blurry images seemed to allow the evaluators to internalize the first group's struggles. In other words, when the two groups literally saw eye to eye, the evaluators were less biased and gave a more accurate estimate of the first group's success rate.

Based on these results, the researchers suspected that if they could show the evaluators where people in the first group had looked—indicated by dots jiggling on the screen—then perhaps the evaluators' gazes would be drawn there as well, reducing any potential hindsight bias. When they did the experiment, that's exactly what happened.

Other studies have shown that merely telling people that they should be aware of hindsight bias is not effective, Camerer says. Something more tangible—such as dots that draw the evaluators' attention—is needed.

Although the experiments were done in a very specific context, the researchers say that these results may be used to reduce hindsight bias in real-life situations. "We think it's a very promising step toward engineering something useful," Camerer says.

For example, eye-tracking technology could be used to record how doctors evaluate X-ray or MRI images. If a doctor happens to make a mistake, showing eye-tracking data could reduce hindsight bias when determining whether the error was honest and unavoidable or whether the doctor was negligent. Lowering the likelihood of hindsight bias, Camerer says, could also decrease defensive medicine, in which doctors perform excessive and costly procedures—or decline to perform a procedure altogether—for fear of being sued for malpractice even when they have done nothing wrong.

As technology advances, our activities are being increasingly monitored and recorded, says Daw-An Wu, the first author of the paper and a former postdoctoral scholar at Caltech who now works at the Caltech Brain Imaging Center. But the study shows that having visual records alone doesn't solve the problem of fair and unbiased accountability. "For there to be some fair judgment afterward, you would hope that the other component of reality is also being recorded—which is not just what is seen, but how people look at it," he says.

The Psychological Science paper is titled "Shared Visual Attention Reduces Hindsight Bias." In addition to Camerer, Shimojo, and Wu, the other author is Stephanie Wang, a former postdoctoral scholar at Caltech who is now an assistant professor at the University of Pittsburgh. This research collaboration was initiated and funded by Trilience Research, with additional support from the Gordon and Betty Moore Foundation, the Tamagawa-Caltech Global COE Program, and the CREST program of the Japan Science and Technology Agency.

Writer: Marcus Woo

High-Energy Physicists Smash Records for Network Data Transfer

New methods for efficient use of long-range networks will support cutting-edge science

PASADENA, Calif.—Physicists led by the California Institute of Technology (Caltech) have smashed yet another series of records for data-transfer speed. The international team of high-energy physicists, computer scientists, and network engineers reached a transfer rate of 339 gigabits per second (Gbps)—equivalent to moving four million gigabytes (or one million full-length movies) per day—nearly doubling last year's record. The team also set a new record for a two-way transfer on a single link by sending data at 187 Gbps between Victoria, Canada, and Salt Lake City.
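The headline figure converts as follows; a quick check of the article's arithmetic, assuming decimal gigabytes and 8 bits per byte:

```python
RATE_GBPS = 339                  # measured transfer rate, gigabits per second
SECONDS_PER_DAY = 24 * 60 * 60

# Convert gigabits/second to gigabytes/day (8 bits per byte):
gigabytes_per_day = RATE_GBPS * SECONDS_PER_DAY / 8
print(f"{gigabytes_per_day:,.0f} GB/day")  # 3,661,200 GB/day -- close to
                                           # the "four million" quoted
```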

The achievements, the researchers say, pave the way for the next level of data-intensive science—in fields such as high-energy physics, astrophysics, genomics, meteorology, and global climate tracking. For example, last summer's discovery at the Large Hadron Collider (LHC) in Geneva of a new particle that may be the long-sought Higgs boson was made possible by a global network of computational and data-storage facilities that transferred more than 100 petabytes (100 million gigabytes) of data in the past year alone. As the LHC continues to slam protons together at higher rates and with more energy, the experiments will produce an even larger flood of data—reaching the exabyte range (a billion gigabytes).

The researchers, led by Caltech, the University of Victoria, and the University of Michigan, together with Brookhaven National Lab, Vanderbilt University, and other partners, demonstrated their achievement at the SuperComputing 2012 (SC12) conference, November 12–16 in Salt Lake City. They used wide-area network circuits connecting Caltech, the University of Victoria Computing Center in British Columbia, the University of Michigan, and the Salt Palace Convention Center in Utah. While setting the records, they also demonstrated other state-of-the-art methods such as software-defined intercontinental networks and direct interconnections between computer memories over the network between Pasadena and Salt Lake City.

"By sharing our methods and tools with scientists in many fields, we aim to further enable the next round of scientific discoveries, taking full advantage of 100-Gbps networks now, and higher-speed networks in the near future," says Harvey Newman, professor of physics at Caltech and the leader of the team. "In particular, we hope that these developments will afford physicists and students throughout the world the opportunity to participate directly in the LHC's next round of discoveries as they emerge."

As the demand for "Big Data" continues to grow exponentially—both in major science projects and in the world at large—the team says they look forward to next year's round of tests using network and data-storage technologies that are just beginning to emerge. Armed with these new technologies and methods, the Caltech team estimates that they may reach 1-terabit-per-second (1,000 Gbps) data transfers over long-range networks by next fall.

More information about the demonstration can be found at http://supercomputing.caltech.edu/.
