Top 12 in 2012

Frontpage Title: 
Top 12 in 2012
Slideshow: 
Credit: Benjamin Deverman/Caltech

Gene therapy for boosting nerve-cell repair

Caltech scientists have developed a gene therapy that helps the brain replace its nerve-cell-protecting myelin sheaths—and the cells that produce those sheaths—when they are destroyed by diseases like multiple sclerosis and by spinal-cord injuries. Myelin ensures that nerve cells can send signals quickly and efficiently.

Credit: L. Moser and P. M. Bellan, Caltech

Understanding solar flares

By studying jets of plasma in the lab, Caltech researchers discovered a surprising phenomenon that may be important for understanding how solar flares occur and for developing nuclear fusion as an energy source. Solar flares are bursts of energy from the sun that launch chunks of plasma that can damage orbiting satellites and cause the northern and southern lights on Earth.

Coincidence—or physics?

Caltech planetary scientists provided a new explanation for why the "man in the moon" faces Earth. Their research indicates that the "man"—an illusion caused by dark-colored volcanic plains—faces us because of the rate at which the moon's spin slowed before becoming locked in its current orientation, even though the odds favored the moon's other, more mountainous side.

Choking when the stakes are high

In studying brain activity and behavior, Caltech biologists and social scientists learned that the more someone fears loss, the worse that person performs on a given task—and that the more loss-averse people are, the more likely their performance is to peak at a level far below their actual capacity.

Credit: NASA/JPL-Caltech

Eyeing the X-ray universe

NASA's NuSTAR telescope, a Caltech-led and -designed mission to explore the high-energy X-ray universe and to uncover the secrets of black holes, of remnants of dead stars, of energetic cosmic explosions, and even of the sun, was launched on June 13. The instrument is the most powerful high-energy X-ray telescope ever developed and will produce images that are 10 times sharper than any that have been taken before at these energies.

Credit: CERN

Uncovering the Higgs Boson

This summer's likely discovery of the long-sought and highly elusive Higgs boson, the fundamental particle that is thought to endow elementary particles with mass, was made possible in part by contributions from a large contingent of Caltech researchers. They have worked on this problem with colleagues around the globe for decades, building experiments, designing detectors to measure particles ever more precisely, and inventing communication systems and data storage and transfer networks to share information among thousands of physicists worldwide.

Credit: Peter Day

Amplifying research

Researchers at Caltech and NASA's Jet Propulsion Laboratory developed a new kind of amplifier that can be used for everything from exploring the cosmos to examining the quantum world. This new device operates at a frequency range more than 10 times wider than that of other similar kinds of devices, can amplify strong signals without distortion, and introduces the lowest amount of unavoidable noise.

Swims like a jellyfish

Caltech bioengineers partnered with researchers at Harvard University to build a freely moving artificial jellyfish from scratch. The researchers fashioned the jellyfish from silicone and muscle cells into what they've dubbed Medusoid; in the lab, the scientists were able to replicate some of the jellyfish's key mechanical functions, such as swimming and creating feeding currents. The work will help improve researchers' understanding of tissues and how they work, and may inform future efforts in tissue engineering and the design of pumps for the human heart.

Credit: NASA/JPL-Caltech

Touchdown confirmed

After more than eight years of planning, about 354 million miles of space travel, and seven minutes of terror, NASA's Mars Science Laboratory successfully landed on the Red Planet on August 5. The roving analytical laboratory, named Curiosity, is now using its 10 scientific instruments and 17 cameras to search Mars for environments that either were once—or are now—habitable.

Credit: Caltech/Michael Hoffmann

Powering toilets for the developing world

Caltech engineers built a solar-powered toilet that can safely dispose of human waste for just five cents per user per day. The toilet design, which won the Bill and Melinda Gates Foundation's Reinventing the Toilet Challenge, uses the sun to power a reactor that breaks down water and human waste into fertilizer and hydrogen. The hydrogen can be stored as energy in hydrogen fuel cells.

Credit: Caltech / Scott Kelberg and Michael Roukes

Weighing molecules

A Caltech-led team of physicists created the first-ever mechanical device that can measure the mass of an individual molecule. The tool could eventually help doctors to diagnose diseases, and will enable scientists to study viruses, examine the molecular machinery of cells, and better measure nanoparticles and air pollution.

Splitting water

This year, two separate Caltech research groups made key advances in the quest to extract hydrogen from water for energy use. In June, a team of chemical engineers devised a nontoxic, noncorrosive way to split water molecules at relatively low temperatures; this method may prove useful in the application of waste heat to hydrogen production. Then, in September, a group of Caltech chemists identified the mechanism by which some water-splitting catalysts work; their findings should light the way toward the development of cheaper and better catalysts.

Body: 

In 2012, Caltech faculty and students pursued research into just about every aspect of our world and beyond—from understanding human behavior, to exploring other planets, to developing sustainable waste solutions for the developing world.

In other words, 2012 was another year of discovery at Caltech. Here are a dozen research stories that were among the most widely read and shared articles on Caltech.edu.

Did we skip your favorite? Connect with Caltech on Facebook to share your pick.

Exclude from News Hub: 
Yes

A New Tool for Secret Agents—And the Rest of Us

Caltech engineers make tiny, low-cost, terahertz imager chip

PASADENA, Calif.—A secret agent is racing against time. He knows a bomb is nearby. He rounds a corner, spots a pile of suspicious boxes in the alleyway, and pulls out his cell phone. As he scans it over the packages, their contents appear onscreen. In the nick of time, his handy smartphone application reveals an explosive device, and the agent saves the day. 

Sound far-fetched? In fact, it is a real possibility, thanks to tiny, inexpensive silicon microchips developed by a pair of electrical engineers at the California Institute of Technology (Caltech). The chips generate and radiate high-frequency electromagnetic waves, called terahertz (THz) waves, that fall into a largely untapped region of the electromagnetic spectrum—between microwaves and far-infrared radiation—and that can penetrate a host of materials without the ionizing damage of X-rays.

When incorporated into handheld devices, the new microchips could enable a broad range of applications in fields ranging from homeland security to wireless communications to health care, and even touchless gaming. In the future, the technology may lead to noninvasive cancer diagnosis, among other applications.

"Using the same low-cost, integrated-circuit technology that's used to make the microchips found in our cell phones and notepads today, we have made a silicon chip that can operate at nearly 300 times their speed," says Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering at Caltech. "These chips will enable a new generation of extremely versatile sensors." 

Hajimiri and postdoctoral scholar Kaushik Sengupta (PhD '12) describe the work in the December issue of the IEEE Journal of Solid-State Circuits.

Researchers have long touted the potential of the terahertz frequency range, from 0.3 to 3 THz, for scanning and imaging. Such electromagnetic waves can easily penetrate packaging materials and render image details in high resolution, and can also detect the chemical fingerprints of pharmaceutical drugs, biological weapons, illegal drugs, and explosives. However, most existing terahertz systems involve bulky and expensive laser setups that sometimes require exceptionally low temperatures. The potential of terahertz imaging and scanning has gone untapped because of the lack of compact, low-cost technology that can operate in this frequency range.
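For orientation, the quoted band sits at millimeter to submillimeter wavelengths. A quick conversion using the standard relation wavelength = c / frequency (this calculation is illustrative and not from the paper):

```python
# Convert terahertz frequencies to wavelengths via wavelength = c / f.
C = 3.0e8  # speed of light in m/s

def wavelength_mm(freq_thz):
    """Wavelength in millimeters for a frequency given in THz."""
    return C / (freq_thz * 1e12) * 1e3

print(wavelength_mm(0.3))             # 1 mm at the low end of the 0.3-3 THz band
print(wavelength_mm(3.0))             # 0.1 mm (100 micrometers) at the high end
print(round(wavelength_mm(0.28), 2))  # ~1.07 mm for the 0.28 THz chip in the paper
```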

To finally realize the promise of terahertz waves, Hajimiri and Sengupta used complementary metal-oxide-semiconductor, or CMOS, technology, which is commonly used to make the microchips in everyday electronic devices, to design silicon chips that have fully integrated functionalities, operate at terahertz frequencies, and still fit on a fingertip.

"This extraordinary level of creativity, which has enabled imaging in the terahertz frequency range, is very much in line with Caltech's long tradition of innovation in the area of CMOS technology," says Ares Rosakis, chair of Caltech's Division of Engineering and Applied Science. "Caltech engineers, like Ali Hajimiri, truly work in an interdisciplinary way to push the boundaries of what is possible."

The new chips boast signals more than a thousand times stronger than those produced by existing approaches, and they emit terahertz signals that can be dynamically programmed to point in a specified direction, making them the world's first integrated terahertz scanning arrays.

Using the scanner, the researchers can reveal a razor blade hidden within a piece of plastic, for example, or determine the fat content of chicken tissue. "We are not just talking about a potential. We have actually demonstrated that this works," says Hajimiri. "The first time we saw the actual images, it took our breath away." 

Hajimiri and Sengupta had to overcome multiple hurdles to translate CMOS technology into workable terahertz chips—including the fact that silicon chips are simply not designed to operate at terahertz frequencies. In fact, every transistor has a frequency, known as the cut-off frequency, above which it fails to amplify a signal—and no standard transistors can amplify signals in the terahertz range. 

To work around the cut-off-frequency problem, the researchers harnessed the collective strength of many transistors operating in unison. If multiple elements are operated at the right times at the right frequencies, their power can be combined, boosting the strength of the collective signal. 

"We came up with a way of operating transistors above their cut-off frequencies," explains Sengupta. "We are about 40 or 50 percent above the cut-off frequencies, and yet we are able to generate a lot of power and detect it because of our novel methodologies."

"Traditionally, people have tried to make these technologies work at very high frequencies, with large elements producing the power. Think of these as elephants," says Hajimiri. "Nowadays we can make a very large number of transistors that individually are not very powerful, but when combined and working in unison, can do a lot more. If these elements are synchronized—like an army of ants—they can do everything that the elephant does and then some."
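The ant-army analogy describes coherent power combining: when N unit-amplitude elements are driven in phase, their amplitudes add, so the combined power scales as N squared, while randomly phased elements average to only N times a single element's power. A toy phasor-sum check (illustrative only; this is not a model of the actual chip circuitry):

```python
import math
import random

def combined_power(phases):
    # Add unit-amplitude signals as phasors; return power relative to one element.
    re = sum(math.cos(p) for p in phases)
    im = sum(math.sin(p) for p in phases)
    return re * re + im * im

n = 16
print(combined_power([0.0] * n))  # 256.0: all in phase, power grows as n**2

# Randomly phased elements combine far less effectively: averaged over many
# draws, the power approaches n, not n**2.
random.seed(1)
draws = [combined_power([random.uniform(0.0, 2.0 * math.pi) for _ in range(n)])
         for _ in range(2000)]
print(sum(draws) / len(draws))  # close to 16
```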

The researchers also figured out how to radiate, or transmit, the terahertz signal once it has been produced. At such high frequencies, a wire cannot be used, and traditional antennas at the microchip scale are inefficient. What they came up with instead was a way to turn the whole silicon chip into an antenna. Again, they went with a distributed approach, incorporating many small metal segments onto the chip that can all be operated at a certain time and strength to radiate the signal en masse.

"We had to take a step back and ask, 'Can we do this in a different way?'" says Sengupta. "Our chips are an example of the kind of innovations that can be unearthed if we blur the partitions between traditional ways of thinking about integrated circuits, electromagnetics, antennae, and the applied sciences. It is a holistic solution."

 The paper is titled "A 0.28 THz Power-Generation and Beam-Steering Array in CMOS Based on Distributed Active Radiators." IBM helped with chip fabrication for this work.

Writer: 
Kimm Fesenmaier
Exclude from News Hub: 
No
News Type: 
Research News

Point of Light

Caltech engineers invent light-focusing device that may lead to applications in computing, communications, and imaging

PASADENA, Calif.—As technology advances, it tends to shrink. From cell phones to laptops—powered by increasingly faster and tinier processors—everything is getting thinner and sleeker. And now light beams are getting smaller, too.

Engineers at the California Institute of Technology (Caltech) have created a device that can focus light into a point just a few nanometers (billionths of a meter) across—an achievement they say may lead to next-generation applications in computing, communications, and imaging.

Because light can carry greater amounts of data more efficiently than electrical signals traveling through copper wires, today's technology is increasingly based on optics. The world is already connected by thousands of miles of optical-fiber cables that deliver email, images, and the latest video gone viral to your laptop.

As we all produce and consume more data, computers and communication networks must be able to handle the deluge of information. Focusing light into tinier spaces can squeeze more data through optical fibers and increase bandwidth. Moreover, by being able to control light at such small scales, optical devices can also be made more compact, requiring less energy to power them.

But focusing light to such minute scales is inherently difficult. Once you reach sizes smaller than the wavelength of light—a few hundred nanometers in the case of visible light—you reach what's called the diffraction limit, and it's physically impossible to focus the light any further.
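The limit the article alludes to is the Abbe criterion: a lens cannot focus light to a spot much smaller than half the wavelength divided by the numerical aperture. A one-line check (standard optics, not specific to this work):

```python
def abbe_spot_nm(wavelength_nm, numerical_aperture=1.0):
    # Abbe diffraction limit: smallest focusable spot ~ wavelength / (2 * NA).
    return wavelength_nm / (2.0 * numerical_aperture)

print(abbe_spot_nm(500.0))   # 250.0 nm for green light, even at a perfect NA of 1
print(abbe_spot_nm(1550.0))  # 775.0 nm for telecom-band infrared light
```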

But now the Caltech researchers, co-led by assistant professor of electrical engineering Hyuck Choo, have built a new kind of waveguide—a tunnel-like device that channels light—that gets around this natural limit. The waveguide, which is described in a recent issue of the journal Nature Photonics, is made of amorphous silicon dioxide—which is similar to common glass—and is covered in a thin layer of gold. Just under two microns long, the device is a rectangular box that tapers to a point at one end.

As light is sent through the waveguide, the photons interact with electrons at the interface between the gold and the silicon dioxide. Those electrons oscillate, and the oscillations propagate along the device as waves—similarly to how vibrations of air molecules travel as sound waves. Because the electron oscillations are directly coupled with the light, they carry the same information and properties—and they therefore serve as a proxy for the light.

Instead of focusing the light alone—which is impossible due to the diffraction limit—the new device focuses these coupled electron oscillations, called surface plasmon polaritons (SPPs). The SPPs travel through the waveguide and are focused as they go through the pointy end.

Because the new device is built on a semiconductor chip with standard nanofabrication techniques, says Choo, the co-lead and co-corresponding author of the paper, it is easy to integrate with today's technology.

Previous on-chip nanofocusing devices were only able to focus light into a narrow line. They also were inefficient, typically focusing only a few percent of the incident photons, with the majority absorbed and scattered as they traveled through the devices.

With the new device, light can ultimately be focused in three dimensions, producing a point a few nanometers across, and using half of the light that's sent through, Choo says. (Focusing the light into a slightly bigger spot, 14 by 80 nanometers in size, boosts the efficiency to 70 percent.) The key feature behind the device's focusing ability and efficiency, he says, is its unique design and shape.

"Our new device is based on fundamental research, but we hope it's a good building block for many potentially revolutionary engineering applications," says Myung-Ki Kim, a postdoctoral scholar and the other lead author of the paper.

For example, one application is to turn this nanofocusing device into an efficient, high-resolution biological-imaging instrument, Kim says. A biologist can dye specific molecules in a cell with fluorescent proteins that glow when struck by light. Using the new device, a scientist can focus light into the cell, causing the fluorescent proteins to shine. Because the device concentrates light into such a small point, it can create a high-resolution map of those dyed molecules. Light can also travel in the reverse direction through the nanofocuser: by collecting light through the narrow point, the device turns into a high-resolution microscope. 

The device can also lead to computer hard drives that hold more memory via heat-assisted magnetic recording. Normal hard drives consist of rows of tiny magnets whose north and south poles lie end to end. Data is recorded by applying a magnetic field to switch the polarity of the magnets.

Smaller magnets would allow more memory to be squeezed into a disc of a given size. But the polarities of smaller magnets made of current materials are unstable at room temperature, causing the magnetic poles to flip spontaneously—and data to be lost. More stable materials can be used instead—but those require heat to record data, because heating temporarily makes the magnets susceptible to polarity reversals. Therefore, to write data, a laser is needed to heat the individual magnets, allowing a surrounding magnetic field to flip their polarities.

Today's technology, however, can't focus a laser into a beam that is narrow enough to individually heat such tiny magnets. Indeed, current lasers can only concentrate a beam to an area 300 nanometers wide, which would heat the target magnet as well as adjacent ones—possibly spoiling other recorded data.

Because the new device can focus light down to such small scales, it can heat smaller magnets individually, making it possible for hard drives to pack more magnets and therefore more memory. With current technology, discs can't hold more than 1 terabyte (1,000 gigabytes) per square inch. A nanofocusing device, Choo says, can bump that to 50 terabytes per square inch.
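As a back-of-envelope check on those figures, the implied area per stored bit follows directly from the density (assuming 1 terabyte = 8 x 10^12 bits; these numbers are illustrative and say nothing about actual drive geometry):

```python
NM_PER_INCH = 25.4e6  # nanometers per inch

def bit_area_nm2(terabytes_per_sq_inch):
    # Area available to each bit, in square nanometers, at a given density.
    bits_per_sq_inch = terabytes_per_sq_inch * 8e12
    return NM_PER_INCH ** 2 / bits_per_sq_inch

print(round(bit_area_nm2(1), 1))   # ~80.6 nm^2 per bit at 1 terabyte per square inch
print(round(bit_area_nm2(50), 2))  # ~1.61 nm^2 per bit at 50 terabytes per square inch
```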

Then there are the myriad data-transfer and communication applications, the researchers say. As computing becomes increasingly reliant on optics, devices that concentrate and control data-carrying light at the nanoscale will be essential—and ubiquitous, says Choo, who is a member of the Kavli Nanoscience Institute at Caltech. "Don't be surprised if you see a similar kind of device inside a computer you may someday buy."

The next step is to optimize the design and to begin building imaging instruments and sensors, Choo says. The device is versatile enough that relatively simple modifications could allow it to be used for imaging, computing, or communication.

The title of the Nature Photonics paper is "Nanofocusing in a metal-insulator-metal gap plasmon waveguide with a three-dimensional linear taper." In addition to Choo and Kim, the other authors are Matteo Staffaroni, Tae Joon Seok, Jeffrey Bokor, Ming C. Wu, and Eli Yablonovitch of UC Berkeley and Stefano Cabrini and P. James Schuck of the Molecular Foundry at Lawrence Berkeley National Lab. The research was funded by the Defense Advanced Research Projects Agency (DARPA) Science and Technology Surface-Enhanced Raman Spectroscopy program, the Department of Energy, and the Division of Engineering and Applied Science at Caltech.

This video shows the final fabrication step of the nanofocusing device. A stream of high-energy gallium ions blasts away unwanted layers of gold and silicon dioxide to carve out the shape of the device.

Writer: 
Marcus Woo
Exclude from News Hub: 
No
News Type: 
Research News

3-D Dentistry

A Caltech imaging innovation will ease your trip to the dentist and may soon energize home entertainment systems too.

Although dentistry has come a long way since the time when decayed teeth were extracted by brute force, most dentists are still using the clumsy, time-consuming, and imperfect impression method when making crowns or bridges. But that process could soon go the way of general anesthesia in family dentistry thanks to a 3-D imaging device developed by Mory Gharib, Caltech vice provost and Hans W. Liepmann Professor of Aeronautics and professor of bioinspired engineering.

By the mid-2000s, complex dental imaging machines—also called dental scanners—began appearing on the market. The devices take pictures of teeth that can be used to create crowns and bridges via computer-aided design/computer-aided manufacturing (CAD/CAM) techniques, giving the patient a new tooth the same day. But efficiency doesn't come without cost—and at more than $100,000 for an entire system, few dentists can afford to invest in the equipment. Within that challenge, Gharib saw an opportunity.

An expert in biomedical engineering, Gharib had built a 3-D microscope in 2006 to help him design better artificial heart valves and other devices for medical applications. Since it's not very practical to view someone's mouth through a microscope, he thought that he could design and build an affordable and portable 3-D camera that would do the same job as the expensive dental scanners.

The system he came up with is surprisingly simple. The camera, which fits into a handheld device, has three apertures that take a picture of the tooth at the same time but from different angles. The three images are then blended together using a computer algorithm to construct a 3-D image. In 2009, Gharib formed a company called Arges Imaging to commercialize the product; last year, Arges was acquired by a multinational dental-technology manufacturer that has been testing the camera with dentists.
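The article does not spell out the blending algorithm, but the geometry behind any multi-aperture imager of this kind is triangulation: a feature viewed through apertures a known baseline apart shifts across the sensor by a disparity that is inversely proportional to its depth. A textbook sketch with made-up numbers (this is not the Arges Imaging design):

```python
def depth_from_disparity(focal_length_mm, baseline_mm, disparity_mm):
    # Two-view triangulation: depth = focal length * baseline / disparity.
    return focal_length_mm * baseline_mm / disparity_mm

# A feature that shifts 0.5 mm on the sensor between apertures 2 mm apart,
# imaged through a 4 mm focal-length lens, lies 16 mm from the camera.
print(depth_from_disparity(4.0, 2.0, 0.5))  # 16.0
```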

"Professor Gharib is as brilliant a scientist as he is an engineer and inventor," says Ares Rosakis, chair of Caltech's Division of Engineering and Applied Science. "I think that's what we have to do to look at humanity's big problems: we have to be ready to act as pure scientists when we observe and discover as well as act as practical engineers when we invent and apply. This continuous interplay happens at Caltech better than at other institutions."

Indeed, Gharib did not stop with dental applications for his 3-D scanner, but quickly realized that the technology had promise in other industries. For example, there are many potential applications in consumer electronics and other products, he says. While motion-sensing devices with facial and voice-recognition capabilities, like Microsoft's Kinect for the Xbox 360, allow players to feel like they are in the game—running, jumping, and flying over obstacles—"the gestures required are extreme," says Gharib. A more sophisticated imager could make players really feel like they are part of the action.

In robotic and reconstructive surgery, a 3-D imager could provide surgeons with a tool to help them achieve better accuracy and precision. "What if I could take a 3-D picture of your head and have a machine sculpt it into a bust?" says Gharib. "With CAD/CAM, you can take a computer design and turn that into a sculpture, but you need someone who is expert at programming. What if a camera could take a photo and give you 3-D perspective? We have expensive 3-D motion-picture cameras now and 3-D displays, but we don't have much media for them," says Gharib, who earlier this year formed a new company called Apertura Imaging to try to improve the 3-D imaging technology for these nondental applications. "Once we build this new camera, people will come up with all sorts of applications," he says.

Writer: 
Michael Rogers
Exclude from News Hub: 
No

More Evidence for an Ancient Grand Canyon

Caltech study supports theory that giant gorge dates back to Late Cretaceous period

For over 150 years, geologists have debated how and when one of the most dramatic features on our planet—the Grand Canyon—was formed. New data unearthed by researchers at the California Institute of Technology (Caltech) builds support for the idea that conventional models, which say the enormous ravine is 5 to 6 million years old, are way off.

In fact, the Caltech research points to a Grand Canyon that is many millions of years older than previously thought, says Kenneth A. Farley, Keck Foundation Professor of Geochemistry at Caltech and coauthor of the study. "Rather than being formed within the last few million years, our measurements suggest that a deep canyon existed more than 70 million years ago," he says.

Farley and Rebecca Flowers—a former postdoctoral scholar at Caltech who is now an assistant professor at the University of Colorado, Boulder—outlined their findings in a paper published in the November 29 issue of Science Express.

Building upon previous research by Farley's lab that showed that parts of the eastern canyon are likely to be at least 55 million years old, the team used a new method to test ancient rocks found at the bottom of the canyon's western section. Past experiments used the amount of helium produced by radioactive decay in apatite—a mineral found in the canyon's walls—to date the samples. This time around, Farley and Flowers took a closer look at the apatite grains by analyzing not only the amount but also the spatial distribution of helium atoms that were trapped within the crystals of the mineral as they moved closer to the surface of the earth during the massive erosion that caused the Grand Canyon to form.

Rocks buried in the earth are hot—with temperatures increasing by about 25 degrees Celsius for every kilometer of depth—but as a river canyon erodes the surface downwards towards a buried rock, that rock cools. The thermal history—shown by the helium distribution in the apatite grains—gives important clues about how much time has passed since there was significant erosion in the canyon.   
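That 25-degrees-per-kilometer gradient gives a simple linear geotherm, so a temperature recorded by the helium data maps directly onto a past burial depth. A minimal sketch, assuming a 20 degree Celsius surface temperature (the article does not give one):

```python
GRADIENT_C_PER_KM = 25.0  # geothermal gradient quoted in the article
SURFACE_TEMP_C = 20.0     # assumed surface temperature, not from the article

def rock_temperature_c(depth_km):
    # Linear geotherm: temperature rises ~25 C for every kilometer of burial.
    return SURFACE_TEMP_C + GRADIENT_C_PER_KM * depth_km

def burial_depth_km(temperature_c):
    # Invert the geotherm: a recorded cooling temperature implies a past depth.
    return (temperature_c - SURFACE_TEMP_C) / GRADIENT_C_PER_KM

print(rock_temperature_c(2.0))  # 70.0 C for a rock buried 2 km deep
print(burial_depth_km(45.0))    # 1.0 km of rock once covered a sample recording 45 C
```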

"If you can document cooling through temperatures only a few degrees warmer than the earth's surface, you can learn about canyon formation," says Farley, who is also chair of the Division of Geological and Planetary Sciences at Caltech.

The analysis of the spatial distribution of helium allowed for detection of variations in the thermal structure at shallow levels of Earth's crust, says Flowers. That gave the team dates that enabled them to fine-tune the timeframe when the Grand Canyon was incised, or cut.

"Our research implies that the Grand Canyon was directly carved to within a few hundred meters of its modern depth by about 70 million years ago," she says.

Now that they have narrowed down the "when" of the Grand Canyon's formation, the geologists plan to continue investigations into how it took shape. The genesis of the canyon has important implications for understanding the evolution of many geological features in the western United States, including their tectonics and topography, according to the team.

"Our major scientific objective is to understand the history of the Colorado Plateau—why does this large and unusual geographic feature exist, and when was it formed?" says Farley. "A canyon cannot form without high elevation—you don't cut canyons in rocks below sea level. Also, the details of the canyon's incision seem to suggest large-scale changes in surface topography, possibly including large-scale tilting of the plateau."

"Apatite 4He/3He and (U-Th)/He evidence for an ancient Grand Canyon" appears in the November 29 issue of the journal Science Express. Funding for the research was provided by the National Science Foundation. 

Writer: 
Katie Neith
Exclude from News Hub: 
No
News Type: 
Research News

Reducing 20/20 Hindsight Bias

PASADENA, Calif.—You probably know it as Monday-morning quarterbacking or 20/20 hindsight: failures often look obvious and predictable after the fact—whether it's an interception thrown by a quarterback under pressure, a surgeon's mistake, a slow response to a natural disaster, or friendly fire in the fog of war.

In legal settings, this tendency to underestimate the challenges faced by someone else—called hindsight bias—can lead to unfair judgments, punishing people who made an honest, unavoidable mistake.

"Hindsight bias is fueled by the fact that you weren't there—you didn't see the fog and confusion," says Colin Camerer, the Robert Kirby Professor of Behavioral Economics at the California Institute of Technology (Caltech). Furthermore, hindsight bias exists even if you were there. The bias is strong enough to alter your own memories, giving you an inflated sense that you saw the result coming. "We know a lot about the nature of these types of judgmental biases," he says. "But in the past, they weren't understood well enough to prevent them."

In a new study, recently published online in the journal Psychological Science, a team led by Camerer and Shinsuke Shimojo, the Gertrude Baltimore Professor of Experimental Psychology, not only found a way to predict the severity of the bias, but also identified a technique that successfully reduces it—a strategy that could help produce fairer assessments in situations such as medical malpractice suits and reviewing police or military actions.

Hindsight bias likely stems from the fact that when given new information, the brain tends to file away the old data and ignore it, Camerer explains. Once we know the outcome of a decision or event, we can't easily retrieve those old files, so we can't accurately evaluate something after the fact. The wide-ranging influence of hindsight bias has been observed in many previous studies, but research into the underlying mechanisms is difficult because these kinds of judgment are complex.

But by using experimental techniques from behavioral economics and visual psychophysics—the study of how visual stimuli affect perception—the Caltech researchers say they were able to probe more deeply into how hindsight emerges during decision making.

In the study, the researchers gave volunteers a basic visual task: to look for humans in blurry pictures. The visual system is among the most heavily studied parts of the brain, and researchers have developed many techniques and tools to understand it. In particular, the Caltech experiment used eye-tracking methods to monitor where the subjects were looking as they evaluated the photos, giving the researchers a window into the subjects' thought processes.

Subjects were divided into those who would do the task—the "performers"—and those who would judge the performers after the fact—the "evaluators." The performers saw a series of blurry photos and were told to guess which ones had humans in them. The evaluators' job was to estimate how many performers guessed correctly for each picture. To examine hindsight bias, some evaluators were shown clear versions of the photos before they saw the blurry photos—a situation analogous to how a jury in a medical malpractice case would already know the correct diagnosis before seeing the X-ray evidence.

The experiment found clear hindsight bias. Evaluators who had been primed by a clear photo greatly overestimated the percentage of people who would correctly identify the human. In other words, because the evaluators already knew the answer, they thought the task was easier than it really was. Furthermore, the measurements were similar to those from the first study of hindsight bias in 1975, which examined how people evaluated the probabilities of various geopolitical events before and after President Nixon's trip to China and the USSR. The fact that the results between such disparate kinds of studies are so consistent shows that the high-level thinking involved in the earlier study and the low-level processes of visual perception in the new study are connected, the researchers say.

In the second part of the study, the researchers tracked the subjects' eye movements and found that hindsight bias depended on how the performers and evaluators inspected the photos. Evaluators often looked at different parts of the photos than the performers did, and when that happened there was more hindsight bias. But when both groups' gazes fell on similar locations in the photos, the evaluators were less biased, apparently because sharing the performers' visual experience of the blurry images helped the evaluators internalize the performers' struggles. In other words, when the two groups literally saw eye to eye, the evaluators gave a more accurate estimate of the performers' success rate.

Based on these results, the researchers suspected that if they could show the evaluators where people in the first group had looked—indicated by dots jiggling on the screen—then perhaps the evaluators' gazes would be drawn there as well, reducing any potential hindsight bias. When they did the experiment, that's exactly what happened.

Other studies have shown that merely telling people that they should be aware of hindsight bias is not effective, Camerer says. Something more tangible—such as dots that draw the evaluators' attention—is needed.

Although the experiments were done in a very specific context, the researchers say that these results may be used to reduce hindsight bias in real-life situations. "We think it's a very promising step toward engineering something useful," Camerer says.

For example, eye-tracking technology could be used to record how doctors evaluate X-ray or MRI images. If a doctor happens to make a mistake, showing eye-tracking data could reduce hindsight bias when determining whether the error was honest and unavoidable or the result of negligence. Lowering the likelihood of hindsight bias, Camerer says, could also decrease defensive medicine, in which doctors perform excessive and costly procedures—or decline to do a procedure altogether—for fear of being sued for malpractice even when they have done nothing wrong.

As technology advances, our activities are being increasingly monitored and recorded, says Daw-An Wu, the first author of the paper and a former postdoctoral scholar at Caltech who now works at the Caltech Brain Imaging Center. But the study shows that having visual records alone doesn't solve the problem of fair and unbiased accountability. "For there to be some fair judgment afterward, you would hope that the other component of reality is also being recorded—which is not just what is seen, but how people look at it," he says.

The Psychological Science paper is titled "Shared Visual Attention Reduces Hindsight Bias." In addition to Camerer, Shimojo, and Wu, the other author is Stephanie Wang, a former postdoctoral scholar at Caltech who is now an assistant professor at the University of Pittsburgh. This research collaboration was initiated and funded by Trilience Research, with additional support from the Gordon and Betty Moore Foundation, the Tamagawa-Caltech Global COE Program, and the CREST program of the Japan Science and Technology Agency.

Writer: 
Marcus Woo
Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

High-Energy Physicists Smash Records for Network Data Transfer

New methods for efficient use of long-range networks will support cutting-edge science

PASADENA, Calif.—Physicists led by the California Institute of Technology (Caltech) have smashed yet another series of records for data-transfer speed. The international team of high-energy physicists, computer scientists, and network engineers reached a transfer rate of 339 gigabits per second (Gbps)—equivalent to moving four million gigabytes (or one million full-length movies) per day, nearly doubling last year's record. The team also set a new record for a two-way transfer on a single link by sending data at 187 Gbps between Victoria, Canada, and Salt Lake City.
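The quoted daily volume follows directly from the link rate; a quick sanity check in Python (the "one million movies" comparison assumes a movie of roughly 4 GB, which is our assumption, not the team's stated figure):

```python
def gbps_to_gb_per_day(rate_gbps: float) -> float:
    """Convert a sustained rate in gigabits per second
    to gigabytes moved per day (8 bits per byte)."""
    seconds_per_day = 86_400
    bits_per_byte = 8
    return rate_gbps * seconds_per_day / bits_per_byte

print(gbps_to_gb_per_day(339))  # 3661200.0 GB/day, i.e. roughly four million gigabytes
```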

The achievements, the researchers say, pave the way for the next level of data-intensive science—in fields such as high-energy physics, astrophysics, genomics, meteorology, and global climate tracking. For example, last summer's discovery at the Large Hadron Collider (LHC) in Geneva of a new particle that may be the long-sought Higgs boson was made possible by a global network of computational and data-storage facilities that transferred more than 100 petabytes (100 million gigabytes) of data in the past year alone. As the LHC continues to slam protons together at higher rates and with more energy, the experiments will produce an even larger flood of data—reaching the exabyte range (a billion gigabytes).

The researchers, led by Caltech, the University of Victoria, and the University of Michigan, together with Brookhaven National Lab, Vanderbilt University, and other partners, demonstrated their achievement at the SuperComputing 2012 (SC12) conference, November 12–16 in Salt Lake City. They used wide-area network circuits connecting Caltech, the University of Victoria Computing Center in British Columbia, the University of Michigan, and the Salt Palace Convention Center in Utah. While setting the records, they also demonstrated other state-of-the-art methods such as software-defined intercontinental networks and direct interconnections between computer memories over the network between Pasadena and Salt Lake City.

"By sharing our methods and tools with scientists in many fields, we aim to further enable the next round of scientific discoveries, taking full advantage of 100-Gbps networks now, and higher-speed networks in the near future," says Harvey Newman, professor of physics at Caltech and the leader of the team. "In particular, we hope that these developments will afford physicists and students throughout the world the opportunity to participate directly in the LHC's next round of discoveries as they emerge."

As the demand for "Big Data" continues to grow exponentially—both in major science projects and in the world at large—the team says it looks forward to next year's round of tests using network and data-storage technologies that are just beginning to emerge. Armed with these new technologies and methods, the Caltech team estimates that it may reach data transfers of one terabit per second (a thousand Gbps) over long-range networks by next fall.

More information about the demonstration can be found at http://supercomputing.caltech.edu/.
 

Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

An Eye for Science: In the Lab of Markus Meister

Take one look around Markus Meister's new lab and office space on the top floor of the Beckman Behavioral Biology building, and you can tell that he has an eye for detail. Curving, luminescent walls change color every few seconds, wrapping around lab space and giving a warm glow to the open, airy offices that line the east wall. A giant picture of neurons serves as wallpaper, and a column is wrapped in an image from the inside of a retina. And while he may have picked up some tips from his architect wife to help direct the design of his lab, Meister is the true visionary—a biologist studying the details of the eye.

"Since we study the visual system, light is a natural interest for me," says Meister, of the lab he helped plan before joining the Caltech faculty in July. "The architecture team responded to this in so many creative ways. They even installed windows inside the lab space, so that researchers at the bench could see straight through the offices to the nature outside."

Exactly how our eyes process those images of the world around us forms the basis of Meister's research. In particular, he investigates the circuits of nerve cells in the retina—a light-sensitive tissue that lines the inner surface of your eyeball and essentially captures images as they come in through the cornea and lens at the front of your eye. A traditional view of the retina is that it acts like the film in a camera, simply absorbing light and then sending a signal directly to the brain about the image you are viewing.

But Meister, who earned his PhD at Caltech in 1987 and was the Tarr Professor of Molecular and Cellular Biology at Harvard before moving back west, sees things a bit differently.

"There is a lot of preprocessing that occurs in the retina, so the signal that gets sent to the brain ultimately is quite different from the raw image that is projected onto the retina by the lens in your eye," he says. "And it's not just one kind of processing. There are on the order of 20 different ways in which the retina manipulates a raw image and then sends those results onto the brain through the optic nerve. An ongoing puzzle is trying to figure out how that is possible with the limited circuitry that exists in the retina."

Meister and his lab are dedicated to finding new clues to help decode that puzzle. In a recent study, published in Nature Neuroscience, he and Hiroki Asari, a postdoctoral scholar in biology at Caltech, studied the connections between particular cells within the retina. Their specific discovery, he explains, has to do with the associations between bipolar cells, which are the neurons in the middle of the retina, and ganglion cells, which are the very last neurons in the retina that send signals to the brain. What they found is that the connections between these bipolar cells and ganglion cells are much more diverse than had been expected.

"Each upstream bipolar cell can make different neural circuits to do particular kinds of computations before it sends signals to ganglion cells," says Asari, who began his postdoctoral work at Harvard and moved to Caltech with the Meister lab.

The team was also able to show that in many cases, the processing of information in the retina involves amacrine cells, a type of cell in the eye that seems to be involved in fine-tuning the properties of individual bipolar cell actions.

"It's a little bit like electronics, where you have a transistor—one wire controlling the connection between two other wires—that is absolutely central to everything," says Meister. "In a way, this connection between the amacrine cells and the bipolar cell and the ganglion cell looks a little bit like a transistor, in that the amacrine cell can control how the signal flows from the bipolar to the ganglion cell. That's an analogy that I think will help us understand the intricacy of the signal flow in the retina. A goal that we have is to ultimately understand these neural circuits in the same way that we understand electronic circuits."

The next step in this particular line of research is to figure out exactly where the amacrine cells are making their impact on bipolar cells. They believe most of the action happens at the synapses, the connection points between the cells. Studying this area requires new technology to get a good look at the tiny connectors. Luckily, Meister's new lab includes an in-house shop room—complete with a milling machine, a band saw, and other power tools needed to build things like microscopes.

"In this lab, we're doing things on many levels—from the giant milling machine all the way down to measurements on the micron scale," says Meister.

He also plans to expand his research focus. The team has started to study the visual behavior of mice, evaluating, for example, the kinds of innate reactions—responses that require no prior knowledge of the environment—that the animals have to certain visual stimuli. Ultimately, the researchers would like to know which of the pathways that come out of the retina control which behaviors, and whether they can find a link between the processing of vision that occurs early in the eye and how the animal functions in its environment using its visual system.

"In my new lab at Caltech, I'm trying to branch out further into the visual system to leverage the understanding we have about the front end—namely processing in the retina—to better understand the different actions that occur in the brain, all the way to certain behaviors of the animal that are based on visual stimuli," he says.

Meister says that he's also excited to get back into an environment that's more focused on math, physics, and engineering—something he hopes to take advantage of at both the faculty/colleague level and the student level.

"Our research subject is one that I feel connects me with so many different areas of science—from molecular genetics to theoretical physics," he explains. "You can rely on a wide range of collaborators and people who are interested in different aspects of the subject. To me, that's been the most satisfying part of my career. I have collaborative projects with neurosurgeons, a theoretical physicist who develops models of visual processing, a particle physicist who builds miniature detector electronics, a molecular genetics expert. These interactions really keep you broadly connected."

And he's hoping to connect to even more people now that he's settled in his Beckman Behavioral Biology lab—even if it's just for a friendly visit.

"I know we're in a remote corner of the building at the north end of the top floor, but we try to keep our door open at all times," says Meister. "There are only five people in the lab right now, and it gets kind of lonely. We're going to build the group up, but in the meantime it would be nice if people came to visit us."

Writer: 
Katie Neith
Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

Nano Insights Could Lead to Improved Nuclear Reactors

Caltech researchers examine self-healing abilities of some materials

PASADENA, Calif.—In order to build the next generation of nuclear reactors, materials scientists are trying to unlock the secrets of certain materials that are radiation-damage tolerant. Now researchers at the California Institute of Technology (Caltech) have brought new understanding to one of those secrets—how the interfaces between two carefully selected metals can absorb, or heal, radiation damage.

"When it comes to selecting proper structural materials for advanced nuclear reactors, it is crucial that we understand radiation damage and its effects on materials properties. And we need to study these effects on isolated small-scale features," says Julia R. Greer, an assistant professor of materials science and mechanics at Caltech. With that in mind, Greer and colleagues from Caltech, Sandia National Laboratories, UC Berkeley, and Los Alamos National Laboratory have taken a closer look at radiation-induced damage, zooming in all the way to the nanoscale—where lengths are measured in billionths of meters. Their results appear online in the journals Advanced Functional Materials and Small.

During nuclear irradiation, energetic particles like neutrons and ions displace atoms from their regular lattice sites within the metals that make up a reactor, setting off cascades of collisions that ultimately damage materials such as steel. One of the byproducts of this process is the formation of helium bubbles. Since helium does not dissolve within solid materials, it forms pressurized gas bubbles that can coalesce, making the material porous, brittle, and therefore susceptible to breakage.  

Some nano-engineered materials are able to resist such damage and may, for example, prevent helium bubbles from coalescing into larger voids. For instance, some metallic nanolaminates—materials made up of extremely thin alternating layers of different metals—are able to absorb various types of radiation-induced defects at the interfaces between the layers because of the mismatch that exists between their crystal structures.

"People have an idea, from computations, of what the interfaces as a whole may be doing, and they know from experiments what their combined global effect is. What they don't know is what exactly one individual interface is doing and what specific role the nanoscale dimensions play," says Greer. "And that's what we were able to investigate."

Peri Landau and Guo Qiang, both postdoctoral scholars in Greer's lab at the time of this study, used a chemical procedure called electroplating to grow either miniature pillars of pure copper or pillars containing exactly one interface—an iron crystal sitting atop a copper crystal. Then, working with partners at Sandia and Los Alamos, they replicated the effect of helium irradiation by implanting the nanopillars with helium ions, both directly at the interface and, in separate experiments, throughout the pillar.

The researchers then used a one-of-a-kind nanomechanical testing instrument, called the SEMentor, which is located in the subbasement of the W. M. Keck Engineering Laboratories building at Caltech, to both compress the tiny pillars and pull on them as a way to learn about the mechanical properties of the pillars—how their length changed when a certain stress was applied, and where they broke, for example. 

"These experiments are very, very delicate," Landau says. "If you think about it, each one of the pillars—which are only 100 nanometers wide and about 700 nanometers long—is a thousand times thinner than a single strand of hair. We can only see them with high-resolution microscopes."

The team found that once they inserted a small amount of helium into a pillar at the interface between the iron and copper crystals, the pillar's strength increased by more than 60 percent compared to a pillar without helium. That much was expected, Landau explains, because "irradiation hardening is a well-known phenomenon in bulk materials." However, she notes, such hardening is typically linked with embrittlement, "and we do not want materials to be brittle."

Surprisingly, the researchers found that in their nanopillars, the increase in strength did not come along with embrittlement, either when the helium was implanted at the interface, or when it was distributed more broadly. Indeed, Greer and her team found, the material was able to maintain its ductility because the interface itself was able to deform gradually under stress.

This means that in a metallic nanolaminate material, small helium bubbles are able to migrate to an interface, which is never more than a few tens of nanometers away, essentially healing the material. "What we're showing is that it doesn't matter if the bubble is within the interface or uniformly distributed—the pillars don't ever fail in a catastrophic, abrupt fashion," Greer says. She notes that the implanted helium bubbles—which are described in the Advanced Functional Materials paper—were one to two nanometers in diameter; in future studies, the group will repeat the experiment with larger bubbles at higher temperatures in order to represent additional conditions related to radiation damage.

In the Small paper, the researchers showed that even nanopillars made entirely of copper, with no layering of metals, exhibited irradiation-induced hardening. That stands in stark contrast to the results from previous work by other researchers on proton-irradiated copper nanopillars, which exhibited the same strengths as those that had not been irradiated. Greer says that this points to the need to evaluate different types of irradiation-induced defects at the nanoscale, because they may not all have the same effects on materials.

While no one is likely to be building nuclear reactors out of nanopillars anytime soon, Greer argues that it is important to understand how individual interfaces and nanostructures behave. "This work is basically teaching us what gives materials the ability to heal radiation damage—what tolerances they have and how to design them," she says. That information can be incorporated into future models of material behavior that can help with the design of new materials.

Along with Greer, Landau, and Qiang, Khalid Hattar of Sandia National Laboratories is also a coauthor on the paper "The Effect of He Implantation on the Tensile Properties and Microstructure of Cu/Fe Nano-bicrystals," which appears online in Advanced Functional Materials. Peter Hosemann of UC Berkeley and Yongqiang Wang of Los Alamos National Laboratory are coauthors on the paper "Helium Implantation Effects on the Compressive Response of Cu Nanopillars," which appears online in the journal Small. The work was supported by the U.S. Department of Energy and carried out, in part, in the Kavli Nanoscience Institute at Caltech.

Writer: 
Kimm Fesenmaier
Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

A Fresh Look at Psychiatric Drugs

Caltech researchers propose a new approach to understanding common treatments

Drugs for psychiatric disorders such as depression and schizophrenia often require weeks to take full effect. "What takes so long?" has long been one of psychiatry's most stubborn mysteries. Now a fresh look at previous research on quite a different drug—nicotine—is providing answers. The new ideas may point the way toward new generations of psychiatric drugs that work faster and better.

For several years, Henry Lester, Bren Professor of Biology at Caltech, and his colleagues have worked to understand nicotine addiction by repeatedly exposing nerve cells to the drug and studying the effects. At first glance, it's a simple story: nicotine binds to, and activates, specific nicotine receptors on the surface of nerve cells within a few seconds of being inhaled. But nicotine addiction develops over weeks or months; and so the Caltech team wanted to know what changes in the nerve cell during that time, hidden from view.

The story that developed is that nicotine infiltrates deep into the cell, entering a protein-making structure called the endoplasmic reticulum and increasing its output of the same nicotine receptors. These receptors then travel to the cell's surface. In other words, nicotine acts "inside out," directing actions that ultimately fuel and support the body's addiction to nicotine.

"That nicotine works 'inside out' was a surprise a few years ago," says Lester. "We originally thought that nicotine acted only from the outside in, and that a cascade of effects trickled down to the endoplasmic reticulum and the cell's nucleus, slowly changing their function."

In a new research review paper, published in Biological Psychiatry, Lester—along with senior research fellow Julie M. Miwa and postdoctoral scholar Rahul Srinivasan—proposes that psychiatric medications may work in the same "inside-out" fashion—and that this process explains how it takes weeks rather than hours or days for patients to feel the full effect of such drugs.

"We've known what happens within minutes and hours after a person takes Prozac, for example," explains Lester. "The drug binds to serotonin uptake proteins on the cell surface, and prevents the neurotransmitter serotonin from being reabsorbed by the cell. That's why we call Prozac a selective serotonin reuptake inhibitor, or SSRI." While the new hypothesis preserves that idea, it also presents several arguments for the idea that the drugs also enter into the bodies of the nerve cells themselves.

There, the drugs would enter the endoplasmic reticulum much as nicotine does and then bind to the serotonin uptake proteins as they are being synthesized. The result, Lester hypothesizes, is a collection of events within neurons that his team calls "pharmacological chaperoning, matchmaking, escorting, and abduction." These actions—such as providing more stability for various proteins—could improve the function of those cells, leading to therapeutic effects in the patient. But those beneficial effects would occur only after the nerve cells have had time to make their intracellular changes and to transport those changes to the ends of axons and dendrites.

"These 'inside-out' hypotheses explain two previously mysterious actions," says Lester. "On the one hand, the ideas explain the long time required for the beneficial actions of SSRIs and antischizophrenic drugs. But on the other hand, newer, and very experimental, antidepressants act within hours. Binding within the endoplasmic reticulum of dendrites, rather than near the nucleus, might underlie those actions."

Lester and his colleagues first became interested in nicotine's effects on neural disorders because of a striking statistic: a long-term tobacco user has a roughly twofold lower chance of developing Parkinson's disease. Because there is no medical justification for using tobacco, Lester's group wanted more information about this inadvertent beneficial action of nicotine. They knew that stresses on the endoplasmic reticulum, if continued for years, could harm a cell. Earlier this year, they reported that nicotine's "inside-out" action appears to reduce endoplasmic reticulum stress, which could prevent or forestall the onset of Parkinson's disease.

Lester hopes to test the details of "inside-out" hypotheses for psychiatric medication. First steps would include investigating the extent to which psychiatric drugs enter cells and bind to their nascent receptors in the endoplasmic reticulum. The major challenge is to discover which other proteins and genes, in addition to the targets, participate in "matchmaking, escorting, and abduction."

"Present-day psychiatric drugs have a lot of room for improvement," says Lester. "Systematic research to produce better psychiatric drugs has been hampered by our ignorance of how they work. If the hypotheses are proven and the intermediate steps clarified, it may become possible to generate better medications."

 

Writer: 
Caltech Communications
Exclude from News Hub: 
No
News Type: 
Research News
