A New Tool for Secret Agents—And the Rest of Us

Caltech engineers make tiny, low-cost, terahertz imager chip

PASADENA, Calif.—A secret agent is racing against time. He knows a bomb is nearby. He rounds a corner, spots a pile of suspicious boxes in the alleyway, and pulls out his cell phone. As he scans it over the packages, their contents appear onscreen. In the nick of time, his handy smartphone application reveals an explosive device, and the agent saves the day. 

Sound far-fetched? In fact it is a real possibility, thanks to tiny inexpensive silicon microchips developed by a pair of electrical engineers at the California Institute of Technology (Caltech). The chips generate and radiate high-frequency electromagnetic waves, called terahertz (THz) waves, that fall into a largely untapped region of the electromagnetic spectrum—between microwaves and far-infrared radiation—and that can penetrate a host of materials without the ionizing damage of X-rays. 

When incorporated into handheld devices, the new microchips could enable a broad range of applications in fields ranging from homeland security to wireless communications to health care, and even touchless gaming. In the future, the technology may lead to noninvasive cancer diagnosis, among other applications.

"Using the same low-cost, integrated-circuit technology that's used to make the microchips found in our cell phones and notepads today, we have made a silicon chip that can operate at nearly 300 times their speed," says Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering at Caltech. "These chips will enable a new generation of extremely versatile sensors." 

Hajimiri and postdoctoral scholar Kaushik Sengupta (PhD '12) describe the work in the December issue of the IEEE Journal of Solid-State Circuits.

Researchers have long touted the potential of the terahertz frequency range, from 0.3 to 3 THz, for scanning and imaging. Such electromagnetic waves can easily penetrate packaging materials and render image details in high resolution, and can also detect the chemical fingerprints of pharmaceutical drugs, biological weapons, or illegal drugs or explosives. However, most existing terahertz systems involve bulky and expensive laser setups that sometimes require exceptionally low temperatures. The potential of terahertz imaging and scanning has gone untapped because of the lack of compact, low-cost technology that can operate in the frequency range.

To finally realize the promise of terahertz waves, Hajimiri and Sengupta used complementary metal-oxide semiconductor, or CMOS, technology, which is commonly used to make the microchips in everyday electronic devices, to design silicon chips with fully integrated functionalities that operate at terahertz frequencies—but fit on a fingertip.

"This extraordinary level of creativity, which has enabled imaging in the terahertz frequency range, is very much in line with Caltech's long tradition of innovation in the area of CMOS technology," says Ares Rosakis, chair of Caltech's Division of Engineering and Applied Science. "Caltech engineers, like Ali Hajimiri, truly work in an interdisciplinary way to push the boundaries of what is possible."

The new chips generate signals more than a thousand times stronger than those of existing approaches, and the terahertz beams they emit can be dynamically programmed to point in a specified direction, making them the world's first integrated terahertz scanning arrays.

Using the scanner, the researchers can reveal a razor blade hidden within a piece of plastic, for example, or determine the fat content of chicken tissue. "We are not just talking about a potential. We have actually demonstrated that this works," says Hajimiri. "The first time we saw the actual images, it took our breath away." 

Hajimiri and Sengupta had to overcome multiple hurdles to translate CMOS technology into workable terahertz chips—including the fact that silicon chips are simply not designed to operate at terahertz frequencies. In fact, every transistor has a frequency, known as the cut-off frequency, above which it fails to amplify a signal—and no standard transistors can amplify signals in the terahertz range. 

To work around the cut-off-frequency problem, the researchers harnessed the collective strength of many transistors operating in unison. If multiple elements are operated at the right times at the right frequencies, their power can be combined, boosting the strength of the collective signal. 

"We came up with a way of operating transistors above their cut-off frequencies," explains Sengupta. "We are about 40 or 50 percent above the cut-off frequencies, and yet we are able to generate a lot of power and detect it because of our novel methodologies."

"Traditionally, people have tried to make these technologies work at very high frequencies, with large elements producing the power. Think of these as elephants," says Hajimiri. "Nowadays we can make a very large number of transistors that individually are not very powerful, but when combined and working in unison, can do a lot more. If these elements are synchronized—like an army of ants—they can do everything that the elephant does and then some."

The researchers also figured out how to radiate, or transmit, the terahertz signal once it has been produced. At such high frequencies, a wire cannot be used, and traditional antennas at the microchip scale are inefficient. What they came up with instead was a way to turn the whole silicon chip into an antenna. Again, they went with a distributed approach, incorporating many small metal segments onto the chip that can all be operated at a certain time and strength to radiate the signal en masse.
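
The principle behind both the power combining and the beam steering can be captured in a few lines of arithmetic. The sketch below is a generic phased-array toy model in Python (the element count and spacing are assumptions, not the chip's actual layout), showing that many weak, synchronized sources add up to a strong beam whose direction is set by the phase programmed onto each element.

    import numpy as np

    # Toy model: N identical emitters spaced d apart, each radiating a weak
    # sinusoid. A linear phase gradient across the elements steers the beam.
    # Illustrative only -- not the actual Caltech chip design.
    c = 3e8                      # speed of light, m/s
    f = 0.28e12                  # 0.28 THz, as in the paper
    lam = c / f                  # wavelength (~1.07 mm)
    d = lam / 2                  # assumed element spacing
    N = 16                       # assumed number of radiating segments
    steer_deg = 20.0             # desired beam direction

    angles = np.radians(np.linspace(-90, 90, 721))
    k = 2 * np.pi / lam
    # Per-element phase shift that points the main lobe at steer_deg
    phase = -k * d * np.arange(N) * np.sin(np.radians(steer_deg))

    # Array factor: coherent sum of the element fields at each observation angle
    af = np.array([np.abs(np.sum(np.exp(1j * (k * d * np.arange(N) * np.sin(a) + phase))))
                   for a in angles])
    power = af ** 2

    peak_angle = np.degrees(angles[np.argmax(power)])
    print(f"main lobe near {peak_angle:.1f} deg, peak power ~ N^2 = {power.max():.0f}")

Double the number of synchronized elements and the power at the target angle roughly quadruples, which is the "army of ants" advantage Hajimiri describes.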

"We had to take a step back and ask, 'Can we do this in a different way?'" says Sengupta. "Our chips are an example of the kind of innovations that can be unearthed if we blur the partitions between traditional ways of thinking about integrated circuits, electromagnetics, antennae, and the applied sciences. It is a holistic solution."

 The paper is titled "A 0.28 THz Power-Generation and Beam-Steering Array in CMOS Based on Distributed Active Radiators." IBM helped with chip fabrication for this work.

Writer: 
Kimm Fesenmaier

Point of Light

Caltech engineers invent light-focusing device that may lead to applications in computing, communications, and imaging

PASADENA, Calif.—As technology advances, it tends to shrink. From cell phones to laptops—powered by increasingly faster and tinier processors—everything is getting thinner and sleeker. And now light beams are getting smaller, too.

Engineers at the California Institute of Technology (Caltech) have created a device that can focus light into a point just a few nanometers (billionths of a meter) across—an achievement they say may lead to next-generation applications in computing, communications, and imaging.

Because light can carry greater amounts of data more efficiently than electrical signals traveling through copper wires, today's technology is increasingly based on optics. The world is already connected by thousands of miles of optical-fiber cables that deliver email, images, and the latest video gone viral to your laptop.

As we all produce and consume more data, computers and communication networks must be able to handle the deluge of information. Focusing light into tinier spaces can squeeze more data through optical fibers and increase bandwidth. Moreover, by being able to control light at such small scales, optical devices can also be made more compact, requiring less energy to power them.

But focusing light to such minute scales is inherently difficult. Once you reach sizes smaller than the wavelength of light—a few hundred nanometers in the case of visible light—you reach what's called the diffraction limit, and it's physically impossible to focus the light any further.
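
(For reference, the rule being invoked here is the textbook Abbe diffraction limit; the numbers below are generic illustrations, not figures from this work.)

    d_{\min} \approx \frac{\lambda}{2\,\mathrm{NA}} \approx \frac{500\ \text{nm}}{2 \times 1} \approx 250\ \text{nm}

That is, for green light and a lens with a numerical aperture near one, the tightest conventional focus is a spot a few hundred nanometers across.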

But now the Caltech researchers, co-led by assistant professor of electrical engineering Hyuck Choo, have built a new kind of waveguide—a tunnellike device that channels light—that gets around this natural limit. The waveguide, which is described in a recent issue of the journal Nature Photonics, is made of amorphous silicon dioxide—which is similar to common glass—and is covered in a thin layer of gold. Just under two microns long, the device is a rectangular box that tapers to a point at one end.

As light is sent through the waveguide, the photons interact with electrons at the interface between the gold and the silicon dioxide. Those electrons oscillate, and the oscillations propagate along the device as waves—similarly to how vibrations of air molecules travel as sound waves. Because the electron oscillations are directly coupled with the light, they carry the same information and properties—and they therefore serve as a proxy for the light.

Instead of focusing the light alone—which is impossible due to the diffraction limit—the new device focuses these coupled electron oscillations, called surface plasmon polaritons (SPPs). The SPPs travel through the waveguide and are focused as they go through the pointy end.
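
A brief aside on why this helps, based on the textbook dispersion relation for a single metal–dielectric interface (the simplest case, not the paper's tapered metal–insulator–metal geometry):

    k_{\mathrm{SPP}} = \frac{\omega}{c}\sqrt{\frac{\varepsilon_m\,\varepsilon_d}{\varepsilon_m + \varepsilon_d}}

Because the metal's permittivity \varepsilon_m is negative, k_{\mathrm{SPP}} exceeds the free-space wavevector \omega/c, so the plasmon's effective wavelength is shorter than that of the light that launched it; narrowing the gap of a metal–insulator–metal waveguide shortens it further, which is what permits confinement below the free-space diffraction limit.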

Because the new device is built on a semiconductor chip with standard nanofabrication techniques, says Choo, the co-lead and co-corresponding author of the paper, it is easy to integrate with today's technology.

Previous on-chip nanofocusing devices were only able to focus light into a narrow line. They also were inefficient, typically focusing only a few percent of the incident photons, with the majority absorbed and scattered as they traveled through the devices.

With the new device, light can ultimately be focused in three dimensions, producing a point a few nanometers across while using half of the light that's sent through, Choo says. (Focusing the light into a slightly bigger spot, 14 by 80 nanometers in size, boosts the efficiency to 70 percent.) The key feature behind the device's focusing ability and efficiency, he says, is its unique design and shape.

"Our new device is based on fundamental research, but we hope it's a good building block for many potentially revolutionary engineering applications," says Myung-Ki Kim, a postdoctoral scholar and the other lead author of the paper.

For example, one application is to turn this nanofocusing device into an efficient, high-resolution biological-imaging instrument, Kim says. A biologist can dye specific molecules in a cell with fluorescent proteins that glow when struck by light. Using the new device, a scientist can focus light into the cell, causing the fluorescent proteins to shine. Because the device concentrates light into such a small point, it can create a high-resolution map of those dyed molecules. Light can also travel in the reverse direction through the nanofocuser: by collecting light through the narrow point, the device turns into a high-resolution microscope. 

The device can also lead to computer hard drives that hold more memory via heat-assisted magnetic recording. Normal hard drives consist of rows of tiny magnets whose north and south poles lie end to end. Data is recorded by applying a magnetic field to switch the polarity of the magnets.

Smaller magnets would allow more memory to be squeezed into a disc of a given size. But the polarities of smaller magnets made of current materials are unstable at room temperature, causing the magnetic poles to spontaneously flip—and data to be lost. Instead, more stable materials can be used—but those require heat to record data, because heat temporarily makes the magnets more susceptible to polarity reversals. Therefore, to write data, a laser is needed to heat the individual magnets, allowing a surrounding magnetic field to flip their polarities.

Today's technology, however, can't focus a laser into a beam that is narrow enough to individually heat such tiny magnets. Indeed, current lasers can only concentrate a beam to an area 300 nanometers wide, which would heat the target magnet as well as adjacent ones—possibly spoiling other recorded data.

Because the new device can focus light down to such small scales, it can heat smaller magnets individually, making it possible for hard drives to pack more magnets and therefore more memory. With current technology, discs can't hold more than 1 terabyte (1,000 gigabytes) per square inch. A nanofocusing device, Choo says, can bump that to 50 terabytes per square inch.

Then there's the myriad of data-transfer and communication applications, the researchers say. As computing becomes increasingly reliant on optics, devices that concentrate and control data-carrying light at the nanoscale will be essential—and ubiquitous, says Choo, who is a member of the Kavli Nanoscience Institute at Caltech. "Don't be surprised if you see a similar kind of device inside a computer you may someday buy."

The next step is to optimize the design and to begin building imaging instruments and sensors, Choo says. The device is versatile enough that relatively simple modifications could allow it to be used for imaging, computing, or communication.

The title of the Nature Photonics paper is "Nanofocusing in a metal-insulator-metal gap plasmon waveguide with a three-dimensional linear taper." In addition to Choo and Kim, the other authors are Matteo Staffaroni, Tae Joon Seok, Jeffrey Bokor, Ming C. Wu, and Eli Yablonovitch of UC Berkeley and Stefano Cabrini and P. James Schuck of the Molecular Foundry at Lawrence Berkeley National Lab. The research was funded by the Defense Advanced Research Projects Agency (DARPA) Science and Technology Surface-Enhanced Raman Spectroscopy program, the Department of Energy, and the Division of Engineering and Applied Science at Caltech.

This video shows the final fabrication step of the nanofocusing device. A stream of high-energy gallium ions blasts away unwanted layers of gold and silicon dioxide to carve out the shape of the device.

Writer: 
Marcus Woo

3-D Dentistry

A Caltech imaging innovation will ease your trip to the dentist and may soon energize home entertainment systems too.

Although dentistry has come a long way since the time when decayed teeth were extracted by brute force, most dentists are still using the clumsy, time-consuming, and imperfect impression method when making crowns or bridges. But that process could soon go the way of general anesthesia in family dentistry thanks to a 3-D imaging device developed by Mory Gharib, Caltech vice provost and Hans W. Liepmann Professor of Aeronautics and professor of bioinspired engineering.

By the mid-2000s, complex dental imaging machines—also called dental scanners—began appearing on the market. The devices take pictures of teeth that can be used to create crowns and bridges via computer-aided design/computer-aided manufacturing (CAD/CAM) techniques, giving the patient a new tooth the same day. But efficiency doesn't come without cost—and at more than $100,000 for an entire system, few dentists can afford to invest in the equipment. Within that challenge, Gharib saw an opportunity.

An expert in biomedical engineering, Gharib had built a 3-D microscope in 2006 to help him design better artificial heart valves and other devices for medical applications. Since it's not very practical to view someone's mouth through a microscope, he thought that he could design and build an affordable and portable 3-D camera that would do the same job as the expensive dental scanners.

The system he came up with is surprisingly simple. The camera, which fits into a handheld device, has three apertures that take a picture of the tooth at the same time but from different angles. The three images are then blended together using a computer algorithm to construct a 3-D image. In 2009, Gharib formed a company called Arges Imaging to commercialize the product; last year, Arges was acquired by a multinational dental-technology manufacturer that has been testing the camera with dentists.
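
The geometric idea behind recovering depth from multiple apertures is ordinary triangulation. The toy Python sketch below uses generic two-view stereo math with invented numbers (it is not Arges Imaging's algorithm); the shift of a feature between views encodes its distance, and a third aperture simply adds redundant baselines that make the reconstruction more robust.

    # Toy triangulation: two apertures separated by a baseline image the same
    # feature; the pixel shift (disparity) between the views gives depth.
    # Purely illustrative -- not the actual Arges/Apertura reconstruction code.
    focal_length_mm = 20.0     # assumed effective focal length
    baseline_mm = 5.0          # assumed spacing between two apertures
    pixel_pitch_mm = 0.005     # assumed sensor pixel size (5 microns)

    def depth_from_disparity(disparity_px):
        """Distance to the feature, from the classic z = f * B / d relation."""
        disparity_mm = disparity_px * pixel_pitch_mm
        return focal_length_mm * baseline_mm / disparity_mm

    for d_px in (1000, 1500, 2000):
        print(f"disparity {d_px:4d} px  ->  depth {depth_from_disparity(d_px):5.1f} mm")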

"Professor Gharib is as brilliant a scientist as he is an engineer and inventor," says Ares Rosakis, chair of Caltech's division of engineering and applied science. "I think that's what we have to do to look at humanity's big problems: we have to be ready to act as pure scientists when we observe and discover as well as act as practical engineers when we invent and apply. This continuous interplay happens at Caltech better than at other institutions."

Indeed, Gharib did not stop with dental applications for his 3-D scanner, but quickly realized that the technology had promise in other industries. For example, there are many potential applications in consumer electronics and other products, he says. While motion-sensing devices with facial and voice-recognition capabilities, like Microsoft's Kinect for the Xbox 360, allow players to feel like they are in the game—running, jumping, and flying over obstacles—"the gestures required are extreme," says Gharib. A more sophisticated imager could make players really feel like they are part of the action.

In robotic and reconstructive surgery, a 3-D imager could provide surgeons with a tool to help them achieve better accuracy and precision. "What if I could take a 3-D picture of your head and have a machine sculpt it into a bust?" says Gharib. "With CAD/CAM, you can take a computer design and turn that into a sculpture, but you need someone who is expert at programming. What if a camera could take a photo and give you 3-D perspective? We have expensive 3-D motion-picture cameras now and 3-D displays, but we don't have much media for them," says Gharib, who earlier this year formed a new company called Apertura Imaging to try to improve the 3-D imaging technology for these nondental applications. "Once we build this new camera, people will come up with all sorts of applications," he says.

Writer: 
Michael Rogers

One Metal Scoop, Slightly Used

It's a science fiction staple: human astronauts visiting an alien world find a derelict spacecraft sent there . . . by themselves. It has actually happened—once—in real life. On November 19, 1969, the Apollo 12 Lunar Excursion Module (LEM) touched down on the vast plains of the Oceanus Procellarum, or Ocean of Storms, less than 200 yards from the Surveyor 3 probe, which Caltech's Jet Propulsion Laboratory had sent to the moon two and a half years earlier. On November 20, after a day spent collecting rock samples, astronauts Charles Conrad and Alan Bean retrieved some parts from Surveyor as well, bringing them back to NASA engineers eager to learn what long-term exposure would do to electronic cabling and other delicate components. Among their souvenirs was the metal scoop that Caltech soils engineer Ronald Scott had used to verify that a moon landing could be made in the first place.

In 1961, when President Kennedy announced his intention to put a man on the moon, almost the only thing we knew for sure about the lunar surface was that it wasn't made of green cheese. It was entirely reasonable to suppose that the moon's dark "seas" were in fact oceans of dust pulverized by eons of relentless meteor bombardment. Was the dust a few inches deep? Or was it bottomless, waiting to swallow up and drown the man with the temerity to take one small step on it?

JPL sent a series of robotic explorers to find out. The preliminary designs for the Surveyors, drawn up in 1960, included an onboard soil analyzer. Weight constraints led to its removal a year or so later, but by then, the soil scoop—roughly the size and shape of a clenched fist, with a one-by-two-inch trapdoor on its underside—and its retractable arm had already been built.

In 1963, Scott, then an associate professor of civil engineering at Caltech, proposed that the scoop and arm be reinstated as a soil-mechanics experiment. By outfitting the arm with strain gauges, he could measure the force exerted when the scoop's flat bottom was pressed into the soil and calculate the load it could bear. Simply landing the Surveyor was insufficient evidence of the soil's stability; JPL's robotic emissary weighed a measly 650 pounds, but the LEM would be more than 16 tons of hardware, fuel, and humanity.

Problem was, with the soil lab gone, the arm's mounting hardware had been eliminated as well. But Scott was persistent, and in the summer of 1966, JPL engineers found a work-around: the arm could replace the downward-looking approach camera. However, the arm's connections would have to be rebuilt to fit the camera mount, and Surveyor 3's April 1967 launch date was coming up fast. There wasn't enough time to build the strain gauges as well, but Scott had a Plan B—he would measure the current drawn by the arm's motors, and thus derive the force they were exerting.
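
The physics behind Plan B is the standard direct-current motor relation, stated here generically and with no claim to Surveyor's actual motor constants: delivered torque is roughly proportional to the current drawn,

    \tau \approx k_t I \qquad\Longrightarrow\qquad F \approx \frac{k_t I}{r_{\text{eff}}}

where k_t is the motor's torque constant and r_{\text{eff}} is the effective lever arm through which the scoop pressed on the soil, so logging the current amounted to logging the force.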

Surveyor 3's time on the moon got off to a bumpy start. Literally. The small thrusters used to slow the final descent failed to shut off, and the lander bounded skyward again a moment after touchdown, soaring to a height of 35 feet and becoming the first spacecraft ever to depart (albeit unintentionally) from the moon. The thrusters were finally shut down from Earth on the second bounce, and Surveyor came to rest on the gently sloping wall of the small, shallow crater that now bears its name.

As principal investigator for the Soil Mechanics Surface Sampler—as it was known in NASA-speak—Scott shared a desk in the Space Flight Operations Facility (JPL's version of Mission Control) with his JPL counterpart, Floyd Roberson. The arm responded to Roberson's commands, but Plan B quickly went awry. "The condition of the spacecraft telemetry prevented making measurements of the motor currents," as Scott explained in the June 1967 issue of Caltech's Engineering & Science magazine. Now their only source of data was Surveyor's TV camera, which fortunately was mounted on the same side of the lander's triangular body. 

On to Plan C: They would let the scoop rest on the surface and then drive it into the soil until the motors stalled to see how deeply it would penetrate. As each new frame from the camera appeared on the control-room monitors, someone snapped a Polaroid. After measuring the scoop's movement between frames, Roberson would write down the commands for the scoop's next move—backwards, as the camera took pictures through a swiveling mirror—and hand them to the man at the next console, who double-checked them and passed them to the controller, who typed them in. The humans worked faster than the camera did, so they were essentially moving the arm in real time.

Scott and Roberson poked at the soil, dug trenches, tried to pick up pebbles, and generally carried on like kids at the beach for the rest of the day—the lunar day, that is, or two weeks on Earth. "The lunar soil is fine-grained material . . . similar to a dry terrestrial sand," Scott wrote in E&S, and it got denser as they dug. "The deepest trench was approximately seven inches, and the material at that depth was relatively firm compared to the surface." And with that, all systems were "go" for Apollo, at least as far as the soil was concerned.

When the sun finally set on Surveyor on May 3, Scott wrote in a later E&S article in 1970, "For no particular reason that I can recall, we tidily raised the surface sampler as high as it would go and moved it to the extreme right." Surveyor quietly froze to death in the cold lunar night, and Scott went back to his day job—which included being a member of the Apollo Soil Mechanics Team.

Scott was at Mission Control in Houston on July 20, 1969, when Neil Armstrong landed the Eagle in the ultimate check of Scott's math. He was there again on November 20, listening in as Apollo 12's Charles Conrad and Alan Bean "made their way to Surveyor and began poking around." He was astonished, he recalled in E&S, when "Conrad remarked casually that he had got the scoop." This was not in the plan: the astronauts' wire cutters were no match for the steel tape that retracted the arm. But, Conrad later told Scott, he'd put the cutters to the tape and given it an experimental twist. "To his surprise," Scott wrote, "the tape parted at a weld. All he needed to do to free the scoop was to snip through three aluminum supporting arms and some wires behind the first joint," which was possible only because "we had fortuitously left the sampler in its most elevated position. Astronauts in space suits cannot at present bend down."

Several weeks later, Scott returned to Houston, this time to the Lunar Receiving Laboratory to witness the opening of the "murky Teflon" double bag in which the scoop had been sealed. He and Roberson had emptied the scoop more than two years earlier, as part of their tidying up at the end of their lunar digging, but some moondust and a little grit had clung to it and made it back to Earth anyway. "If I had known I was going to see it again," he wrote, "I would have left the scoop completely packed with lunar soil."

Scott went on to design the soil scoop for NASA's Viking landers, which searched for life on Mars in the 1970s. He died in 2005, but Associate Professor of Civil and Mechanical Engineering José Andrade has picked up the shovel, as it were. Andrade is on an advisory panel for JPL's InSight mission to Mars, slated to launch in 2016. InSight's instruments include a heat-flow probe that will hammer itself some 10 to 15 feet into the martian soil. Scott would be pleased.

Writer: 
Douglas Smith

Seeing the World in a Grain of Sand

Watson Lecture Preview

José Andrade has got the dirt on dirt. An associate professor of civil and mechanical engineering at Caltech, Andrade will discuss how the actions of a few grains of sand can affect landslides, earthquakes, and even Mars rovers. He will be speaking at 8:00 p.m. on Wednesday, November 28, 2012, in Caltech's Beckman Auditorium. Admission is free.

 

Q: What do you do?

A: I study the behavior of granular materials. These can be many things: granular materials are the second-most-manipulated materials on Earth, after water. Some of the best examples are soils, like sand; rocks; and construction materials, like concrete. I make computational models that try to capture the behavior of these materials—for instance, to simulate landslides, or the beginning of an earthquake, or the interaction of a rover wheel and martian soil.

 

Q: Why is this cool?

A: It's cool because these materials are very complex, although they look very simple and innocent on a small scale. When you look at sand, the grains interacting with each other seem very simple. But their behavior as a bulk material is very complex. So even though at a fundamental level they may be governed by an innocent-looking equation—let's say F = ma, or force equals mass times acceleration—once they get together in a landslide it's not that simple. This dichotomy is really attractive to me, and leads to some really cool work in terms of modeling. You have millions of grains in an avalanche, so we want to keep the F = ma for each grain but somehow do the calculations at a coarser scale. The challenge is to capture the essence of the physics without the complexity of applying it to each grain in order to devise models that work at the landslide level. 
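
As a cartoon of the bookkeeping Andrade describes, keeping F = ma for every grain, the Python sketch below drops a one-dimensional column of grains onto a floor and lets spring-like contact forces stack them up. It is purely illustrative, with made-up stiffness, sizes, and damping; research codes track millions of three-dimensional grains with far more realistic contact laws.

    import numpy as np

    # Toy 1-D discrete-element sketch: a column of grains dropped onto a floor,
    # interacting through stiff repulsive "springs" wherever they overlap.
    n, radius, mass = 10, 0.01, 1e-3        # assumed grain count, radius (m), mass (kg)
    k_contact, g, dt = 1e4, 9.81, 1e-5      # contact stiffness, gravity, time step

    x = np.linspace(0.011, 0.2, n)          # initial heights of grain centers (m)
    v = np.zeros(n)

    for step in range(100000):              # ~1 s of simulated time
        f = -mass * g * np.ones(n)                      # gravity on every grain
        floor_overlap = np.maximum(radius - x, 0.0)
        f += k_contact * floor_overlap                  # floor pushes the lowest grain up
        gap = x[1:] - x[:-1]
        overlap = np.maximum(2 * radius - gap, 0.0)     # grain-grain contacts
        f[:-1] -= k_contact * overlap                   # repulsion pushes the pair apart
        f[1:] += k_contact * overlap
        v = (v + dt * f / mass) * 0.9999                # F = ma for each grain, light damping
        x += dt * v

    # After settling, the centers sit roughly one grain diameter apart above the floor.
    print("grain center heights (m):", np.round(x, 4))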

 

Q: How did you get into this line of work?

A: I took apart a lot of stuff as a kid, to the great displeasure of my parents. Doorknobs were a big specialty for me. I was fairly successful reassembling them, but I also used to take apart clocks. They were amazing devices, but most of the time I couldn't put them back together. And then I got into radios, and then TVs. I usually failed to put them back together to their pristine state, and that's where I used to get in trouble. One of my big hobbies was to open cassettes—remember them?—and make mix tapes. I was fairly successful cutting up and reassembling tapes and being able to play them.

In college I started out as a civil engineer, a structural engineer, but I developed a passion for mechanics, which is the relationship between forces and deformations. And in grad school I discovered my passion for geologic materials, for granular materials, and for modeling. But it was still geared toward engineering—I just went from thinking about what's on top to looking at the things underneath. And that's a cool, and at the same time a sad part about what we do: when we're doing a good job, nothing interesting happens. As one of my colleagues says, our best work is underneath the building, where nobody sees it. It's only when things fall apart . . .

It wasn't until I came to Caltech that I started to move toward the science side—landslides, planetary science, rovers crawling around. I never thought 10 years ago I'd be doing work on Mars. Not at all.

 

Named for the late Caltech professor Earnest C. Watson, who founded the series in 1922, the Watson Lectures present Caltech and JPL researchers describing their work to the public. Many past Watson Lectures are available online at Caltech's iTunes U site.

Writer: 
Douglas Smith

Knowing When to Fold 'Em

Caltech engineers and an origami expert are joining forces to build a retinal implant to treat blindness

Electrical engineer Azita Emami is an expert in the 21st-century technology of analog and digital circuits for computers, sensors, and other applications, so when she came to Caltech in 2007, she never imagined that she would be incorporating into her research an art form that originated centuries ago. But origami—the Japanese art of paper folding—could play a critical role in her project to design an artificial retina, which may one day help thousands of blind and visually impaired people regain their vision.

Retinal implants are designed to bypass the photoreceptors in the retina that have been damaged by diseases such as retinitis pigmentosa (RP) and age-related macular degeneration (AMD). About four years before Emami arrived at the Institute, Caltech investigators began working on a retinal implant through USC's Biomimetic Microelectronic Systems–Engineering Research Center, funded by the National Science Foundation (NSF). The basic idea is to use a miniature camera mounted on a pair of eyeglasses to capture images, then process the images and send the digital information wirelessly to an implantable microchip. The microchip generates electrical currents for stimulation, and a tiny cable carries the currents to an electrode array attached to the patient's retina. The electrodes stimulate cells in the eye, which transmit signals through the optic nerve to the part of the brain that creates a picture.
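
That signal chain can be sketched in a few lines of code. The Python toy below uses assumed grid dimensions and current limits and is not the actual implant firmware; it simply shows how a camera frame is reduced to the electrode grid and how brightness maps to stimulation current.

    import numpy as np

    # Simplified retinal-prosthesis pipeline: camera frame -> downsample to the
    # electrode grid -> map brightness to per-electrode stimulation current.
    GRID = (16, 32)                 # 512 electrodes in an assumed 16 x 32 layout
    MAX_CURRENT_UA = 100.0          # assumed per-electrode current ceiling (microamps)

    def frame_to_stimulation(frame: np.ndarray) -> np.ndarray:
        """Convert a grayscale camera frame (H x W, values 0-255) to electrode currents."""
        h, w = frame.shape
        gh, gw = GRID
        # Average the pixels that fall on each electrode's patch of the image.
        patches = frame[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
        brightness = patches.mean(axis=(1, 3)) / 255.0
        return brightness * MAX_CURRENT_UA  # brighter region -> stronger stimulation

    camera_frame = np.random.randint(0, 256, size=(480, 640)).astype(float)
    currents = frame_to_stimulation(camera_frame)
    print(currents.shape, "electrodes, max current %.1f uA" % currents.max())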

The center's director, Mark Humayun, an ophthalmologist at USC's Doheny Eye Institute and a pioneer in artificial-retina surgery, has implanted such a device in several completely blind patients suffering from end-stage RP, restoring some of their vision. The 60-electrode array allows these patients to see light as well as low-resolution representations of objects and enlarged letters.

Hundreds of thousands of people who suffer from AMD, however, are able to see at least that much on their own and thus would derive no benefit from the array. To create an artificial retina that could help these people, Humayun needed a better chip and an array that had more electrodes to stimulate more cells in the eye. At the suggestion of Caltech professor of electrical engineering and mechanical engineering Yu-Chong Tai, who had worked with Humayun on packaging and integration of the retinal implants, Emami, an assistant professor of electrical engineering and an expert in building ultralow-power circuits, joined the team to focus on the next generation of retinal implants.

Emami's lab recently developed just such a chip, which supports 512 electrodes and is extendable to 1024 electrodes if two chips are used. The chip has wireless capabilities for power and data telemetry and can fit inside the eyeball, eliminating the need for the infection-prone cable used in the earlier system. The design also features many novel techniques for reducing size and power. Reducing the power consumption is critical for wireless power delivery and to avoid tissue damage due to the heat generated by the chip. Humayun will soon test the chip on subjects to see exactly how much of their vision is restored.

But even that electrode-rich array won't solve two of the biggest challenges of the technology: creating a device that requires only a minimally invasive incision to implant, and one that also conforms to the shape of the eye. The original electrode array was mounted on a relatively flat substrate that required a large surgical incision for implantation. It could only be tacked onto one spot on the retina to avoid damaging the neurons—which meant that it pulled away at the loose end. And that also meant that some of the electrodes would be completely ineffective while others needed a greater current from the chip to properly stimulate retinal cells, leading to high power consumption. 

Emami, Humayun, and Tai realized that a flexible substrate that could be folded up, origami-style, before implantation and then opened up to a curved shape once inside would need only a minimally invasive incision to be slid into place. Instead of one large chip, many smaller chips distributed over the substrate and between the folds would remove the need for the cable and lead to better reliability and lower cost, Emami says. With a system that conformed to the curve of the eye, the location of the chips and the electrodes could be optimized through the design of the origami structure, precisely matching the parts of the eye to be stimulated.

To create such a design, Emami recruited Caltech alum Robert Lang (BS '82, PhD '86), one of the world's leading origami experts. Lang, who has practiced origami for more than 40 years, is known for developing mathematical equations to enable the construction of highly complex origami designs. Over the summer, Emami received an NSF grant to build the first prototype of an origami implant that will fit inside the eye and match the contour of the retina. 

"I'm used to working with paper that starts out as no smaller than two inches square," Lang says. This new creation, however, will be less than one quarter that size, will be made out of plastic, and will have to deploy perfectly after surgical implantation.

Assisting Lang in the design is Sergio Pellegrino, the Joyce and Kent Kresa Professor of Aeronautics and professor of civil engineering and a senior research scientist at JPL. Pellegrino is an expert at developing origami-like structures, but on a giant scale: he devises lightweight expandable structures for use on spacecraft—such as foldable booms that serve as antennas, and deployable masts.

The ability to translate these sorts of very large designs to something that can be unobtrusively inserted and then unfolded in the eye "is a matter of scaling, and that's an engineering principle. It is what engineers do," says Ares Rosakis, chair of the Division of Engineering and Applied Science. "The difference is that at Caltech we also invent and scale our own inventions: we invent something for X and we use it for Y. So someone like Pellegrino can invent something for space and then have fantastic successes by scaling it for use in medical engineering."

While Pellegrino and Lang work on the origami, Emami will continue working on the chip. By the end of next year they hope to show in animal models that an origami substrate can be inserted inside the eye, unfolded, and held in place by either retinal tacks or a less invasive method, also using origami. Soon after, they hope to have a foldable artificial retina that can be tested on a patient.

Once perfected, Emami thinks that the new retinal implant technology could be applied to other medical applications, such as neural implants that are being developed to help paralyzed people regain movement. "Our origami approach is fundamentally different and can lead to a new area in engineering with a great impact for neuroscience and biomedical devices," Emami says. "We may be able to benefit many people."

Writer: 
Mike Rogers

Nano Insights Could Lead to Improved Nuclear Reactors

Caltech researchers examine self-healing abilities of some materials

PASADENA, Calif.—In order to build the next generation of nuclear reactors, materials scientists are trying to unlock the secrets of certain materials that are radiation-damage tolerant. Now researchers at the California Institute of Technology (Caltech) have brought new understanding to one of those secrets—how the interfaces between two carefully selected metals can absorb, or heal, radiation damage.

"When it comes to selecting proper structural materials for advanced nuclear reactors, it is crucial that we understand radiation damage and its effects on materials properties. And we need to study these effects on isolated small-scale features," says Julia R. Greer, an assistant professor of materials science and mechanics at Caltech. With that in mind, Greer and colleagues from Caltech, Sandia National Laboratories, UC Berkeley, and Los Alamos National Laboratory have taken a closer look at radiation-induced damage, zooming in all the way to the nanoscale—where lengths are measured in billionths of meters. Their results appear online in the journals Advanced Functional Materials and Small.

During nuclear irradiation, energetic particles like neutrons and ions displace atoms from their regular lattice sites within the metals that make up a reactor, setting off cascades of collisions that ultimately damage materials such as steel. One of the byproducts of this process is the formation of helium bubbles. Since helium does not dissolve within solid materials, it forms pressurized gas bubbles that can coalesce, making the material porous, brittle, and therefore susceptible to breakage.  

Some nano-engineered materials are able to resist such damage and may, for example, prevent helium bubbles from coalescing into larger voids. For instance, some metallic nanolaminates—materials made up of extremely thin alternating layers of different metals—are able to absorb various types of radiation-induced defects at the interfaces between the layers because of the mismatch that exists between their crystal structures.

"People have an idea, from computations, of what the interfaces as a whole may be doing, and they know from experiments what their combined global effect is. What they don't know is what exactly one individual interface is doing and what specific role the nanoscale dimensions play," says Greer. "And that's what we were able to investigate."

Peri Landau and Guo Qiang, both postdoctoral scholars in Greer's lab at the time of this study, used a chemical procedure called electroplating to either grow miniature pillars of pure copper or pillars containing exactly one interface—in which an iron crystal sits atop a copper crystal. Then, working with partners at Sandia and Los Alamos, in order to replicate the effect of helium irradiation, they implanted those nanopillars with helium ions, both directly at the interface and, in separate experiments, throughout the pillar.

The researchers then used a one-of-a-kind nanomechanical testing instrument, called the SEMentor, which is located in the subbasement of the W. M. Keck Engineering Laboratories building at Caltech, to both compress the tiny pillars and pull on them as a way to learn about the mechanical properties of the pillars—how their length changed when a certain stress was applied, and where they broke, for example. 

"These experiments are very, very delicate," Landau says. "If you think about it, each one of the pillars—which are only 100 nanometers wide and about 700 nanometers long—is a thousand times thinner than a single strand of hair. We can only see them with high-resolution microscopes."

The team found that once they inserted a small amount of helium into a pillar at the interface between the iron and copper crystals, the pillar's strength increased by more than 60 percent compared to a pillar without helium. That much was expected, Landau explains, because "irradiation hardening is a well-known phenomenon in bulk materials." However, she notes, such hardening is typically linked with embrittlement, "and we do not want materials to be brittle."

Surprisingly, the researchers found that in their nanopillars, the increase in strength did not come along with embrittlement, either when the helium was implanted at the interface, or when it was distributed more broadly. Indeed, Greer and her team found, the material was able to maintain its ductility because the interface itself was able to deform gradually under stress.

This means that in a metallic nanolaminate material, small helium bubbles are able to migrate to an interface, which is never more than a few tens of nanometers away, essentially healing the material. "What we're showing is that it doesn't matter if the bubble is within the interface or uniformly distributed—the pillars don't ever fail in a catastrophic, abrupt fashion," Greer says. She notes that the implanted helium bubbles—which are described in the Advanced Functional Materials paper—were one to two nanometers in diameter; in future studies, the group will repeat the experiment with larger bubbles at higher temperatures in order to represent additional conditions related to radiation damage.

In the Small paper, the researchers showed that even nanopillars made entirely of copper, with no layering of metals, exhibited irradiation-induced hardening. That stands in stark contrast to the results from previous work by other researchers on proton-irradiated copper nanopillars, which exhibited the same strengths as those that had not been irradiated. Greer says that this points to the need to evaluate different types of irradiation-induced defects at the nanoscale, because they may not all have the same effects on materials.

While no one is likely to be building nuclear reactors out of nanopillars anytime soon, Greer argues that it is important to understand how individual interfaces and nanostructures behave. "This work is basically teaching us what gives materials the ability to heal radiation damage—what tolerances they have and how to design them," she says. That information can be incorporated into future models of material behavior that can help with the design of new materials.

Along with Greer, Landau, and Qiang, Khalid Hattar of Sandia National Laboratories is also a coauthor on the paper "The Effect of He Implantation on the Tensile Properties and Microstructure of Cu/Fe Nano-bicrystals," which appears online in Advanced Functional Materials. Peter Hosemann of UC Berkeley and Yongqiang Wang of Los Alamos National Laboratory are coauthors on the paper "Helium Implantation Effects on the Compressive Response of Cu Nanopillars," which appears online in the journal Small. The work was supported by the U.S. Department of Energy and carried out, in part, in the Kavli Nanoscience Institute at Caltech.

Writer: 
Kimm Fesenmaier

Diving Into the Unknown: An Interview with Andrei Faraon

This fall, Andrei Faraon (BS '04) returned to his alma mater to take a position as an assistant professor of applied physics and materials science. Faraon, originally from Falticeni, Romania, came to the United States in 2001 to study at Caltech and earned his BS in physics in 2004, then moved to Stanford University, where he received a master's degree in electrical engineering and a PhD in applied physics. Faraon recently answered some questions about his work and returning to Caltech. 

First of all, how does it feel to be back at Caltech?

It feels great—something like a homecoming.

What is the focus of your research?

I build devices that are based on the fundamentals of light–matter interaction. What we're trying to do is manipulate single quantum systems in solids—systems like single atoms or single quantum dots—using light. Light is great for this purpose because it allows us to address these systems without destroying their fragile quantum states, and because it can easily interconnect quantum systems over large distances.

This work has applications in quantum and classical information technologies, and also for the development of sensors with very high spatial resolution and very high sensitivity. These sensors are used to probe other quantum systems and also have applications in biotechnology.

Does this have anything to do with the development of quantum computers?

The quantum computer is something of a Holy Grail, but down the road there is hope that other applications, like quantum repeaters that could enable very secure communications, will come out of these new technologies. In general, the field is just trying to get an understanding of how to better control and manipulate quantum systems in order to develop devices based on these quantum concepts.

What has been your most recent development in this line of work?

In my postdoctoral work, I was able to combine some impurities in diamond, called nitrogen-vacancy centers, with optical structures known as nanoscale optical resonators. Nitrogen-vacancy centers are defects in the atomic lattice that makes up diamond in which nitrogen atoms have basically replaced carbon atoms. They are interesting because they have very good quantum-coherence properties—meaning that you can actually store information in the quantum state of the impurities and keep it preserved for a relatively long time. These impurities can be used for the sensing of electromagnetic fields with high resolution, or to store and process quantum information. By combining the impurities with optical structures we are actually able to better control and modify their properties.

In general, light interacts weakly with these impurities. By embedding the nitrogen-vacancy centers in resonators, we can create a stronger interaction between them and the light field. The resonators can be further integrated in an on-chip optical network. Since the impurities are coupled to the resonators, we actually interconnect multiple nitrogen-vacancy centers on a chip, thus creating a quantum network that forms the basis of future devices for quantum information processing.
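
A note on the underlying figure of merit: the strength of that light-matter coupling is usually quantified by the textbook Purcell factor,

    F_P = \frac{3}{4\pi^2}\left(\frac{\lambda}{n}\right)^{3}\frac{Q}{V}

where Q is the resonator's quality factor and V its mode volume. Shrinking V while keeping Q high is exactly what nanoscale resonators buy. (This is the general cavity-QED result, not a number from Faraon's devices.)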

What do you find most exciting about your research?

It is really at the forefront of experimental research and it allows me to really dive into the unknown. I love the fact that we often discover unexpected things and that there is also great potential that this work will result in revolutionary technologies.

What brought you back to Caltech?

Caltech provides the best environment in which to do my research in terms of facilities, the quality of students, and the faculty that I can interact with. Caltech has a very strong effort both in photonics, which is my field of study, and also in quantum information. I think that the people are actually the greatest resource that Caltech has. 

Writer: 
Kimm Fesenmaier

Caltech Mourns the Passing of David G. Goodwin

1957–2012

David G. Goodwin, professor of mechanical engineering and applied physics, emeritus, passed away at his home in Pasadena on Sunday, November 11, after a five-year battle with brain cancer and a struggle with Parkinson's disease that began in 1998. He was 55 years old. Born on October 15, 1957, Goodwin grew up in Rancho Cordova, a suburb of Sacramento, near the Aerojet plant where his father worked as an engineer. He came to Caltech in 1988 as an assistant professor of mechanical engineering, was promoted to associate professor of mechanical engineering and applied physics in 1993 and professor in 2000, and retired in 2011.

Goodwin was best known for developing ways to grow thin films of high-purity diamond. Diamond films—transparent, scratch-resistant, and efficient dissipaters of the heat generated by high-powered computer chips—are now routinely used to protect electronic and optical components, and diamond-coated drill bits can be found at any hardware store.

But the diamond work was just one facet of Goodwin's research. According to longtime collaborator David Boyd, once a postdoc of Goodwin's and now a Caltech staff member, "Dave's real passion was modeling. He felt that he never fully understood something unless he could model it. He had a keen insight into how things work. He would proffer an oftentimes very simple explanation that captured the essential physics, and was able to see how that applied in engineering terms. It's really unusual for an engineer to know that much physics, or a physicist to have that much engineering."

The Mideast oil crises of Goodwin's teenage years sparked a lifelong interest in energy issues, and much of his work revolved around the intricacies of combustion. He fluently translated the complex interplay of heat flow and atomic behavior within swirling mixtures of turbulent gases into detailed mathematical models that accurately predicted how real-world, industrial-scale chemical processors would operate.

After earning his BS in engineering from Harvey Mudd College in 1979, Goodwin joined the Stanford University High Temperature Gasdynamics Laboratory, which was working on an ultraefficient method for generating electricity by burning coal at very high temperatures to create an electrically charged plasma. The process proved too expensive to be practical, but the mastery Goodwin acquired of chemical kinetics—the mathematical descriptions of how reactions proceed—set the course of his career. He earned his MS and PhD in 1980 and 1986 respectively, both in mechanical engineering.

Goodwin arrived at Caltech amid an explosion of interest in growing diamond coatings via chemical vapor deposition. The process is high-tech, but the basic idea is simple. Playing a methane flame over an object deposits carbon atoms on its surface, and under the right conditions these atoms will organize themselves into a sheen of high-purity diamond instead of the usual smudge of soot. "People had found a process that worked," says Boyd, "but really did not know how or why it did." Goodwin's models explained it all, and the set of papers he published beginning in 1990 "really turned artificial diamond into an engineering material," says Harry Atwater, the Hughes Professor and professor of applied physics and materials science, and director of the Resnick Sustainability Institute.

But far beyond that, "Dave was one of these people whose impact you measure by the codes he wrote for others to use," Atwater says. Goodwin began writing code in earnest in the 1990s, when he led the Virtual Integrated Prototyping project for the Defense Advanced Research Projects Agency. This sprawling endeavor, on which Atwater was a collaborator, created a set of simulations that began at the atomic level and went up to encompass an entire chemical reactor in order to figure out how to grow superconducting metal oxides and other thin films with demanding atomic arrangements. Atwater and Goodwin then built the reactor, which is still in use at Caltech and whose design has been widely copied.

Along the way, Goodwin wrote an extensive overhaul of CHEMKIN (for "chemical kinetics"), a collection of programs that had been developed at Sandia National Laboratories in the 1970s and had quickly gone into worldwide use. He then wrote—from scratch—his own software toolkit for modeling basic thermodynamics and chemical kinetics, which he dubbed Cantera. Breaking with the usual practice of creating a convoluted descriptor to yield a clever acronym, Cantera doesn't stand for anything, says Professor of Mechanical Engineering Tim Colonius. "He just wanted to give it a nice soothing, relaxing name, like pharmaceutical companies do. That was typical of his sense of humor." The open-source code is available pro bono and has been downloaded 120,000 times since 2004, according to Sandia's Harry Moffat, one of Cantera's current developers and the manager of the website. Says Moffat, "We have ventured into areas that CHEMKIN cannot go, including liquid-solid interactions and electrochemical applications such as batteries."
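
To give a flavor of the kind of calculation Cantera handles, here is a minimal example using its Python interface and the bundled GRI-Mech 3.0 natural-gas mechanism; the file name and some details reflect recent releases rather than Goodwin's original versions.

    import cantera as ct

    # Adiabatic flame temperature of a stoichiometric methane-air mixture,
    # computed with Cantera's bundled GRI-Mech 3.0 mechanism.
    gas = ct.Solution('gri30.yaml')              # older releases used 'gri30.cti'
    gas.TPX = 300.0, ct.one_atm, 'CH4:1, O2:2, N2:7.52'
    gas.equilibrate('HP')                        # hold enthalpy and pressure fixed
    print(f"adiabatic flame temperature: {gas.T:.0f} K")
    print(f"equilibrium CO2 mole fraction: {gas['CO2'].X[0]:.3f}")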

Goodwin also found time to court Frances Teng, an obstetrician-gynecologist at nearby Huntington Hospital, whose own parents had gotten married while postdocs at Caltech in the 1960s. Dave and Frances were married at the Athenaeum, Caltech's faculty club, in April 1993.

Goodwin eventually returned to the energy issues that had motivated him to become an engineer in the first place. "He really pushed us to start teaching some energy-related courses in the early 2000s," says Vice Provost Melany Hunt, the Kenan Professor of Mechanical Engineering, and the executive officer for mechanical engineering at the time. This led to ME 122, Sustainable Energy Engineering, which Goodwin inaugurated in 2008. ME 122 lives on as the centerpiece of the Energy Science and Technology option, now renumbered EST/EE/ME 109 and renamed Energy: Supply and Demand.

During that time Goodwin also collaborated on three major fuel-cell projects with Sossina Haile, professor of materials science and chemical engineering, in which he modeled the processes by which fuel molecules reacted with oxygen ions to produce electricity. "Dave was looking at it from a computational perspective, and we were looking at it from an experimental perspective," says Haile. "He pulled together all that we know from fundamental physics and chemistry to say, 'This is how the fuel cell works, and this is how to configure it so that it will actually deliver the power that you want.' Most people do a lot of parameter fitting and approximations, but he treated the problem in a very physics-based, solid way."

Goodwin was as active in the greater life of the Institute as he was in his lab. He served on the faculty board from 1996 to 1999 and from 2001 to 2005, the last two years as faculty chair. During that time, he successfully lobbied to extend the timetable for granting junior faculty tenure in cases of childbirth or adoption, Hunt says. "Dave was always concerned about diversity issues. He would say, 'Are there women coming in? Are there minority students coming in? We should make sure that we are doing things to ensure that we have a diverse group coming in to Caltech.'" Hunt recalls that when two young women wanted to take a class that wasn't offered that year, "Dave met with them in his office three times a week. He wanted to be helpful. He just felt a responsibility to do it."

"The thing that was remarkable about David Goodwin," says Haile, "was that when he was diagnosed with this rare form of cancer for which there is no rhyme or reason, he said, 'I'm so glad that I lived my life in a healthy way and that I didn't do anything that caused this,' not 'I can't believe I lived my life in such a healthy way, and it's so unfair that I got struck by this.' It was stunning. He had an incredibly optimistic view."

"Dave made you happy whenever you ran into him," says Kaushik Bhattacharya, the Howell N. Tyson, Sr., Professor of Mechanics and professor of materials science, and executive officer for mechanical and civil engineering. "You could go into his office and have a wonderful conversation about any topic in the world. He had an easy smile and a wicked sense of humor."

Goodwin's honors include five years as a National Science Foundation Presidential Young Investigator and two NASA Certificates of Recognition for his diamond-film work. He was a member of the Electrochemical Society, the American Chemical Society, the Combustion Institute, the American Physical Society, the American Society of Mechanical Engineers, and the Materials Research Society. He wrote or coauthored more than 60 papers.

In his spare time, Goodwin was an accomplished guitarist, a skilled woodworker who made several pieces of furniture for the family's Craftsman house, and a prolific painter in oils.

Goodwin is survived by his parents, George and Verma Goodwin, of Cameron Park, California; his sisters, Ellen Goodwin Levy of Sacramento and Jennifer Goodwin Smith of Elk Grove; his wife, Frances Teng; and his children, Tim, 18, and Erica, 15.

A memorial service will be held on January 12, 2013, at 1:00 p.m. at the Caltech Athenaeum, and an annual speakership in mechanical engineering is being established in his honor; contributions may be made to the David Goodwin Memorial Lectureship.

Writer: 
Douglas Smith