Friday, January 25, 2013

Faulty Behavior

New earthquake fault models show that "stable" zones may contribute to the generation of massive earthquakes

PASADENA, Calif.—In an earthquake, ground motion is the result of waves emitted when the two sides of a fault move—or slip—rapidly past each other, with an average relative speed of about three feet per second. Not all fault segments move so quickly, however—some slip slowly, through a process called creep, and are considered to be "stable," or not capable of hosting rapid earthquake-producing slip.  One common hypothesis suggests that such creeping fault behavior is persistent over time, with currently stable segments acting as barriers to fast-slipping, shake-producing earthquake ruptures. But a new study by researchers at the California Institute of Technology (Caltech) and the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) shows that this might not be true.

"What we have found, based on laboratory data about rock behavior, is that such supposedly stable segments can behave differently when an earthquake rupture penetrates into them. Instead of arresting the rupture as expected, they can actually join in and hence make earthquakes much larger than anticipated," says Nadia Lapusta, professor of mechanical engineering and geophysics at Caltech and coauthor of the study, published January 9 in the journal Nature.

She and her coauthor, Hiroyuki Noda, a scientist at JAMSTEC and previously a postdoctoral scholar at Caltech, hypothesize that this is what occurred in the 2011 magnitude 9.0 Tohoku-Oki earthquake, which was unexpectedly large.

Fault slip, whether fast or slow, results from the interaction between the stresses acting on the fault and friction, or the fault's resistance to slip. Both the local stress and the resistance to slip depend on a number of factors such as the behavior of fluids permeating the rocks in the earth's crust. So, the research team formulated fault models that incorporate laboratory-based knowledge of complex friction laws and fluid behavior, and developed computational procedures that allow the scientists to numerically simulate how those model faults will behave under stress.

"The uniqueness of our approach is that we aim to reproduce the entire range of observed fault behaviors—earthquake nucleation, dynamic rupture, postseismic slip, interseismic deformation, patterns of large earthquakes—within the same physical model; other approaches typically focus only on some of these phenomena," says Lapusta.

In addition to reproducing a range of behaviors in one model, the team also assigned realistic fault properties to the model faults, based on previous laboratory experiments on rock materials from an actual fault zone—the site of the well-studied 1999 magnitude 7.6 Chi-Chi earthquake in Taiwan.

"In that experimental work, rock materials from boreholes cutting through two different parts of the fault were studied, and their properties were found to be conceptually different," says Lapusta. "One of them had so-called velocity-weakening friction properties, characteristic of earthquake-producing fault segments, and the other one had velocity-strengthening friction, the kind that tends to produce stable creeping behavior under tectonic loading. However, these 'stable' samples were found to be much more susceptible to dynamic weakening during rapid earthquake-type motions, due to shear heating."
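The velocity-weakening versus velocity-strengthening distinction can be illustrated with the standard laboratory rate-and-state friction law; this is a generic sketch of that law, and the parameter values below are hypothetical, not those measured for the Chi-Chi fault samples or used in the study:

```python
import math

def steady_state_friction(v, mu0=0.6, v0=1e-6, a=0.010, b=0.014):
    """Steady-state rate-and-state friction coefficient:

        mu_ss = mu0 + (a - b) * ln(v / v0)

    If a - b < 0 the fault is velocity-weakening (friction drops as slip
    speeds up, so it can host earthquakes); if a - b > 0 it is
    velocity-strengthening (friction rises with slip rate, favoring
    stable creep under tectonic loading).
    """
    return mu0 + (a - b) * math.log(v / v0)

# Velocity-weakening segment (a < b): friction falls with slip rate.
weakening = [steady_state_friction(v) for v in (1e-6, 1e-3, 1.0)]

# Velocity-strengthening segment (a > b): friction rises with slip rate.
strengthening = [steady_state_friction(v, a=0.014, b=0.010)
                 for v in (1e-6, 1e-3, 1.0)]

print(weakening)      # decreasing sequence
print(strengthening)  # increasing sequence
```

The study's key point is that this steady-state picture can break down: a nominally strengthening segment may still weaken dramatically once rapid, heat-generating slip penetrates it.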

Lapusta and Noda used their modeling techniques to explore the consequences of having two fault segments with such lab-determined fault-property combinations. They found that the ostensibly stable area would indeed occasionally creep, and often stop seismic events, but not always. From time to time, dynamic rupture would penetrate that area in just the right way to activate dynamic weakening, resulting in massive slip. They believe that this is what happened in the Chi-Chi earthquake; indeed, the quake's largest slip occurred in what was believed to be the "stable" zone.

"We find that the model qualitatively reproduces the behavior of the 2011 magnitude 9.0 Tohoku-Oki earthquake as well, with the largest slip occurring in a place that may have been creeping before the event," says Lapusta. "All of this suggests that the underlying physical model, although based on lab measurements from a different fault, may be qualitatively valid for the area of the great Tohoku-Oki earthquake, giving us a glimpse into the mechanics and physics of that extraordinary event."

If creeping segments can participate in large earthquakes, it would mean that much larger events than seismologists currently anticipate in many areas of the world are possible. That means, Lapusta says, that the seismic hazard in those areas may need to be reevaluated.

For example, a creeping segment separates the southern and northern parts of California's San Andreas Fault. Seismic hazard assessments assume that this segment would stop an earthquake from propagating from one region to the other, limiting the scope of a San Andreas quake. However, the team's findings imply that a much larger event may be possible than is now anticipated—one that might involve both the Los Angeles and San Francisco metropolitan areas.

"Lapusta and Noda's realistic earthquake fault models are critical to our understanding of earthquakes—knowledge that is essential to reducing the potential catastrophic consequences of seismic hazards," says Ares Rosakis, chair of Caltech's division of engineering and applied science. "This work beautifully illustrates the way that fundamental, interdisciplinary research in the mechanics of seismology at Caltech is having a positive impact on society."

Now that they've been proven to qualitatively reproduce the behavior of the Tohoku-Oki quake, the models may be useful for exploring future earthquake scenarios in a given region, "including extreme events," says Lapusta. Such realistic fault models, she adds, may also be used to study how earthquakes may be affected by additional factors such as man-made disturbances resulting from geothermal energy harvesting and CO2 sequestration. "We plan to further develop the modeling to incorporate realistic fault geometries of specific well-instrumented regions, like Southern California and Japan, to better understand their seismic hazard."

"Creeping fault segments can turn from stable to destructive due to dynamic weakening" appears in the January 9 issue of the journal Nature. Funding for this research was provided by the National Science Foundation; the Southern California Earthquake Center; the Gordon and Betty Moore Foundation; and the Ministry of Education, Culture, Sports, Science and Technology in Japan.

Writer: Katie Neith

Mory Gharib Named NAI Charter Fellow

Caltech's Mory Gharib has been named a charter fellow of the National Academy of Inventors (NAI).

According to the NAI, election to fellow status is a "high professional distinction accorded to academic inventors who have demonstrated a highly prolific spirit of innovation in creating or facilitating outstanding inventions that have made a tangible impact on quality of life, economic development, and the welfare of society."

Gharib (PhD '83) is the Hans W. Liepmann Professor of Aeronautics and professor of bioinspired engineering at Caltech. He is also the Institute's vice provost for research. Gharib's research group at Caltech studies examples from the natural world—fins, wings, blood vessels, embryonic structures, and entire organisms—to gain inspiration for inventions that have practical uses in power generation, drug delivery, dentistry, and more. Gharib is responsible for more than 59 U.S. patents.

Gharib will be formally inducted as a charter fellow during the second annual conference of the National Academy of Inventors in Tampa, Florida, in February.

Academic inventors and innovators elected to the rank of NAI Charter Fellow were nominated by their peers "for outstanding contributions to innovation in areas such as patents and licensing, innovative discovery and technology, significant impact on society, and support and enhancement of innovation."

"The natural world serves as the inspiration for many of my inventions," Gharib says. "But it is also inspiring to have been selected as a charter fellow of the NAI and to be included in a group with so many other leading innovators."

Writer: Brian Bell

Ares J. Rosakis to Receive P. S. Theocaris Award

The Society for Experimental Mechanics (SEM) will present the P. S. Theocaris Award for 2013 to Ares J. Rosakis, Caltech's Theodore von Kármán Professor of Aeronautics and professor of mechanical engineering, and chair of the Division of Engineering and Applied Science.

Given every two years, the award is named in honor of P. S. Theocaris, a legendary solid mechanics researcher, dynamic-fracture experimentalist, and past member of the prestigious Academy of Athens. The award recognizes recipients for distinguished, innovative, and outstanding work in optical methods and experimental mechanics. Rosakis will accept the award at SEM's annual conference in Lombard, Illinois, in June.

"Receiving this award is especially thrilling for me since I have had the honor of knowing Professor Pericles Theocaris since early childhood in Greece," says Rosakis. "As a high school student I was greatly inspired by his pioneering work on dynamic-fracture mechanics, high-speed photography, and the optical method of caustics, subjects which I still hold very dear to my heart. Indeed I consider his influence as playing a very important role during the first formative steps of my engineering and scientific identity."

Rosakis researches quasi-static and dynamic failure of metals, composites, and interfaces using high-speed visible and infrared diagnostics and laser interferometry. Recent research conducted by Rosakis combined engineering fracture mechanics and geophysics to gain a better understanding of the destructive potential of large earthquakes.

SEM specifically recognized Rosakis for his experimental discovery of "intersonic" or "supershear" ruptures or dynamic delamination cracks. These ruptures are capable of propagating at speeds that are faster than the shear wave speeds of the surrounding material, and can spread along fault planes in the earth's crust to produce supershear earthquakes. These ruptures also grow along weak interfaces in a variety of composite materials commonly used in engineering practice. SEM also is recognizing Rosakis for his seminal contributions in the area of dynamic failure and for developing methods to determine stresses in thin-film structures.

Rosakis received his BSc from the University of Oxford in 1978 and his PhD from Brown University in 1982, the same year he joined Caltech as an assistant professor. He was appointed associate professor in 1988 and professor in 1993 and was named von Kármán Professor in 2004. He also became the fifth director of the Graduate Aeronautical (Aerospace as of 2008) Laboratories at the California Institute of Technology (GALCIT) in 2004 and held that position through 2009, the year he was appointed division chair.

Rosakis holds 13 U.S. patents on thin-film stress measurement and in situ wafer-level metrology as well as on high-speed infrared thermography. He is the author of more than 260 papers on the dynamic deformation and catastrophic failure of metals, composites, and interfaces, and on laboratory seismology. He is a member of the National Academy of Engineering and a fellow of the American Academy of Arts and Sciences. He recently received the commander grade of the French Republic's Order of Academic Palms.

Writer: Brian Bell

Hans Hornung Awarded Honorary Doctorate

Caltech professor emeritus Hans G. Hornung received an honorary doctorate from the Swiss Federal Institute of Technology (Eidgenössische Technische Hochschule, or ETH) Zurich, at a recent ceremony.

According to the award citation, Hornung was honored by ETH Zurich for his outstanding research contributions in the field of fluid dynamics and his "extraordinary ability to be inspiring when passing his knowledge on to his students."

Hornung, the C. L. "Kelly" Johnson Professor of Aeronautics, Emeritus, served as the fourth director of the Graduate Aeronautical (now Aerospace) Laboratories at Caltech.

"This is a very well-deserved international honor for Professor Hornung. As his colleague and a past director of the Graduate Aerospace Laboratories at Caltech, I have firsthand knowledge of his dedication to research and teaching," says Ares J. Rosakis, Theodore von Kármán Professor of Aeronautics and professor of mechanical engineering, and chair of the Division of Engineering and Applied Science.

ETH Zurich rector Lino Guzzella presented the honorary degree to Hornung in a ceremony on November 17 in the Hauptgebäude, the main building on the Zurich campus. The other recipient of the honor was Lord Martin Rees, former master of Trinity College, University of Cambridge, and the United Kingdom's Astronomer Royal.

Writer: Brian Bell

Top 12 in 2012

Credit: Benjamin Deverman/Caltech

Gene therapy for boosting nerve-cell repair

Caltech scientists have developed a gene therapy that helps the brain replace its nerve-cell-protecting myelin sheaths—and the cells that produce those sheaths—when they are destroyed by diseases like multiple sclerosis and by spinal-cord injuries. Myelin ensures that nerve cells can send signals quickly and efficiently.

Credit: L. Moser and P. M. Bellan, Caltech

Understanding solar flares

By studying jets of plasma in the lab, Caltech researchers discovered a surprising phenomenon that may be important for understanding how solar flares occur and for developing nuclear fusion as an energy source. Solar flares are bursts of energy from the sun that launch chunks of plasma that can damage orbiting satellites and cause the northern and southern lights on Earth.

Coincidence—or physics?

Caltech planetary scientists provided a new explanation for why the "man in the moon" faces Earth. Their research indicates that the "man"—an illusion caused by dark-colored volcanic plains—faces us because of the rate at which the moon's spin rate slowed before becoming locked in its current orientation, even though the odds favored the moon's other, more mountainous side.

Choking when the stakes are high

In studying brain activity and behavior, Caltech biologists and social scientists learned that the more someone is afraid of loss, the worse they will perform on a given task—and that the more loss-averse they are, the more likely it is that their performance will peak at a level far below their actual capacity.

Credit: NASA/JPL-Caltech

Eyeing the X-ray universe

NASA's NuSTAR telescope, a Caltech-led and -designed mission to explore the high-energy X-ray universe and to uncover the secrets of black holes, of remnants of dead stars, of energetic cosmic explosions, and even of the sun, was launched on June 13. The instrument is the most powerful high-energy X-ray telescope ever developed and will produce images that are 10 times sharper than any that have been taken before at these energies.

Credit: CERN

Uncovering the Higgs Boson

This summer's likely discovery of the long-sought and highly elusive Higgs boson, the fundamental particle that is thought to endow elementary particles with mass, was made possible in part by contributions from a large contingent of Caltech researchers. They have worked on this problem with colleagues around the globe for decades, building experiments, designing detectors to measure particles ever more precisely, and inventing communication systems and data storage and transfer networks to share information among thousands of physicists worldwide.

Credit: Peter Day

Amplifying research

Researchers at Caltech and NASA's Jet Propulsion Laboratory developed a new kind of amplifier that can be used for everything from exploring the cosmos to examining the quantum world. This new device operates at a frequency range more than 10 times wider than that of other similar kinds of devices, can amplify strong signals without distortion, and introduces the lowest amount of unavoidable noise.

Swims like a jellyfish

Caltech bioengineers partnered with researchers at Harvard University to build a freely moving artificial jellyfish from scratch. The researchers fashioned the jellyfish from silicon and muscle cells into what they've dubbed Medusoid; in the lab, the scientists were able to replicate some of the jellyfish's key mechanical functions, such as swimming and creating feeding currents. The work will help improve researchers' understanding of tissues and how they work, and may inform future efforts in tissue engineering and the design of pumps for the human heart.

Credit: NASA/JPL-Caltech

Touchdown confirmed

After more than eight years of planning, about 354 million miles of space travel, and seven minutes of terror, NASA's Mars Science Laboratory successfully landed on the Red Planet on August 5. The roving analytical laboratory, named Curiosity, is now using its 10 scientific instruments and 17 cameras to search Mars for environments that either were once—or are now—habitable.

Credit: Caltech/Michael Hoffmann

Powering toilets for the developing world

Caltech engineers built a solar-powered toilet that can safely dispose of human waste for just five cents per use per day. The toilet design, which won the Bill and Melinda Gates Foundation's Reinventing the Toilet Challenge, uses the sun to power a reactor that breaks down water and human waste into fertilizer and hydrogen. The hydrogen can be stored as energy in hydrogen fuel cells.

Credit: Caltech / Scott Kelberg and Michael Roukes

Weighing molecules

A Caltech-led team of physicists created the first-ever mechanical device that can measure the mass of an individual molecule. The tool could eventually help doctors to diagnose diseases, and will enable scientists to study viruses, examine the molecular machinery of cells, and better measure nanoparticles and air pollution.

Splitting water

This year, two separate Caltech research groups made key advances in the quest to extract hydrogen from water for energy use. In June, a team of chemical engineers devised a nontoxic, noncorrosive way to split water molecules at relatively low temperatures; this method may prove useful in the application of waste heat to hydrogen production. Then, in September, a group of Caltech chemists identified the mechanism by which some water-splitting catalysts work; their findings should light the way toward the development of cheaper and better catalysts.

In 2012, Caltech faculty and students pursued research into just about every aspect of our world and beyond—from understanding human behavior, to exploring other planets, to developing sustainable waste solutions for the developing world.

In other words, 2012 was another year of discovery at Caltech. Here are a dozen research stories that were among the most widely read and shared articles from Caltech.edu.

Did we skip your favorite? Connect with Caltech on Facebook to share your pick.

A New Tool for Secret Agents—And the Rest of Us

Caltech engineers make tiny, low-cost, terahertz imager chip

PASADENA, Calif.—A secret agent is racing against time. He knows a bomb is nearby. He rounds a corner, spots a pile of suspicious boxes in the alleyway, and pulls out his cell phone. As he scans it over the packages, their contents appear onscreen. In the nick of time, his handy smartphone application reveals an explosive device, and the agent saves the day. 

Sound far-fetched? In fact it is a real possibility, thanks to tiny inexpensive silicon microchips developed by a pair of electrical engineers at the California Institute of Technology (Caltech). The chips generate and radiate high-frequency electromagnetic waves, called terahertz (THz) waves, that fall into a largely untapped region of the electromagnetic spectrum—between microwaves and far-infrared radiation—and that can penetrate a host of materials without the ionizing damage of X-rays. 

When incorporated into handheld devices, the new microchips could enable a broad range of applications in fields ranging from homeland security to wireless communications to health care, and even touchless gaming. In the future, the technology may lead to noninvasive cancer diagnosis, among other applications.

"Using the same low-cost, integrated-circuit technology that's used to make the microchips found in our cell phones and notepads today, we have made a silicon chip that can operate at nearly 300 times their speed," says Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering at Caltech. "These chips will enable a new generation of extremely versatile sensors." 

Hajimiri and postdoctoral scholar Kaushik Sengupta (PhD '12) describe the work in the December issue of the IEEE Journal of Solid-State Circuits.

Researchers have long touted the potential of the terahertz frequency range, from 0.3 to 3 THz, for scanning and imaging. Such electromagnetic waves can easily penetrate packaging materials and render image details in high resolution, and can also detect the chemical fingerprints of pharmaceutical drugs, biological weapons, or illegal drugs or explosives. However, most existing terahertz systems involve bulky and expensive laser setups that sometimes require exceptionally low temperatures. The potential of terahertz imaging and scanning has gone untapped because of the lack of compact, low-cost technology that can operate in the frequency range.
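A quick unit check (standard physics, not taken from the paper) confirms why this band sits between microwaves and far-infrared radiation:

```python
c = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(freq_thz):
    """Free-space wavelength in millimeters for a frequency given in THz."""
    return c / (freq_thz * 1e12) * 1e3

# The terahertz band cited in the article, 0.3 to 3 THz:
print(wavelength_mm(0.3))  # ~1.0 mm, bordering the microwave band
print(wavelength_mm(3.0))  # ~0.1 mm (100 microns), bordering far-infrared
```

Millimeter-to-submillimeter wavelengths are long enough to pass through packaging materials, yet the photon energies are far too low to ionize atoms the way X-rays do.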

To finally realize the promise of terahertz waves, Hajimiri and Sengupta used complementary metal-oxide semiconductor, or CMOS, technology, which is commonly used to make the microchips in everyday electronic devices, to design silicon chips with fully integrated functionalities and that operate at terahertz frequencies—but fit on a fingertip.

"This extraordinary level of creativity, which has enabled imaging in the terahertz frequency range, is very much in line with Caltech's long tradition of innovation in the area of CMOS technology," says Ares Rosakis, chair of Caltech's Division of Engineering and Applied Science. "Caltech engineers, like Ali Hajimiri, truly work in an interdisciplinary way to push the boundaries of what is possible."

The new chips boast signals more than a thousand times stronger than existing approaches, and emanate terahertz signals that can be dynamically programmed to point in a specified direction, making them the world's first integrated terahertz scanning arrays.

Using the scanner, the researchers can reveal a razor blade hidden within a piece of plastic, for example, or determine the fat content of chicken tissue. "We are not just talking about a potential. We have actually demonstrated that this works," says Hajimiri. "The first time we saw the actual images, it took our breath away." 

Hajimiri and Sengupta had to overcome multiple hurdles to translate CMOS technology into workable terahertz chips—including the fact that silicon chips are simply not designed to operate at terahertz frequencies. In fact, every transistor has a frequency, known as the cut-off frequency, above which it fails to amplify a signal—and no standard transistors can amplify signals in the terahertz range. 

To work around the cut-off-frequency problem, the researchers harnessed the collective strength of many transistors operating in unison. If multiple elements are operated at the right times at the right frequencies, their power can be combined, boosting the strength of the collective signal. 
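The payoff of synchronization can be sketched numerically: when signals add in phase, the field amplitudes sum, so the combined power grows as the square of the number of elements, while uncoordinated phases largely cancel. This is an illustrative model of coherent power combining, not the chip's actual circuit:

```python
import cmath
import random

def combined_power(n, phases):
    """Power of n unit-amplitude signals summed with the given phases
    (power = |sum of complex amplitudes|^2)."""
    total = sum(cmath.exp(1j * p) for p in phases)
    return abs(total) ** 2

n = 16

# All elements operating in unison: amplitudes add, power scales as n**2.
coherent = combined_power(n, [0.0] * n)

# Random, uncoordinated phases: contributions mostly cancel.
random.seed(0)
incoherent = combined_power(n, [random.uniform(0, 2 * cmath.pi)
                                for _ in range(n)])

print(coherent)    # 256.0, i.e. n**2
print(incoherent)  # much smaller, on the order of n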

"We came up with a way of operating transistors above their cut-off frequencies," explains Sengupta. "We are about 40 or 50 percent above the cut-off frequencies, and yet we are able to generate a lot of power and detect it because of our novel methodologies."

"Traditionally, people have tried to make these technologies work at very high frequencies, with large elements producing the power. Think of these as elephants," says Hajimiri. "Nowadays we can make a very large number of transistors that individually are not very powerful, but when combined and working in unison, can do a lot more. If these elements are synchronized—like an army of ants—they can do everything that the elephant does and then some."

The researchers also figured out how to radiate, or transmit, the terahertz signal once it has been produced. At such high frequencies, a wire cannot be used, and traditional antennas at the microchip scale are inefficient. What they came up with instead was a way to turn the whole silicon chip into an antenna. Again, they went with a distributed approach, incorporating many small metal segments onto the chip that can all be operated at a certain time and strength to radiate the signal en masse.
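Programmable, direction-steerable radiation from many small segments works on the phased-array principle: choosing each element's phase moves the direction in which contributions add constructively. The following is a generic uniform-linear-array sketch for illustration, not the chip's actual two-dimensional layout:

```python
import cmath
import math

def array_factor(theta_deg, n_elems=8, spacing_wl=0.5, steer_deg=0.0):
    """Normalized field magnitude of a uniform linear array at angle
    theta_deg, with element phases programmed to steer the main beam
    toward steer_deg. Element spacing is in wavelengths."""
    k_d = 2 * math.pi * spacing_wl  # phase per element per unit sin(theta)
    s = math.sin(math.radians(theta_deg))
    s0 = math.sin(math.radians(steer_deg))
    total = sum(cmath.exp(1j * k_d * n * (s - s0)) for n in range(n_elems))
    return abs(total) / n_elems

# Steering the beam to 30 degrees: the peak follows the programmed phases.
print(array_factor(30.0, steer_deg=30.0))  # ~1.0: maximum in the steered direction
print(array_factor(0.0, steer_deg=30.0))   # ~0: contributions cancel off the beam
```

Reprogramming only the per-element phases redirects the beam with no moving parts, which is what makes an integrated scanning array possible.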

"We had to take a step back and ask, 'Can we do this in a different way?'" says Sengupta. "Our chips are an example of the kind of innovations that can be unearthed if we blur the partitions between traditional ways of thinking about integrated circuits, electromagnetics, antennae, and the applied sciences. It is a holistic solution."

 The paper is titled "A 0.28 THz Power-Generation and Beam-Steering Array in CMOS Based on Distributed Active Radiators." IBM helped with chip fabrication for this work.

Writer: Kimm Fesenmaier

Point of Light

Caltech engineers invent light-focusing device that may lead to applications in computing, communications, and imaging

PASADENA, Calif.—As technology advances, it tends to shrink. From cell phones to laptops—powered by increasingly faster and tinier processors—everything is getting thinner and sleeker. And now light beams are getting smaller, too.

Engineers at the California Institute of Technology (Caltech) have created a device that can focus light into a point just a few nanometers (billionths of a meter) across—an achievement they say may lead to next-generation applications in computing, communications, and imaging.

Because light can carry greater amounts of data more efficiently than electrical signals traveling through copper wires, today's technology is increasingly based on optics. The world is already connected by thousands of miles of optical-fiber cables that deliver email, images, and the latest video gone viral to your laptop.

As we all produce and consume more data, computers and communication networks must be able to handle the deluge of information. Focusing light into tinier spaces can squeeze more data through optical fibers and increase bandwidth. Moreover, by being able to control light at such small scales, optical devices can also be made more compact, requiring less energy to power them.

But focusing light to such minute scales is inherently difficult. Once you reach sizes smaller than the wavelength of light—a few hundred nanometers in the case of visible light—you reach what's called the diffraction limit, and it's physically impossible to focus the light any further.
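The scale of that barrier is easy to estimate with the standard Abbe formula (general optics background, not a calculation from the paper):

```python
def diffraction_limit_nm(wavelength_nm, numerical_aperture=1.0):
    """Abbe diffraction limit: the smallest focusable spot is roughly
    wavelength / (2 * NA), where NA is the numerical aperture of the
    focusing optics (at most ~1 in air)."""
    return wavelength_nm / (2 * numerical_aperture)

# Visible light spans roughly 400-700 nm, so conventional optics cannot
# focus it to a spot smaller than a couple hundred nanometers:
print(diffraction_limit_nm(400))  # 200.0 nm (violet end)
print(diffraction_limit_nm(700))  # 350.0 nm (red end)
```

A few-nanometer focus is therefore one to two orders of magnitude beyond what any conventional lens can achieve, which is why the waveguide routes the light's energy into surface plasmons instead.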

But now the Caltech researchers, co-led by assistant professor of electrical engineering Hyuck Choo, have built a new kind of waveguide—a tunnel-like device that channels light—that gets around this natural limit. The waveguide, which is described in a recent issue of the journal Nature Photonics, is made of amorphous silicon dioxide—which is similar to common glass—and is covered in a thin layer of gold. Just under two microns long, the device is a rectangular box that tapers to a point at one end.

As light is sent through the waveguide, the photons interact with electrons at the interface between the gold and the silicon dioxide. Those electrons oscillate, and the oscillations propagate along the device as waves—similarly to how vibrations of air molecules travel as sound waves. Because the electron oscillations are directly coupled with the light, they carry the same information and properties—and they therefore serve as a proxy for the light.

Instead of focusing the light alone—which is impossible due to the diffraction limit—the new device focuses these coupled electron oscillations, called surface plasmon polaritons (SPPs). The SPPs travel through the waveguide and are focused as they go through the pointy end.

Because the new device is built on a semiconductor chip with standard nanofabrication techniques, says Choo, the co-lead and the co-corresponding author of the paper, it is easy to integrate with today's technology.

Previous on-chip nanofocusing devices were only able to focus light into a narrow line. They also were inefficient, typically focusing only a few percent of the incident photons, with the majority absorbed and scattered as they traveled through the devices.

With the new device, light can ultimately be focused in three dimensions, producing a point a few nanometers across, and using half of the light that's sent through, Choo says. (Focusing the light into a slightly bigger spot, 14 by 80 nanometers in size, boosts the efficiency to 70 percent.) The key feature behind the device's focusing ability and efficiency, he says, is its unique design and shape.

"Our new device is based on fundamental research, but we hope it's a good building block for many potentially revolutionary engineering applications," says Myung-Ki Kim, a postdoctoral scholar and the other lead author of the paper.

For example, one application is to turn this nanofocusing device into an efficient, high-resolution biological-imaging instrument, Kim says. A biologist can dye specific molecules in a cell with fluorescent proteins that glow when struck by light. Using the new device, a scientist can focus light into the cell, causing the fluorescent proteins to shine. Because the device concentrates light into such a small point, it can create a high-resolution map of those dyed molecules. Light can also travel in the reverse direction through the nanofocuser: by collecting light through the narrow point, the device turns into a high-resolution microscope. 

The device can also lead to computer hard drives that hold more memory via heat-assisted magnetic recording. Normal hard drives consist of rows of tiny magnets whose north and south poles lay end to end. Data is recorded by applying a magnetic field to switch the polarity of the magnets.

Smaller magnets would allow more memory to be squeezed into a disc of a given size. But the polarities of smaller magnets made of current materials are unstable at room temperature, causing the magnetic poles to spontaneously flip—and for data to be lost. Instead, more stable materials can be used—but those require heat to record data. The heat makes the magnets more susceptible to polarity reversals. Therefore, to write data, a laser is needed to heat the individual magnets, allowing a surrounding magnetic field to flip their polarities.

Today's technology, however, can't focus a laser into a beam that is narrow enough to individually heat such tiny magnets. Indeed, current lasers can only concentrate a beam to an area 300 nanometers wide, which would heat the target magnet as well as adjacent ones—possibly spoiling other recorded data.

Because the new device can focus light down to such small scales, it can heat smaller magnets individually, making it possible for hard drives to pack more magnets and therefore more memory. With current technology, discs can't hold more than 1 terabyte (1,000 gigabytes) per square inch. A nanofocusing device, Choo says, can bump that to 50 terabytes per square inch.
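As a rough sanity check on those figures (an illustration of the scale involved, not a calculation from the paper), one can work out the size of a square bit cell implied by a given areal density. The densities are the article's; the 8-bits-per-byte conversion and square-cell assumption are ours:

```python
# Back-of-the-envelope: side length of a square bit cell at a given
# areal density. Density figures come from the article; the square-cell
# geometry and 1 TB = 8e12 bits conversion are simplifying assumptions.
IN_TO_NM = 2.54e7  # nanometers per inch

def bit_cell_side_nm(terabytes_per_sq_inch):
    bits = terabytes_per_sq_inch * 8e12   # bits stored per square inch
    area_nm2 = IN_TO_NM ** 2 / bits       # nm^2 available per bit
    return area_nm2 ** 0.5                # side of a square cell, in nm

print(round(bit_cell_side_nm(1), 1))   # 9.0 -- cell at ~1 TB/in^2
print(round(bit_cell_side_nm(50), 1))  # 1.3 -- cell at 50 TB/in^2
```

Either way, the magnets are far smaller than a 300-nanometer laser spot, which is why a nanofocusing device matters here.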

Then there are the myriad data-transfer and communication applications, the researchers say. As computing becomes increasingly reliant on optics, devices that concentrate and control data-carrying light at the nanoscale will be essential—and ubiquitous, says Choo, who is a member of the Kavli Nanoscience Institute at Caltech. "Don't be surprised if you see a similar kind of device inside a computer you may someday buy."

The next step is to optimize the design and to begin building imaging instruments and sensors, Choo says. The device is versatile enough that relatively simple modifications could allow it to be used for imaging, computing, or communication.

The title of the Nature Photonics paper is "Nanofocusing in a metal-insulator-metal gap plasmon waveguide with a three-dimensional linear taper." In addition to Choo and Kim, the other authors are Matteo Staffaroni, Tae Joon Seok, Jeffrey Bokor, Ming C. Wu, and Eli Yablonovitch of UC Berkeley and Stefano Cabrini and P. James Schuck of the Molecular Foundry at Lawrence Berkeley National Lab. The research was funded by the Defense Advanced Research Projects Agency (DARPA) Science and Technology Surface-Enhanced Raman Spectroscopy program, the Department of Energy, and the Division of Engineering and Applied Science at Caltech.

This video shows the final fabrication step of the nanofocusing device. A stream of high-energy gallium ions blasts away unwanted layers of gold and silicon dioxide to carve out the shape of the device.

Writer: Marcus Woo
News Type: Research News

3-D Dentistry

A Caltech imaging innovation will ease your trip to the dentist and may soon energize home entertainment systems too.

Although dentistry has come a long way since the time when decayed teeth were extracted by brute force, most dentists are still using the clumsy, time-consuming, and imperfect impression method when making crowns or bridges. But that process could soon go the way of general anesthesia in family dentistry thanks to a 3-D imaging device developed by Mory Gharib, Caltech vice provost and Hans W. Liepmann Professor of Aeronautics and professor of bioinspired engineering.

By the mid-2000s, complex dental imaging machines—also called dental scanners—began appearing on the market. The devices take pictures of teeth that can be used to create crowns and bridges via computer-aided design/computer-aided manufacturing (CAD/CAM) techniques, giving the patient a new tooth the same day. But efficiency doesn't come without cost—and at more than $100,000 for an entire system, few dentists can afford to invest in the equipment. Within that challenge, Gharib saw an opportunity.

An expert in biomedical engineering, Gharib had built a 3-D microscope in 2006 to help him design better artificial heart valves and other devices for medical applications. Since it's not very practical to view someone's mouth through a microscope, he thought that he could design and build an affordable and portable 3-D camera that would do the same job as the expensive dental scanners.

The system he came up with is surprisingly simple. The camera, which fits into a handheld device, has three apertures that take a picture of the tooth at the same time but from different angles. The three images are then blended together using a computer algorithm to construct a 3-D image. In 2009, Gharib formed a company called Arges Imaging to commercialize the product; last year, Arges was acquired by a multinational dental-technology manufacturer that has been testing the camera with dentists.
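The geometric idea behind recovering depth from pictures taken through apertures at different positions can be sketched with a deliberately simplified two-aperture pinhole model; the actual reconstruction algorithm in Gharib's camera is more sophisticated, and all numbers below are hypothetical:

```python
# Illustrative sketch only: the parallax principle behind multi-aperture
# depth recovery, reduced to two apertures and a pinhole model. This is
# NOT the Arges algorithm; every value here is made up for illustration.
def depth_from_disparity(focal_mm, baseline_mm, disparity_mm):
    """Two apertures separated by `baseline` see the same feature shifted
    by `disparity` on the sensor; by similar triangles,
    disparity = focal * baseline / depth, so depth follows directly."""
    return focal_mm * baseline_mm / disparity_mm

# A feature imaged 4 mm apart on the sensor, through apertures 4 mm
# apart with a 20 mm focal length, lies 20 mm from the camera:
print(depth_from_disparity(20.0, 4.0, 4.0))  # 20.0
```

With three apertures instead of two, each point yields a triangle of image spots whose size encodes depth, giving the algorithm redundant measurements to blend into one 3-D surface.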

"Professor Gharib is as brilliant a scientist as he is an engineer and inventor," says Ares Rosakis, chair of Caltech's division of engineering and applied science. "I think that's what we have to do to look at humanity's big problems: we have to be ready to act as pure scientists when we observe and discover as well as act as practical engineers when we invent and apply. This continuous interplay happens at Caltech better than at other institutions."

Indeed, Gharib did not stop with dental applications for his 3-D scanner, but quickly realized that the technology had promise in other industries. For example, there are many potential applications in consumer electronics and other products, he says. While motion-sensing devices with facial and voice-recognition capabilities, like Microsoft's Kinect for the Xbox 360, allow players to feel like they are in the game—running, jumping, and flying over obstacles—"the gestures required are extreme," says Gharib. A more sophisticated imager could make players really feel like they are part of the action.

In robotic and reconstructive surgery, a 3-D imager could provide surgeons with a tool to help them achieve better accuracy and precision. "What if I could take a 3-D picture of your head and have a machine sculpt it into a bust?" says Gharib. "With CAD/CAM, you can take a computer design and turn that into a sculpture, but you need someone who is expert at programming. What if a camera could take a photo and give you 3-D perspective? We have expensive 3-D motion-picture cameras now and 3-D displays, but we don't have much media for them," says Gharib, who earlier this year formed a new company called Apertura Imaging to try to improve the 3-D imaging technology for these nondental applications. "Once we build this new camera, people will come up with all sorts of applications," he says.

Writer: Michael Rogers

One Metal Scoop, Slightly Used

It's a science fiction staple: human astronauts visiting an alien world find a derelict spacecraft sent there . . . by themselves. It has actually happened—once—in real life. On November 19, 1969, the Apollo 12 Lunar Excursion Module (LEM) touched down on the vast plains of the Oceanus Procellarum, or Ocean of Storms, less than 200 yards from the Surveyor 3 probe, which Caltech's Jet Propulsion Laboratory had sent to the moon two and a half years earlier. On November 20, after a day spent collecting rock samples, astronauts Charles Conrad and Alan Bean retrieved some parts from Surveyor as well, bringing them back to NASA engineers eager to learn what long-term exposure would do to electronic cabling and other delicate components. Among their souvenirs was the metal scoop that Caltech soils engineer Ronald Scott had used to verify that a moon landing could be made in the first place.

In 1961, when President Kennedy announced his intention to put a man on the moon, almost the only thing we knew for sure about the lunar surface was that it wasn't made of green cheese. It was entirely reasonable to suppose that the moon's dark "seas" were in fact oceans of dust pulverized by eons of relentless meteor bombardment. Was the dust a few inches deep? Or was it bottomless, waiting to swallow up and drown the man with the temerity to take one small step on it?

JPL sent a series of robotic explorers to find out. The preliminary designs for the Surveyors, drawn up in 1960, included an onboard soil analyzer. Weight constraints led to its removal a year or so later, but by then, the soil scoop—roughly the size and shape of a clenched fist, with a one-by-two-inch trapdoor on its underside—and its retractable arm had already been built.

In 1963, Scott, then an associate professor of civil engineering at Caltech, proposed that the scoop and arm be reinstated as a soil-mechanics experiment. By outfitting the arm with strain gauges, he could measure the force exerted when the scoop's flat bottom was pressed into the soil and calculate the load it could bear. Simply landing the Surveyor was insufficient evidence of the soil's stability; JPL's robotic emissary weighed a measly 650 pounds, but the LEM would be more than 16 tons of hardware, fuel, and humanity.

Problem was, with the soil lab gone, the arm's mounting hardware had been eliminated as well. But Scott was persistent, and in the summer of 1966, JPL engineers found a work-around: the arm could replace the downward-looking approach camera. However, the arm's connections would have to be rebuilt to fit the camera mount, and Surveyor 3's April 1967 launch date was coming up fast. There wasn't enough time to build the strain gauges as well, but Scott had a Plan B—he would measure the current drawn by the arm's motors, and thus derive the force they were exerting.
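Plan B rests on a standard property of DC motors: the torque they produce is roughly proportional to the current they draw, so the force at the scoop can be back-calculated from current telemetry. A sketch of that inference, with every constant invented for illustration (none comes from the actual Surveyor hardware):

```python
# Sketch of the idea behind Scott's Plan B: infer the force the scoop
# exerts from the current its drive motor draws, using the approximate
# DC-motor relation torque = k_t * I. All constants are hypothetical.
def scoop_force_newtons(current_a, k_t_nm_per_a, gear_ratio, arm_m):
    motor_torque = k_t_nm_per_a * current_a    # torque at the motor shaft
    output_torque = motor_torque * gear_ratio  # torque after gearing
    return output_torque / arm_m               # force at the scoop tip

# e.g. 0.5 A through a motor with k_t = 0.02 N*m/A, 100:1 gearing,
# acting at the end of a 0.5 m arm:
print(scoop_force_newtons(0.5, 0.02, 100.0, 0.5))  # 2.0
```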

Surveyor 3's time on the moon got off to a bumpy start. Literally. The small thrusters used to slow the final descent failed to shut off, and the lander bounded skyward again a moment after touchdown, soaring to a height of 35 feet and becoming the first spacecraft ever to depart (albeit unintentionally) from the moon. The thrusters were finally shut down from Earth on the second bounce, and Surveyor came to rest on the gently sloping wall of the small, shallow crater that now bears its name.

As principal investigator for the Soil Mechanics Surface Sampler—as it was known in NASA-speak—Scott shared a desk in the Space Flight Operations Facility (JPL's version of Mission Control) with his JPL counterpart, Floyd Roberson. The arm responded to Roberson's commands, but Plan B quickly went awry. "The condition of the spacecraft telemetry prevented making measurements of the motor currents," as Scott explained in the June 1967 issue of Caltech's Engineering & Science magazine. Now their only source of data was Surveyor's TV camera, which fortunately was mounted on the same side of the lander's triangular body. 

On to Plan C: They would let the scoop rest on the surface and then drive it into the soil until the motors stalled to see how deeply it would penetrate. As each new frame from the camera appeared on the control-room monitors, someone snapped a Polaroid. After measuring the scoop's movement between frames, Roberson would write down the commands for the scoop's next move—backwards, as the camera took pictures through a swiveling mirror—and hand them to the man at the next console, who double-checked them and passed them to the controller, who typed them in. The humans worked faster than the camera did, so they were essentially moving the arm in real time.

Scott and Roberson poked at the soil, dug trenches, tried to pick up pebbles, and generally carried on like kids at the beach for the rest of the day—the lunar day, that is, or two weeks on Earth. "The lunar soil is fine-grained material . . . similar to a dry terrestrial sand," Scott wrote in E&S, and it got denser as they dug. "The deepest trench was approximately seven inches, and the material at that depth was relatively firm compared to the surface." And with that, all systems were "go" for Apollo, at least as far as the soil was concerned.

When the sun finally set on Surveyor on May 3, Scott wrote in a later E&S article in 1970, "For no particular reason that I can recall, we tidily raised the surface sampler as high as it would go and moved it to the extreme right." Surveyor quietly froze to death in the cold lunar night, and Scott went back to his day job—which included being a member of the Apollo Soil Mechanics Team.

Scott was at Mission Control in Houston on July 20, 1969, when Neil Armstrong landed the Eagle in the ultimate check of Scott's math. He was there again on November 20, listening in as Apollo 12's Charles Conrad and Alan Bean "made their way to Surveyor and began poking around." He was astonished, he recalled in E&S, when "Conrad remarked casually that he had got the scoop." This was not in the plan: the astronauts' wire cutters were no match for the steel tape that retracted the arm. But, Conrad later told Scott, he'd put the cutters to the tape and given it an experimental twist. "To his surprise," Scott wrote, "the tape parted at a weld. All he needed to do to free the scoop was to snip through three aluminum supporting arms and some wires behind the first joint," which was possible only because "we had fortuitously left the sampler in its most elevated position. Astronauts in space suits cannot at present bend down."

Several weeks later, Scott returned to Houston, this time to the Lunar Receiving Laboratory to witness the opening of the "murky Teflon" double bag in which the scoop had been sealed. He and Roberson had emptied the scoop more than two years earlier, as part of their tidying up at the end of their lunar digging, but some moondust and a little grit had clung to it and made it back to Earth anyway. "If I had known I was going to see it again," he wrote, "I would have left the scoop completely packed with lunar soil."

Scott went on to design the soil scoop for NASA's Viking landers, which searched for life on Mars in the 1970s. He died in 2005, but Associate Professor of Civil and Mechanical Engineering José Andrade has picked up the shovel, as it were. Andrade is on an advisory panel for JPL's InSight mission to Mars, slated to launch in 2016. InSight's instruments include a heat-flow probe that will hammer itself some 10 to 15 feet into the martian soil. Scott would be pleased.

Writer: Douglas Smith
