Caltech Scientist Awarded Grant to Develop Solar-Powered Sanitation System

PASADENA, Calif.—Environmental scientist and engineer Michael Hoffmann of the California Institute of Technology (Caltech) has received a $400,000 grant from the Bill & Melinda Gates Foundation to build a solar-powered portable toilet that could help solve a major health problem in developing countries. The grant, announced July 19 at the AfricaSan 3 sanitation and hygiene conference in Rwanda, will be used to complete the initial design, development, and testing of the unique sustainable system. Designed for use by up to 500 people per day with minimal maintenance, the sanitation unit will have the added benefit of turning waste into fuel.

Hoffmann's concept, called a "Self-Contained, PV-Powered Domestic Toilet and Wastewater Treatment System," is one of eight projects funded through the foundation's "Reinvent the Toilet Challenge." The Bill & Melinda Gates Foundation announced this grant as part of more than $40 million in new investments launching its Water, Sanitation, & Hygiene strategy. According to the World Health Organization (WHO) and UNICEF, about 2.6 billion people—approximately 40 percent of the world's population—lack access to safe sanitation, and nearly half of them practice open defecation. In addition, WHO estimates that 1.5 million children die each year from diarrheal disease, which is often caused by poor sanitation.

"Life expectancy correlates to the accessibility of clean water and proper sanitation practices," says Hoffmann, the James Irvine Professor of Environmental Science at Caltech, who has been working for years on the electrochemical technology to create a sustainable toilet and waste-treatment system. "All of our efforts in biomedicine may go for naught if we don't take care of sanitation."

Hoffmann's toilet system could fit inside the typical portable sanitation unit often found at construction sites and recreation areas, but the comparison ends there. It starts with a photovoltaic (solar) panel, which converts the sun's rays into enough energy to power an electrochemical reactor that Hoffmann designed to break down water and human waste material into hydrogen gas. The hydrogen can then be stored and fed to hydrogen fuel cells to provide a backup energy source for nighttime operation or for use under low-sunlight conditions. Hoffmann also envisions equipping the units with self-cleaning toilets, likewise powered by energy from the sun and the fuel cells.

Hoffmann says that he can build a workable unit for $2,000, but that the cost would come down significantly if the toilets were produced in volume. Following production of a prototype under the Gates Foundation grant, Hoffmann hopes to continue the project to refine the system and reduce its cost. In August 2012, all "Reinvent the Toilet Challenge" grantees will present their prototypes, with winning projects to receive additional funding for product development, industrial production, and commercialization.

"To address the needs of the 2.6 billion people who don't have access to safe sanitation, we not only must reinvent the toilet, we also must find safe, affordable, and sustainable ways to capture, treat, and recycle human waste," says Sylvia Mathews Burwell, president of the Global Development Program at the Bill & Melinda Gates Foundation. "Most importantly, we must work closely with local communities to develop lasting sanitation solutions that will improve their lives."

A member of the Caltech faculty since 1980, Hoffmann was honored in 2010 by the National Taiwan University as a Distinguished Visiting Chair Professor and by the State of Kerala, India, as an Erudite Distinguished Scholar. Earlier this year, Hoffmann was elected to the National Academy of Engineering. He is the organizing chair of the upcoming International Conference on the Photochemical Conversion and Storage of Solar Energy, which will be held on the Caltech campus at the end of July 2012. 

Michael Rogers

Wind-turbine Placement Produces Tenfold Power Increase, Caltech Researchers Say

PASADENA, Calif.—The power output of wind farms can be increased by an order of magnitude—at least tenfold—simply by optimizing the placement of turbines on a given plot of land, say researchers at the California Institute of Technology (Caltech) who have been conducting a unique field study at an experimental two-acre wind farm in northern Los Angeles County.

A paper describing the findings—the results of field tests conducted by John Dabiri, Caltech professor of aeronautics and bioengineering, and colleagues during the summer of 2010—appears in the July issue of the Journal of Renewable and Sustainable Energy.

Dabiri's experimental farm, known as the Field Laboratory for Optimized Wind Energy (FLOWE), houses 24 vertical-axis wind turbines (VAWTs), each 10 meters tall and 1.2 meters wide—turbines that have vertical rotors and look like eggbeaters sticking out of the ground. Half a dozen turbines were used in the 2010 field tests.

Despite improvements in the design of wind turbines that have increased their efficiency, wind farms are rather inefficient, Dabiri notes. Modern farms generally employ horizontal-axis wind turbines (HAWTs)—the standard propeller-like monoliths that you might see slowly turning, all in the same direction, in the hills of Tehachapi Pass, north of Los Angeles.

In such farms, the individual turbines have to be spaced far apart—not just far enough that their giant blades don't touch. Otherwise, the wake generated by one turbine interferes aerodynamically with neighboring turbines, with the result that "much of the wind energy that enters a wind farm is never tapped," says Dabiri. He compares modern farms to "sloppy eaters," wasting not just real estate (and thus lowering the power output of a given plot of land) but much of the energy resources available to them.

Designers compensate for the energy loss by making bigger blades and taller towers, to suck up more of the available wind and at heights where gusts are more powerful. "But this brings other challenges," Dabiri says, such as higher costs, more complex engineering problems, and a larger environmental impact. Bigger, taller turbines, after all, mean more noise, more danger to birds and bats, and—for those who don't find the spinning spires visually appealing—an even larger eyesore.

The solution, says Dabiri, is to focus instead on the design of the wind farm itself, to maximize its energy-collecting efficiency at heights closer to the ground. While winds blow far less energetically at, say, 30 feet off the ground than at 100 feet, "the global wind power available 30 feet off the ground is greater than the world's electricity usage, several times over," he says. That means that enough energy can be obtained with smaller, cheaper, less environmentally intrusive turbines—as long as they're the right turbines, arranged in the right way.

VAWTs are ideal, Dabiri says, because they can be positioned very close to one another. This lets them capture nearly all of the energy of the blowing wind and even wind energy above the farm. Having every turbine turn in the opposite direction of its neighbors, the researchers found, also increases their efficiency, perhaps because the opposing spins decrease the drag on each turbine, allowing it to spin faster (Dabiri got the idea for using this type of constructive interference from his studies of schooling fish).

In the summer 2010 field tests, Dabiri and his colleagues measured the rotational speed and power generated by each of the six turbines when placed in a number of different configurations. One turbine was kept in a fixed position for every configuration; the others were on portable footings that allowed them to be shifted around.

The tests showed that an arrangement in which all of the turbines in an array were spaced four turbine diameters apart (roughly 5 meters, or approximately 16 feet) completely eliminated the aerodynamic interference between neighboring turbines. By comparison, removing the aerodynamic interference between propeller-style wind turbines would require spacing them about 20 diameters apart, which means a distance of more than one mile between the largest wind turbines now in use.

The six VAWTs generated from 21 to 47 watts of power per square meter of land area; a comparably sized HAWT farm generates just 2 to 3 watts per square meter.
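The land-use implication of those power densities can be checked with simple arithmetic. The sketch below uses the figures quoted above; the 1-megawatt target is arbitrary, chosen only to make the comparison concrete:

```python
# Land area needed for 1 megawatt at the power densities quoted above.
TARGET_W = 1_000_000  # 1 MW, an illustrative target output

vawt_density = 21  # W per square meter (low end measured for the VAWT array)
hawt_density = 3   # W per square meter (high end for a typical HAWT farm)

vawt_area = TARGET_W / vawt_density  # square meters for the VAWT array
hawt_area = TARGET_W / hawt_density  # square meters for the HAWT farm

print(f"VAWT array: {vawt_area / 1e4:.1f} hectares")
print(f"HAWT farm:  {hawt_area / 1e4:.1f} hectares")
print(f"The HAWT farm needs {hawt_area / vawt_area:.0f}x more land")
```

Even using the VAWT array's low-end figure, the conventional farm needs roughly seven times the land for the same output.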

"Dabiri's bioinspired engineering research is challenging the status quo in wind-energy technology," says Ares Rosakis, chair of Caltech's Division of Engineering and Applied Science and the Theodore von Kármán Professor of Aeronautics and professor of mechanical engineering. "This exemplifies how Caltech engineers' innovative approaches are tackling our society's greatest problems."

"We're on the right track, but this is by no means 'mission accomplished,'" Dabiri says. "The next steps are to scale up the field demonstration and to improve upon the off-the-shelf wind-turbine designs used for the pilot study." Still, he says, "I think these results are a compelling call for further research on alternatives to the wind-energy status quo."

This summer, Dabiri and colleagues are studying a larger array of 18 VAWTs to follow up last year's field study. Video and images of the field site can be found at

Kathy Svitil

Going with the Flow: Caltech Researchers Find Compaction Bands in Sandstone Are Permeable

Findings could aid in the development of better technologies for hydraulic fracturing and other fluid extraction techniques from the earth

PASADENA, Calif.—When geologists survey an area of land for the potential that gas or petroleum deposits could exist there, they must take into account the composition of rocks that lie below the surface. Take, for instance, sandstone—a sedimentary rock composed mostly of weakly cemented quartz grains. Previous research had suggested that compaction bands—highly compressed, narrow, flat layers within the sandstone—are much less permeable than the host rock and might act as barriers to the flow of oil or gas. 

Now, researchers led by José Andrade, associate professor of civil and mechanical engineering at the California Institute of Technology (Caltech), have analyzed X-ray images of Aztec sandstone and revealed that compaction bands are actually more permeable than earlier models indicated. While they do appear to be less permeable than the surrounding host rock, they do not appear to block the flow of fluids. Their findings were reported in the May 17 issue of Geophysical Research Letters.

The study includes the first observations and calculations showing that fluids can flow through sandstone that has compaction bands. Prior to this study, the permeability of these formations had only been inferred, and only from 2D images. This paper provides the first permeability calculations based on actual rock samples taken directly from the field in the Valley of Fire, Nevada. From the data they collected, the researchers concluded that these formations are not as impermeable as previously believed, and that their ability to trap fluids—such as oil, gas, and CO2—should therefore be measured based on 3D images taken from the field.
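Permeability matters because, through Darcy's law, it sets how fast fluid moves through rock for a given pressure gradient. A minimal sketch of that relationship follows; the permeability contrast, viscosity, and gradient below are illustrative values, not measurements from the study:

```python
# Darcy's law: volumetric flux q = (k / mu) * (dP/dx)
#   k:      permeability [m^2]
#   mu:     fluid viscosity [Pa*s]
#   dP/dx:  pressure gradient [Pa/m]

DARCY = 9.87e-13  # 1 darcy expressed in square meters

def darcy_flux(k_m2, mu_pa_s, dp_dx_pa_per_m):
    """Volumetric flux (m^3 per m^2 per second) through a porous medium."""
    return (k_m2 / mu_pa_s) * dp_dx_pa_per_m

# Hypothetical host rock vs. a compaction band assumed (for illustration)
# to be two orders of magnitude less permeable.
host_k = 1.0 * DARCY
band_k = 0.01 * DARCY

mu_water = 1.0e-3  # Pa*s, roughly water at room temperature
gradient = 1.0e4   # Pa/m, an illustrative pressure gradient

print(darcy_flux(host_k, mu_water, gradient))  # host-rock flux
print(darcy_flux(band_k, mu_water, gradient))  # band transmits fluid, just more slowly
```

The point of the study's finding is captured by the second line: a lower-permeability band slows flow in proportion to the permeability ratio, but does not reduce it to zero.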

"These results are very important for the development of new technologies such as CO2 sequestration—removing CO2 from the atmosphere and depositing it in an underground reservoir—and hydraulic fracturing of rocks for natural gas extraction," says Andrade. "The quantitative connection between the microstructure of the rock and the rock's macroscopic properties, such as hydraulic conductivity, is crucial, as physical processes are controlled by pore-scale features in porous materials. This work is at the forefront of making this quantitative connection."

Compaction bands at multiple scales, ranging from the field scale to the specimen scale to the meso and grain scales. At the field scale, the image shows narrow tabular structures within the host rock in the Valley of Fire. At the grain scale, the images show clear differences in the density of pores (dark spots). This research aims to quantify the impact of grain-scale features on the macroscopic physical properties that control behavior all the way up to the field scale.
Credit: Jose Andrade/Caltech

The research team connected the rocks' 3D micromechanical features—such as grain size distribution, obtained by using microcomputed tomography images of the rocks to build a 3D model—with quantitative macroscopic flow properties measured in field samples at many different scales. Those measurements were the first to characterize, in three dimensions, the ability of compaction bands to transmit fluid. The researchers say the combination of these advanced imaging technologies and multiscale computational models will lead to unprecedented accuracy in measuring crucial physical properties, such as permeability, in rocks and similar materials.

Andrade says the team wants to expand these findings and techniques. "An immediate idea involves the coupling of solid deformation and chemistry," he says. "Accounting for the effect of pressures and their potential to exacerbate chemical reactions between fluids and the solid matrix in porous materials, such as compaction bands, remains a fundamental problem with multiple applications ranging from hydraulic fracturing for geothermal energy and natural gas extraction, to applications in biological tissue for modeling important processes such as osteoporosis. For instance, chemical reactions take place as part of the process utilized in fracturing rocks to enhance the extraction of natural gas."

Other coauthors of the paper, "Connecting microstructural attributes and permeability from 3D tomographic images of in situ shear-enhanced compaction bands using multiscale computations," are WaiChing Sun, visiting scholar at Caltech; John Rudnicki, professor of civil and environmental engineering at Northwestern University; and Peter Eichhubl, research scientist in the Bureau of Economic Geology at the University of Texas at Austin.

The work was partially funded by the Geoscience Research Program of the U.S. Department of Energy.

Katie Neith

Caltech Researchers Build Largest Biochemical Circuit Out of Small Synthetic DNA Molecules

PASADENA, Calif.—In many ways, life is like a computer. An organism's genome is the software that tells the cellular and molecular machinery—the hardware—what to do. But instead of electronic circuitry, life relies on biochemical circuitry—complex networks of reactions and pathways that enable organisms to function. Now, researchers at the California Institute of Technology (Caltech) have built the most complex biochemical circuit ever created from scratch, made with DNA-based devices in a test tube that are analogous to the electronic transistors on a computer chip.

Engineering these circuits allows researchers to explore the principles of information processing in biological systems, and to design biochemical pathways with decision-making capabilities. Such circuits would give biochemists unprecedented control in designing chemical reactions for applications in biological and chemical engineering and industries. For example, in the future a synthetic biochemical circuit could be introduced into a clinical blood sample, detect the levels of a variety of molecules in the sample, and integrate that information into a diagnosis of the pathology.

"We're trying to borrow the ideas that have had huge success in the electronic world, such as abstract representations of computing operations, programming languages, and compilers, and apply them to the biomolecular world," says Lulu Qian, a senior postdoctoral scholar in bioengineering at Caltech and lead author on a paper published in the June 3 issue of the journal Science.

Along with Erik Winfree, Caltech professor of computer science, computation and neural systems, and bioengineering, Qian used a new kind of DNA-based component to build the largest artificial biochemical circuit ever made. Previous lab-made biochemical circuits were limited because they worked less reliably and predictably when scaled to larger sizes, Qian explains. The likely reason behind this limitation is that such circuits need various molecular structures to implement different functions, making large systems more complicated and difficult to debug. The researchers' new approach, however, involves components that are simple, standardized, reliable, and scalable, meaning that even bigger and more complex circuits can be made and still work reliably.

"You can imagine that in the computer industry, you want to make better and better computers," Qian says. "This is our effort to do the same. We want to make better and better biochemical circuits that can do more sophisticated tasks, driving molecular devices to act on their environment."

To build their circuits, the researchers used pieces of DNA to make so-called logic gates—devices that produce on-off output signals in response to on-off input signals. Logic gates are the building blocks of the digital logic circuits that allow a computer to perform the right actions at the right time. In a conventional computer, logic gates are made with electronic transistors, which are wired together to form circuits on a silicon chip. Biochemical circuits, however, consist of molecules floating in a test tube of salt water. Instead of depending on electrons flowing in and out of transistors, DNA-based logic gates receive and produce molecules as signals. The molecular signals travel from one specific gate to another, connecting the circuit as if they were wires.

Winfree and his colleagues first built such a biochemical circuit in 2006. In this work, DNA signal molecules connected several DNA logic gates to each other, forming what's called a multilayered circuit. But this earlier circuit consisted of only 12 different DNA molecules, and the circuit slowed down by a few orders of magnitude when expanded from a single logic gate to a five-layered circuit. In their new design, Qian and Winfree have engineered logic gates that are simpler and more reliable, allowing them to make circuits at least five times larger.

Their new logic gates are made from pieces of either short, single-stranded DNA or partially double-stranded DNA in which single strands stick out like tails from the DNA's double helix. The single-stranded DNA molecules act as input and output signals that interact with the partially double-stranded ones.

"The molecules are just floating around in solution, bumping into each other from time to time," Winfree explains. "Occasionally, an incoming strand with the right DNA sequence will zip itself up to one strand while simultaneously unzipping another, releasing it into solution and allowing it to react with yet another strand." Because the researchers can encode whatever DNA sequence they want, they have full control over this process. "You have this programmable interaction," he says.

Qian and Winfree made several circuits with their approach, but the largest—containing 74 different DNA molecules—can compute the square root of any number up to 15 (technically speaking, any four-bit binary number) and round down the answer to the nearest integer. The researchers then monitor the concentrations of output molecules during the calculations to determine the answer. The calculation takes about 10 hours, so it won't replace your laptop anytime soon. But the purpose of these circuits isn't to compete with electronics; it's to give scientists logical control over biochemical processes.
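The function the 74-molecule circuit computes is small enough to tabulate in full. This snippet shows the target input-output behavior in software; it illustrates what the circuit computes, not the DNA mechanism by which it computes it:

```python
import math

# The circuit's target function: the square root of a 4-bit input,
# rounded down to the nearest integer (i.e., the integer square root).
for n in range(16):
    print(f"{n:2d} ({n:04b}) -> {math.isqrt(n)}")
```

So, for example, inputs 9 through 15 all produce the answer 3, which is why the circuit only needs to distinguish a handful of output cases despite accepting sixteen inputs.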

Their circuits have several novel features, Qian says. Because reactions are never perfect—the molecules don't always bind properly, for instance—there's inherent noise in the system. This means the molecular signals are never entirely on or off, as would be the case for ideal binary logic. But the new logic gates are able to handle this noise by suppressing and amplifying signals—for example, boosting a signal that's at 80 percent, or inhibiting one that's at 10 percent, resulting in signals that are either close to 100 percent present or nonexistent.
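The restoration step described above behaves like a threshold with gain: signals above a crossover point are pushed toward fully on, and signals below it toward fully off. A toy numerical model follows; the gain value and iteration count are made up for illustration and are not taken from the paper:

```python
def restore(signal, gain=3.0, rounds=3):
    """Push an analog signal in [0, 1] toward a clean digital 0 or 1.

    Each round applies a sigmoid-like nonlinearity centered at 0.5,
    sharpening the separation the way cascaded amplifying gates clean
    up noisy molecular signals.
    """
    x = signal
    for _ in range(rounds):
        x = x**gain / (x**gain + (1 - x)**gain)
    return x

print(restore(0.8))  # a noisy "on" ends up very close to 1
print(restore(0.1))  # a noisy "off" ends up very close to 0
```

After a few rounds of this kind of amplification, an 80-percent signal is indistinguishable from fully on, and a 10-percent signal from fully off, which is the behavior the paragraph above describes.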

All the logic gates have identical structures with different sequences. As a result, they can be standardized, so that the same types of components can be wired together to make any circuit you want. What's more, Qian says, you don't have to know anything about the molecular machinery behind the circuit to make one. If you want a circuit that, say, automatically diagnoses a disease, you just submit an abstract representation of the logic functions in your design to a compiler that the researchers provide online, which will then translate the design into the DNA components needed to build the circuit. In the future, an outside manufacturer can then make those parts and give you the circuit, ready to go.

The circuit components are also tunable. By adjusting the concentrations of the types of DNA, the researchers can change the functions of the logic gates. The circuits are versatile, featuring plug-and-play components that can be easily reconfigured to rewire the circuit. The simplicity of the logic gates also allows for more efficient techniques that synthesize them in parallel.  

"Like Moore's Law for silicon electronics, which says that computers are growing exponentially smaller and more powerful every year, molecular systems developed with DNA nanotechnology have been doubling in size roughly every three years," Winfree says. Qian adds, "The dream is that synthetic biochemical circuits will one day achieve complexities comparable to life itself."

The research described in the Science paper, "Scaling up digital circuit computation with DNA strand displacement cascades," is supported by a National Science Foundation grant to the Molecular Programming Project and by the Human Frontier Science Program.

View the researchers' video that explains this work.

Marcus Woo

Caltech Researchers Develop High-Performance Bulk Thermoelectrics

PASADENA, Calif.—Roughly 10 billion miles beyond Neptune's orbit, and well past their 30th birthdays, Voyagers 1 and 2 continue their lonely trek into the Milky Way. And they're still functioning—running on power gleaned not from the pinprick sun, but from solid-state devices called thermoelectric generators, which convert heat energy into electricity.

The same technology can be applied here on Earth to recover waste heat when fuel is burned. "Cogeneration," or the production of electricity as a by-product of a heat-generating process, already provides as much as 10 percent of Europe's electrical power. Systems for this purpose typically operate best at very high temperatures, are costly to build and operate, and suffer from substantial inefficiencies. That's why they can be found in spacecraft and power plants but not, say, in cars.

But recently, scientists have concocted a recipe for a thermoelectric material that might be able to operate off nothing more than the heat of a car's exhaust. In a paper published in Nature this month, G. Jeffrey Snyder, faculty associate in applied physics and materials science at the California Institute of Technology (Caltech), and his colleagues reported on a compound that shows high efficiency at less extreme temperatures.

The heart of a thermoelectric generator is a flat array of semiconductor material. In operation, heat from an external source is directed against one side of the array, while the other side is kept cool. Like air molecules in a hot oven, the material within the array flows along the induced temperature gradient: away from the hot side and toward the cool side. But in the crystalline lattice of a semiconductor, there's only one "material" that isn't rigidly fixed: the charge carriers. Consequently, the only things that move in response to the thermal nonequilibrium are these charge carriers, and the result is an electrical flow. Build up a circuit by laying out small semiconductor bricks side by side and wiring them together, and you've got a steady electric current.
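In circuit terms, each semiconductor brick develops a voltage proportional to the temperature difference across it (the Seebeck effect), and bricks are wired in series to build up a usable voltage. The sketch below uses typical orders of magnitude, not figures from the article:

```python
# Open-circuit voltage of a thermoelectric module: V = n * S * dT
#   n:  number of semiconductor couples wired in series
#   S:  Seebeck coefficient per couple [V/K]; ~200 microvolts/K is a
#       typical ballpark for good thermoelectric semiconductors
#   dT: temperature difference between hot and cold sides [K]

def module_voltage(n_couples, seebeck_v_per_k, delta_t_k):
    return n_couples * seebeck_v_per_k * delta_t_k

# Illustrative module: 127 couples, 200 uV/K each, 200 K across the array.
v = module_voltage(n_couples=127, seebeck_v_per_k=200e-6, delta_t_k=200)
print(f"{v:.2f} V open-circuit")
```

Because a single couple produces only millivolts, practical generators stack many couples in series, which is why the array is described as many small bricks wired together.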

The lead telluride (PbTe) family of compounds is commonly used in these applications, but regardless of the underlying technology, scientists designing new thermoelectric materials are continually constrained by structural issues at the most microscopic levels. Those moving charge carriers can run afoul of many complex effects, including electrical interactions, heat-induced vibrations (called phonons), and scattering caused by impurities and imperfections within the crystal structure.

The Caltech researchers began with lead telluride and then added a fractional amount of the element selenium, a concoction first proposed by Soviet scientists A. F. Ioffe and A. V. Ioffe in the 1950s. Because any semiconductor's properties are highly sensitive to the exact type and placement of each of its atoms, this small alteration in the formula produces important changes in the crystal's electronic structure.

Specifically, certain regions called "degenerate valleys" arrange themselves in such a way as to provide a more favorable pathway for charge carriers to follow, a trail of equal-energy stepping stones through the material. In addition, adding the selenium creates multiple regions called point defects. "They're like air bubbles trapped in window glass," says Snyder, "and they tend to scatter vibrations. The result is that heat dissipates more slowly through the material."

That dissipation is important, because in order for a material to be efficient, charge carriers should flow much more easily than heat. In other words, electrical resistance should be low, to maximize current, while thermal resistance should be high, to maintain the temperature gradient that causes the charge carriers to flow in the first place. "It's a delicate tradeoff," says Snyder. "Something like trying to blow ice cream through a straw. If the straw's very narrow, the ice cream moves slowly. But if you widen it to help the ice cream move faster, you'll find that you also run out of air faster."

To make sense of these tradeoffs, scientists speak of a quantity known as the "thermoelectric figure of merit," a dimensionless value that can be used to compare the relative efficiency of materials at specific temperatures. The temperature at which peak efficiency is seen depends on the material: each of the Voyager twins, for instance, produces enough juice to power a medium-sized refrigerator, but to do so it must draw heat from decaying radioisotopes. "These new materials are roughly twice as effective as anything seen before, and they work well in a temperature range of around 400 to 900 degrees Kelvin," says Snyder. "Waste heat recovery from a car's engine falls well within that range."
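The figure of merit mentioned above is conventionally written zT = S²σT/κ, combining the Seebeck coefficient S, electrical conductivity σ, absolute temperature T, and thermal conductivity κ; it rewards materials that conduct electricity well but heat poorly, exactly the tradeoff Snyder describes. The numbers below are illustrative ballpark values, not the paper's measurements:

```python
def figure_of_merit(seebeck_v_per_k, elec_cond_s_per_m,
                    thermal_cond_w_per_mk, temp_k):
    """Dimensionless thermoelectric figure of merit zT = S^2 * sigma * T / kappa."""
    return (seebeck_v_per_k**2 * elec_cond_s_per_m * temp_k) / thermal_cond_w_per_mk

# Illustrative values in the general ballpark of telluride thermoelectrics:
zt = figure_of_merit(
    seebeck_v_per_k=250e-6,     # 250 microvolts per kelvin
    elec_cond_s_per_m=5.0e4,    # electrical conductivity, S/m
    thermal_cond_w_per_mk=1.0,  # thermal conductivity, W/(m*K)
    temp_k=700,                 # within the 400-900 K range quoted above
)
print(f"zT = {zt:.2f}")
```

The formula makes the "ice cream through a straw" tradeoff explicit: raising electrical conductivity raises zT, but in most materials it drags thermal conductivity up with it, which pushes zT back down.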

In other words, the heat escaping out your car's tailpipe could be used to help power the vehicle's electrical components—and not just the radio, wipers, and headlights. "You'll see applications wherever there's a solid-state advantage," Snyder predicts. "One example is the charging system. The electricity to keep your car's battery charged is generated by the alternator, a mechanical device driven by a rubber belt powered by the crankshaft. You've got friction, slippage, strain, internal resistance, wear and tear, and weight, in addition to the mechanical energy extracted to make the electricity. Just replacing that one subsystem with a thermoelectric solution could instantly improve a car's fuel efficiency by 10 percent."

As more automotive systems continue their gradual migration from mechanical or hydraulic to electrical—power steering and brakes, for instance, can both be made to run on electricity—the vehicle of the future will sport more than a passing commonality with the spacecraft of the 1970s. "The future of automobiles is electric," says Snyder. "What we're doing now is looking at how to make it all more efficient."

Snyder's coauthors on the paper, "Convergence of electronic bands for high performance bulk thermoelectrics," are Yanzhong Pei, Aaron LaLonde, and Heng Wang of Caltech; and Xiaoya Shi and Lidong Chen of the Shanghai Institute of Ceramics, Chinese Academy of Sciences. The work was supported by NASA-JPL, the DARPA Nano Materials program, and the Chinese Academy of Sciences.

Dave Zobel

Caltech Research Helps Paraplegic Man Stand and Move Legs Voluntarily

PASADENA, Calif.—A team of researchers from the University of California, Los Angeles (UCLA), the California Institute of Technology (Caltech), and the University of Louisville has used a stimulating electrode array to help a paralyzed man stand, step on a treadmill with assistance, and, over time, regain voluntary movement of his limbs. The electrical signals provided by the array, the researchers have found, stimulate the spinal cord's own neural network so that it can use the sensory input derived from the legs to direct muscle and joint movements.

Rather than bypassing the man's nervous system to directly stimulate the leg muscles, this approach takes advantage of the inherent control circuitry in the lower spinal cord (below the level of the injury) to control standing and stepping motions.

The study is published today in the British medical journal The Lancet.

More than 5.6 million Americans live with some form of paralysis; of these, 1.3 million have had spinal-cord injuries, often resulting in complete paralysis of the lower extremities, along with loss of bladder and bowel control, sexual response, and other autonomic functions.

The work originated with a series of animal experiments beginning in the 1980s by study coauthors V. Reggie Edgerton and Yury Gerasimenko of the David Geffen School of Medicine at UCLA that ultimately showed that animals with spinal-cord injuries could stand, balance, bear weight, and take coordinated steps while being stimulated epidurally—that is, in the space above the dura, the outermost of the three membranes that cover the brain and spinal cord.

Starting eight years ago, Joel Burdick, a professor of mechanical engineering and bioengineering at Caltech, teamed with the Edgerton lab to study how robotically guided physical therapy and pharmacology could be coupled to better recover locomotion in animals with spinal-cord injuries.

Building upon these studies and the earlier work of Edgerton and Gerasimenko, Burdick and Yu-Chong Tai, a Caltech professor of electrical engineering and mechanical engineering, introduced the concept of high-density epidural spinal stimulation, which uses sheet-like arrays of numerous electrodes to stimulate neurons. The goal of the system, Burdick says, "is to stimulate the native standing and stepping control circuitry in the lower spinal cord so as to coordinate sensory-motor activity and partially replace the missing signals from above"—that is, from the brain—"and shout 'get going!' to the nerves."

Electrical leads implanted in the paraplegic patient.
Credit: Medtronic, Inc.

To test this concept, which was first explored in animal models, the team used a commercially available electrode array, which is normally used to treat back pain. While this commercial array does not have all of the capabilities of the arrays tested so far in animals, it allowed the team to test the viability of high-density epidural stimulation in humans. The results, Burdick says, "far exceeded" the researchers' expectations.

The subject in the new work is a 25-year-old former athlete who was completely paralyzed below the chest in a hit-and-run accident in July 2006. He suffered a complete motor injury at the C7/T1 level of the spinal cord, but retained some sensation in his legs.

Before being implanted with the epidural stimulating array, the patient underwent 170 locomotor training sessions over a period of more than two years at the Frazier Rehab Institute. In locomotor training, a rehabilitative technique used on partially paralyzed patients, the body of the patient is suspended in a harness over a moving treadmill while trained therapists help move the legs in a repetitive stepping motion.

The training had essentially no effect on this patient, confirming the severity of his spinal injury; it also established a "baseline" against which the subsequent efficacy of the electrical stimulation could be measured.

After implantation with the device, however, the patient could—while receiving electrical stimulation, and after a few weeks of locomotor training—push himself into a standing position and bear weight on his own. He can now stand, bearing his own weight, for 20 minutes at a time. With the aid of a harness support and some therapist assistance, he can make repeated stepping motions on a treadmill. With repeated daily training and electrical stimulation, the patient regained the ability to move his toes, ankles, knees, and hips on command.

The patient has no voluntary control over his limbs when the stimulation is turned off.

In addition, over time he experienced improvements in several types of autonomic function, such as bladder and bowel control, as well as temperature regulation—a "surprise" outcome, Burdick says, that, if replicated in further studies, could substantially improve the lives of patients with spinal-cord injuries.

Credit: The Lancet

These autonomic functions began to return before there was any sign of voluntary movement, which was first seen in the patient about seven months after he began receiving epidural stimulation.

Adds Burdick, "This may help bladder and bowel function even in patients who don't have the strength to undergo rigorous physical training like this patient"—who was an athlete and was in comparatively excellent physical condition before his injury.

The scientists aren't yet fully sure how these functions were regained—or, indeed, how voluntary control was restored by the procedure. "Somehow, stimulation by the electrodes may have reactivated connections that were dormant or stimulated the growth of new connections," Burdick says. Almost certainly, reorganization of the neural pathways occurred below, and perhaps also above, the site of injury.

Notably, the patient had some sensation in his lower extremities after his injury, which means that the spinal cord was not completely severed; this may have affected the extent of his recovery.

The Food and Drug Administration (FDA) gave the research team approval to test five spinal-cord injury patients; the next patient will be matched with the first, in terms of age, injury, and physical ability, to see if the findings can be replicated. In subsequent trials, patients who have no sensation will be implanted with the device, to see if this influences the outcome.

"This is a significant breakthrough," says Susan Harkema of the University of Louisville, the lead author of the paper in The Lancet. "It opens up a huge potential to improve the daily functioning of individuals."

"While these results are obviously encouraging, we need to be cautious, and there is much work to be done," says Edgerton.

One of the biggest obstacles is that the electrode array implanted in the human patient is FDA-approved for back pain only. The use of the FDA-approved device was meant "as a test to see if our concepts would work, providing us with additional ammunition to motivate the development of the arrays used in animal studies," says Burdick. The current FDA-approved arrays, he adds, have many limitations, "hence, the further development of the arrays that have currently only been tested in animals should provide even better human results in the future."

Using a combination of experimentation, computational models of the array and spinal cord, and machine-learning algorithms, Burdick and his colleagues are now trying to optimize the stimulation pattern to achieve the best effects, and to improve the design of the electrode array. Further advances in the technology should lead to better control of the stepping and standing processes.

In addition, he says, "our team is looking at other ways to apply the technology. We may move the array up higher on the spinal column to see if it could affect arms and hands, as well as the legs."

Burdick and his UCLA and University of Louisville colleagues hope that one day, some individuals with complete spinal-cord injuries will be able to use a portable stimulation unit and, with the assistance of a walker, stand independently, maintain balance, and perform some effective stepping. In addition, says Burdick, "our team believes that the protocol might prove useful in the treatment of stroke, Parkinson's, and other disorders affecting motor function."

The research in the paper, "Epidural stimulation of the lumbosacral spinal cord enables voluntary movement, standing, and assisted stepping in a paraplegic human," was funded by the National Institutes of Health with additional support provided by the Christopher and Dana Reeve Foundation. 

Kathy Svitil

Experiments Settle Long-Standing Debate about Mysterious Array Formations in Nanofilms

PASADENA, Calif.—Scientists at the California Institute of Technology (Caltech) have conducted experiments confirming which of three possible mechanisms is responsible for the spontaneous formation of three-dimensional (3-D) pillar arrays in nanofilms (polymer films that are billionths of a meter thick). These protrusions appear suddenly when the surface of a molten nanofilm is exposed to an extreme temperature gradient and self-organize into hexagonal, lamellar, square, or spiral patterns.

This unconventional means of patterning films is being developed by Sandra Troian, professor of applied physics, aeronautics, and mechanical engineering at Caltech, who uses modulation of surface forces to shape and mold liquefiable nanofilms into 3-D forms. "My ultimate goal is to develop a suite of 3-D lithographic techniques based on remote, digital modulation of thermal, electrical, and magnetic surface forces," Troian says. Confirmation of the correct mechanism has allowed her to deduce the maximum resolution or minimum feature size ultimately possible with these patterning techniques.

In Troian's method, arbitrary shapes are first sculpted from a molten film by surface forces and then instantly solidified in situ by cooling the sample. "These techniques are ideally suited for fabrication of optical or photonic components that exhibit ultrasmooth interfaces," she explains. The process also introduces some interesting new physics that only become evident at the nanoscale. "Even in the land of Lilliputians, these forces are puny at best—but at the nanoscale or smaller still, they rule the world," she says.

The experiments leading to this discovery were highlighted on the cover of the April 29 issue of the journal Physical Review Letters.

The experiments, designed to isolate the physics behind the process, are challenging at best. The setup requires two smooth, flat substrates, which are separated only by a few hundred nanometers, to remain perfectly parallel over distances of a centimeter or more.

Such an experimental setup presents several difficulties, including that "no substrate this size is truly flat," Troian says, "and even the world's smallest thermocouple is too large to fit inside the gap." In addition, she says, "the thermal gradient in the gap can exceed values of a million degrees per centimeter, so the setup undergoes significant expansion, distortion, and contraction during a typical run."
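The scale of that gradient can be sanity-checked with a quick calculation. This is a minimal sketch, and the 300-nanometer gap is an assumed representative value within the "few hundred nanometers" range quoted above, not a figure from the paper:

```python
# Temperature drop across the nanoscale gap implied by the quoted gradient.
gradient_c_per_cm = 1e6                      # "a million degrees per centimeter"
gradient_c_per_m = gradient_c_per_cm * 100   # convert: 100 cm per meter
gap_m = 300e-9                               # assumed gap: 300 nm

delta_t_gap = gradient_c_per_m * gap_m
print(round(delta_t_gap, 6))  # -> 30.0 (degrees C across the gap)
```

In other words, a temperature difference of only a few tens of degrees between substrates separated by a few hundred nanometers is enough to produce a gradient of a million degrees per centimeter.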

Transition between 3-D nanopillar arrays and striped structures in a polystyrene nanofilm subject to a thermal gradient of 10⁵ degrees Celsius/cm.
Credit: Courtesy of E. McLeod and S. M. Troian, {LIS2T} lab/Caltech

In fact, all previous studies confronted similar challenges—leading to inaccurate estimates of the thermal gradient and the inability to view the formation and growth of the structures, among other problems. "To complicate matters," Troian says, "all of the previous data in the literature were obtained at very late stages of growth, far beyond the regime of validity of the theoretical models."

The Caltech experiments overcame these challenges through in situ measurements. The researchers replaced the cold top substrate with a transparent window fashioned from a single sapphire crystal, which allowed them to view the developing formations directly. They also used white-light interferometry to help maintain parallelism during each run and to record the shape and growth rate of the emerging structures. Finite-element simulations provided much more accurate estimates of the thermal gradient in the tiny gap.

"When all is said and done, our results indicate that this formation process is not driven by electrostatic attraction between the film surface and the nearby substrate—similar to what happens when you run a comb through your hair—or by pressure fluctuations inside the film from reflections of acoustic phonons—the collective excitations of molecules—as once believed," Troian explains. "The data simply don't fit these models, no matter how hard you try," she says. At first, the data did not seem to fit a third model either—one based on film structuring by thermocapillary flow, the flow from warmer to cooler regions that accompanies surface temperature variations.

Troian proposed the thermocapillary model several years ago. Calculations for this "cold-seeking instability" suggest that nanofilms are always unstable to the formation of 3-D pillar arrays, regardless of the size of the thermal gradient. Tiny protrusions in the film experience a slightly cooler temperature than the surrounding liquid because of their proximity to the cold target. The surface tension at those tips is therefore greater than that of the surrounding film. This imbalance generates a very strong surface force that "pulls" fluid up and "into the third dimension," she says. The process easily gives rise to large-area arrays of dimples, ridges, pillars, and other shapes. A nonlinear version of the model suggests how cold pins can also be used to form more regular arrays.

Scanning electron micrograph of solidified protrusions in a 98 nm polystyrene film guided by a remote hexagonal array of cold pins.
Credit: Courtesy of E. McLeod and S. M. Troian, {LIS2T} lab/Caltech.

Troian was initially disappointed that the measurements did not match the theoretical predictions. For example, the prediction for the spacing between protrusions was off by a factor of two or more. "It occurred to me that certain properties of the nanofilm to be input into the model might be quite different than those literature values obtained from macroscopic samples," she notes.

She enlisted the advice of mechanical engineer Ken Goodson at Stanford, an expert on thermal transport in nanofilms, who confirmed that he'd also noticed a significant enhancement in the heat-transfer capability of certain nanofilms. Further investigation revealed that other groups around the world have begun reporting similar enhancement in optical and other characteristics of nanofilms. "And voila! … by adjusting one key parameter," Troian says, "we obtained perfect agreement between experiment and theory. How cool is that!"

Not content to stop with these findings, Troian wants to launch a separate study to find the source of these enhanced properties in nanofilms. "Now that our horizon is clear, I guarantee we won't sit still until we can fabricate some unusual components whose shape and optical response can only be formed by such a process."

The paper, "Experimental Verification of the Formation Mechanism for Pillar Arrays in Nanofilms Subject to Large Thermal Gradients," was coauthored by Euan McLeod and Yu Liu of Caltech. The work was funded by the National Science Foundation.

Kathy Svitil

Caltech Faculty Receive Early Career Grants

Four Caltech faculty members are among the 65 scientists from across the nation selected to receive five-year Early Career Research Awards from the U.S. Department of Energy (DOE). The grant winners, who were selected from a pool of about 1,150 applicants, are:

  • Guillaume Blanquart, assistant professor of mechanical engineering, who will develop a chemical model of the inner structure and of the formation of soot particles—black carbon particles formed during the incomplete combustion of hydrocarbon fuels that can cause health problems and adverse effects on the environment—that will aid the development of models that predict emissions from car and truck engines, aircraft engines, fires, and more.

  • Julia R. Greer, assistant professor of materials science and mechanics, who will use nanomechanical experimental and computational tools to isolate and understand the role of specific tailored interfaces and deformation mechanisms on the degradation of properties of materials subjected to helium irradiation. Elucidating these mechanisms will provide insight into requirements for advanced materials for current and next-generation nuclear reactors.

  • Chris Hirata, assistant professor of astrophysics, who will be conducting theoretical studies of cosmological observables—such as galaxy clustering—that are being used to probe dark energy and dark matter and to search for gravitational waves from inflation.

  • Ryan Patterson, assistant professor of physics, who will develop new techniques for readout, calibration, and particle identification for the NOvA long-baseline neutrino experiment at Fermilab, which will investigate neutrino oscillations—the conversion of neutrinos of one type (or "flavor") into another.

The Early Career Research Program, which is funded by the DOE's Office of Science, is "designed to bolster the nation's scientific workforce by providing support to exceptional researchers during the crucial early career years, when many scientists do their most formative work," according to the DOE announcement, and is intended to encourage scientists to focus on research areas that are considered high priorities for the Department of Energy.

To be eligible for an award, a researcher must have received a doctorate within the past 10 years and be an untenured, tenure-track assistant or associate professor at a U.S. academic institution or a full-time employee at a DOE national laboratory.

Kathy Svitil

Strong, Tough, and Now Cheap: Caltech Researchers Have New Way to Process Metallic Glass

PASADENA, Calif.—Stronger than steel or titanium—and just as tough—metallic glass is an ideal material for everything from cell-phone cases to aircraft parts. Now, researchers at the California Institute of Technology (Caltech) have developed a new technique that allows them to make metallic-glass parts utilizing the same inexpensive processes used to produce plastic parts. With this new method, they can heat a piece of metallic glass at a rate of a million degrees per second and then mold it into any shape in just a few milliseconds.

"We've redefined how you process metals," says William Johnson, the Ruben F. and Donna Mettler Professor of Engineering and Applied Science. "This is a paradigm shift in metallurgy." Johnson leads a team of researchers who are publishing their findings in the May 13 issue of the journal Science.

"We've taken the economics of plastic manufacturing and applied it to a metal with superior engineering properties," he says. "We end up with inexpensive, high-performance, precision net-shape parts made in the same way plastic parts are made—but made of a metal that's 20 times stronger and stiffer than plastic." A net-shape part is a part that has acquired its final shape.

Metallic glasses, which were first discovered at Caltech in 1960 and later produced in bulk form by Johnson's group in the early 1990s, are not transparent like window glass. Rather, they are metals with the disordered atomic structure of glass. While common glasses are generally strong, hard, and resistant to permanent deformation, they tend to easily crack or shatter. Metals tend to be tough materials that resist cracking and brittle fracture—but they have limited strength. Metallic glasses, Johnson says, have an exceptional combination of both the strength associated with glass and the toughness of metals.

A piece of metallic glass is heated and squished in just 10 milliseconds.
Credit: Georg Kaltenboeck

To make useful parts from a metallic glass, you need to heat the material above its glass-transition temperature, about 500–600 degrees C. The material softens and becomes a thick liquid that can be molded and shaped. In this liquid state, however, the atoms tend to spontaneously arrange themselves into crystals. Solid glass forms when the molten material refreezes before its atoms have had enough time to crystallize. By avoiding crystallization, the material keeps its amorphous structure, which is what makes it strong.

Common window glass and certain plastics take from minutes to hours—or longer—to crystallize in this molten state, providing ample time for them to be molded, shaped, cooled, and solidified. Metallic glasses, however, crystallize almost immediately once they are heated to the thick-liquid state. Avoiding this rapid crystallization is the main challenge in making metallic-glass parts.

Previously, metallic-glass parts were produced by heating the metal alloy above the melting point of the crystalline phase—typically over 1,000 degrees C—and then casting the molten metal into a steel mold, where it cooled before crystallizing. But problems arise because steel molds are usually designed to withstand temperatures of only around 600 degrees C. As a result, the molds have to be replaced frequently, making the process rather expensive. Furthermore, at 1,000 degrees C the liquid is so fluid that it tends to splash and break up, creating parts with flow defects.

If the solid metallic glass is heated to about 500–600 degrees C, it reaches the same fluidity that liquid plastic needs to have when it's processed. But it takes time for heat to spread through a metallic glass, and by the time the material reaches the proper temperature throughout, it has already crystallized.

So the researchers tried a new strategy: to heat and process the metallic glass extremely quickly. Johnson's team discovered that, if they were fast enough, they could heat the metallic glass to a liquid state that's fluid enough to be injected into a mold and allowed to freeze—all before it could crystallize.

To heat the material uniformly and rapidly, they used a technique called ohmic heating. The researchers fired a short and intense pulse of electrical current to deliver an energy surpassing 1,000 joules in about 1 millisecond—about one megawatt of power—to heat a small rod of the metallic glass.

A piece of metallic glass being heated and squished in milliseconds, as seen in these infrared snapshots.
Credit: Joseph P. Schramm

The current pulse heats the entire rod—which was 4 millimeters in diameter and 2 centimeters long—at a rate of a million degrees per second. "We uniformly heat the glass at least a thousand times faster than anyone has before," Johnson says. Taking only about half a millisecond to reach the right temperature, the now-softened glass could be injected into a mold and cooled—all in milliseconds. To demonstrate the new method, the researchers heated a metallic-glass rod to about 550 degrees C and then shaped it into a toroid in less than 40 milliseconds. Despite being formed in open air, the molded toroid is free of flow defects and oxidation.
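The pulse figures quoted above are internally consistent, as a quick back-of-the-envelope check shows. This is a minimal sketch; the 25-degree room-temperature starting point is an assumption for illustration, not a figure from the paper:

```python
import math

# Power delivered by the discharge pulse: ~1,000 joules in ~1 millisecond.
energy_j = 1000.0
pulse_s = 1e-3
power_w = energy_j / pulse_s
print(power_w)  # -> 1000000.0, i.e. about one megawatt

# Implied heating rate: ~525-degree rise (assumed 25 C start -> ~550 C) in ~0.5 ms.
delta_t_c = 550.0 - 25.0
heat_time_s = 0.5e-3
rate_c_per_s = delta_t_c / heat_time_s
print(rate_c_per_s)  # -> 1050000.0 C/s, about a million degrees per second

# Volume of the rod (4 mm diameter, 2 cm long): very little material is heated.
volume_m3 = math.pi * (2e-3) ** 2 * 2e-2
print(f"{volume_m3:.2e}")  # -> 2.51e-07 cubic meters, roughly a quarter milliliter
```

The megawatt-scale pulse thus matches the million-degrees-per-second heating rate reported for the rod.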

In addition, this process allows researchers to study these materials in their molten states, which was never before possible. For example, by heating the material before it can crystallize, researchers can examine the crystallization process itself on millisecond time scales. The new technique, called rapid discharge forming, has been patented and is being developed for commercialization, Johnson says. In 2010, he and his colleagues started a company, Glassimetal Technology, to commercialize novel metallic-glass alloys using this kind of plastic-forming technology.

The other authors on the Science paper, "Beating crystallization in glass-forming metals by millisecond heating and processing," are Caltech's Georg Kaltenboeck, Marios D. Demetriou, Joseph P. Schramm, Xiao Liu, Konrad Samwer (a visiting associate from the University of Göttingen, Germany), C. Paul Kim, and Douglas C. Hofmann. This research benefited from support by the II-VI Foundation.

Marcus Woo

Engineering Design Competition: "Extreme Recycling"

Congratulations to Chris Hallacy, Brad Saund, and Janet Chen for their victory March 8 in the 26th annual ME 72 engineering design competition. This year's theme: "Extreme Recycling." The mission: design, build, and deploy two vehicles to traverse difficult terrain (water, sand, rocks, and wood chips—one type of terrain in each of four 6' x 10' boxes) and collect plastic water bottles, aluminum cans, and steel cans. During each five-minute round, the bots were to transport the recyclables and drop them (ideally, sorted by type, and—in the case of aluminum cans—crushed to less than half of their vertical height) into recycling bins, before scurrying back to the starting zone.

Twenty weeks earlier, at the start of ME 72—Caltech's undergraduate engineering design laboratory class—students were given a budget (ultimately $1200, of which up to $800 could be spent in the Caltech stockrooms) to purchase whatever they needed to build their bots. The ultimate designs followed a few basic themes: bots with scoopers and grippers, to grab the bottles and cans, and bots with baskets, to haul the loot. Other design features included ramps to wedge under opponent bots and trip them up.

Hallacy, Saund, and Chen—a.k.a. team "BRB"—bested five other teams without dropping a heat during the double-elimination contest. In the final round, against team Wall-E—headed by Keir Gonyea, Chris Pombrol, Allen Chen, and Gerardo Morabito—BRB scored first, delivering a plastic bottle to the recycling bin, and hung on (including by literally hanging on to one of the Wall-E bots) for the win.

Kathy Svitil