Nano Insights Could Lead to Improved Nuclear Reactors

Caltech researchers examine self-healing abilities of some materials

PASADENA, Calif.—In order to build the next generation of nuclear reactors, materials scientists are trying to unlock the secrets of certain materials that are radiation-damage tolerant. Now researchers at the California Institute of Technology (Caltech) have brought new understanding to one of those secrets—how the interfaces between two carefully selected metals can absorb, or heal, radiation damage.

"When it comes to selecting proper structural materials for advanced nuclear reactors, it is crucial that we understand radiation damage and its effects on materials properties. And we need to study these effects on isolated small-scale features," says Julia R. Greer, an assistant professor of materials science and mechanics at Caltech. With that in mind, Greer and colleagues from Caltech, Sandia National Laboratories, UC Berkeley, and Los Alamos National Laboratory have taken a closer look at radiation-induced damage, zooming in all the way to the nanoscale—where lengths are measured in billionths of meters. Their results appear online in the journals Advanced Functional Materials and Small.

During nuclear irradiation, energetic particles like neutrons and ions displace atoms from their regular lattice sites within the metals that make up a reactor, setting off cascades of collisions that ultimately damage materials such as steel. One of the byproducts of this process is the formation of helium bubbles. Since helium does not dissolve within solid materials, it forms pressurized gas bubbles that can coalesce, making the material porous, brittle, and therefore susceptible to breakage.  

Some nano-engineered materials are able to resist such damage and may, for example, prevent helium bubbles from coalescing into larger voids. For instance, some metallic nanolaminates—materials made up of extremely thin alternating layers of different metals—are able to absorb various types of radiation-induced defects at the interfaces between the layers because of the mismatch that exists between their crystal structures.

"People have an idea, from computations, of what the interfaces as a whole may be doing, and they know from experiments what their combined global effect is. What they don't know is what exactly one individual interface is doing and what specific role the nanoscale dimensions play," says Greer. "And that's what we were able to investigate."

Peri Landau and Guo Qiang, both postdoctoral scholars in Greer's lab at the time of this study, used a chemical procedure called electroplating to grow either miniature pillars of pure copper or pillars containing exactly one interface, in which an iron crystal sits atop a copper crystal. Then, working with partners at Sandia and Los Alamos to replicate the effect of helium irradiation, they implanted those nanopillars with helium ions, both directly at the interface and, in separate experiments, throughout the pillar.

The researchers then used a one-of-a-kind nanomechanical testing instrument called the SEMentor, located in the subbasement of the W. M. Keck Engineering Laboratories building at Caltech, to both compress the tiny pillars and pull on them. These tests revealed the pillars' mechanical properties, such as how much their length changed under a given stress and where they broke.

"These experiments are very, very delicate," Landau says. "If you think about it, each one of the pillars—which are only 100 nanometers wide and about 700 nanometers long—is a thousand times thinner than a single strand of hair. We can only see them with high-resolution microscopes."

The team found that once they inserted a small amount of helium into a pillar at the interface between the iron and copper crystals, the pillar's strength increased by more than 60 percent compared to a pillar without helium. That much was expected, Landau explains, because "irradiation hardening is a well-known phenomenon in bulk materials." However, she notes, such hardening is typically linked with embrittlement, "and we do not want materials to be brittle."
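
For a rough sense of the quantities such nanomechanical tests yield, the sketch below computes engineering stress and strain for a cylindrical pillar of the dimensions quoted here and applies a hypothetical 60 percent strengthening. The applied force, the elongation, and the circular cross section are assumptions made for illustration, not values from the papers.

```python
# Illustrative back-of-the-envelope calculation (not data from the papers):
# engineering stress and strain for a cylindrical nanopillar under uniaxial load,
# plus the effect of a hypothetical 60 percent strength increase after He implantation.
import math

diameter_m = 100e-9          # pillar width quoted in the article (~100 nm)
length_m = 700e-9            # pillar length (~700 nm)
area_m2 = math.pi * (diameter_m / 2) ** 2   # cross-sectional area, assuming a circular pillar

force_N = 8e-6               # hypothetical applied load of 8 micronewtons
delta_length_m = 7e-9        # hypothetical elongation of 7 nm

stress_Pa = force_N / area_m2          # engineering stress = F / A
strain = delta_length_m / length_m     # engineering strain = dL / L

strength_unirradiated_Pa = stress_Pa
strength_he_implanted_Pa = 1.6 * strength_unirradiated_Pa   # "more than 60 percent" stronger

print(f"stress  ~ {stress_Pa / 1e9:.2f} GPa")
print(f"strain  ~ {strain:.3f} ({100 * strain:.1f} %)")
print(f"He-implanted strength ~ {strength_he_implanted_Pa / 1e9:.2f} GPa")
```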

Surprisingly, the researchers found that in their nanopillars, the increase in strength did not come along with embrittlement, either when the helium was implanted at the interface, or when it was distributed more broadly. Indeed, Greer and her team found, the material was able to maintain its ductility because the interface itself was able to deform gradually under stress.

This means that in a metallic nanolaminate material, small helium bubbles are able to migrate to an interface, which is never more than a few tens of nanometers away, essentially healing the material. "What we're showing is that it doesn't matter if the bubble is within the interface or uniformly distributed—the pillars don't ever fail in a catastrophic, abrupt fashion," Greer says. She notes that the implanted helium bubbles—which are described in the Advanced Functional Materials paper—were one to two nanometers in diameter; in future studies, the group will repeat the experiment with larger bubbles at higher temperatures in order to represent additional conditions related to radiation damage.

In the Small paper, the researchers showed that even nanopillars made entirely of copper, with no layering of metals, exhibited irradiation-induced hardening. That stands in stark contrast to the results from previous work by other researchers on proton-irradiated copper nanopillars, which exhibited the same strengths as those that had not been irradiated. Greer says that this points to the need to evaluate different types of irradiation-induced defects at the nanoscale, because they may not all have the same effects on materials.

While no one is likely to be building nuclear reactors out of nanopillars anytime soon, Greer argues that it is important to understand how individual interfaces and nanostructures behave. "This work is basically teaching us what gives materials the ability to heal radiation damage—what tolerances they have and how to design them," she says. That information can be incorporated into future models of material behavior that can help with the design of new materials.

Along with Greer, Landau, and Qiang, Khalid Hattar of Sandia National Laboratories is also a coauthor on the paper "The Effect of He Implantation on the Tensile Properties and Microstructure of Cu/Fe Nano-bicrystals," which appears online in Advanced Functional Materials. Peter Hosemann of UC Berkeley and Yongqiang Wang of Los Alamos National Laboratory are coauthors on the paper "Helium Implantation Effects on the Compressive Response of Cu Nanopillars," which appears online in the journal Small. The work was supported by the U.S. Department of Energy and carried out, in part, in the Kavli Nanoscience Institute at Caltech.

Writer: Kimm Fesenmaier

A Fresh Look at Psychiatric Drugs

Caltech researchers propose a new approach to understanding common treatments

Drugs for psychiatric disorders such as depression and schizophrenia often require weeks to take full effect. "What takes so long?" has long been one of psychiatry's most stubborn mysteries. Now a fresh look at previous research on quite a different drug—nicotine—is providing answers. The new ideas may point the way toward new generations of psychiatric drugs that work faster and better.

For several years, Henry Lester, Bren Professor of Biology at Caltech, and his colleagues have worked to understand nicotine addiction by repeatedly exposing nerve cells to the drug and studying the effects. At first glance, it's a simple story: nicotine binds to, and activates, specific nicotine receptors on the surface of nerve cells within a few seconds of being inhaled. But nicotine addiction develops over weeks or months, so the Caltech team wanted to know what changes take place within the nerve cell during that time, hidden from view.

The story that developed is that nicotine infiltrates deep into the cell, entering a protein-making structure called the endoplasmic reticulum and increasing its output of the same nicotine receptors. These receptors then travel to the cell's surface. In other words, nicotine acts "inside out," directing actions that ultimately fuel and support the body's addiction to nicotine.

"That nicotine works 'inside out' was a surprise a few years ago," says Lester. "We originally thought that nicotine acted only from the outside in, and that a cascade of effects trickled down to the endoplasmic reticulum and the cell's nucleus, slowly changing their function."

In a new research review paper, published in Biological Psychiatry, Lester—along with senior research fellow Julie M. Miwa and postdoctoral scholar Rahul Srinivasan—proposes that psychiatric medications may work in the same "inside-out" fashion—and that this process explains how it takes weeks rather than hours or days for patients to feel the full effect of such drugs.

"We've known what happens within minutes and hours after a person takes Prozac, for example," explains Lester. "The drug binds to serotonin uptake proteins on the cell surface, and prevents the neurotransmitter serotonin from being reabsorbed by the cell. That's why we call Prozac a selective serotonin reuptake inhibitor, or SSRI." While the new hypothesis preserves that idea, it also presents several arguments for the idea that the drugs also enter into the bodies of the nerve cells themselves.

There, the drugs would enter the endoplasmic reticulum similarly to nicotine and then bind to the serotonin uptake proteins as they are being synthesized. The result, Lester hypothesizes, is a collection of events within neurons that his team calls "pharmacological chaperoning, matchmaking, escorting, and abduction." These actions—such as providing more stability for various proteins—could improve the function of those cells, leading to therapeutic effects in the patient. But those beneficial effects would occur only after the nerve cells have had time to make their intracellular changes and to transport those changes to the ends of axons and dendrites.

"These 'inside-out' hypotheses explain two previously mysterious actions," says Lester. "On the one hand, the ideas explain the long time required for the beneficial actions of SSRIs and antischizophrenic drugs. But on the other hand, newer, and very experimental, antidepressants act within hours. Binding within the endoplasmic reticulum of dendrites, rather than near the nucleus, might underlie those actions."

Lester and his colleagues first became interested in nicotine's effects on neural disorders because of a striking statistic: a long-term tobacco user has a roughly twofold lower chance of developing Parkinson's disease. Because there is no medical justification for using tobacco, Lester's group wanted more information about this inadvertent beneficial action of nicotine. They knew that stresses on the endoplasmic reticulum, if continued for years, could harm a cell. Earlier this year, they reported that nicotine's "inside-out" action appears to reduce endoplasmic reticulum stress, which could prevent or forestall the onset of Parkinson's disease.

Lester hopes to test the details of "inside-out" hypotheses for psychiatric medication. First steps would include investigating the extent to which psychiatric drugs enter cells and bind to their nascent receptors in the endoplasmic reticulum. The major challenge is to discover which other proteins and genes, in addition to the targets, participate in "matchmaking, escorting, and abduction."

"Present-day psychiatric drugs have a lot of room for improvement," says Lester. "Systematic research to produce better psychiatric drugs has been hampered by our ignorance of how they work. If the hypotheses are proven and the intermediate steps clarified, it may become possible to generate better medications."

 

Writer: Caltech Communications

How I Landed on Mars

Caltech graduate students on the MSL mission drive science, the rover, and their careers

Caltech geology graduate student Katie Stack says her Caltech experience has provided her with the best of both worlds. Literally.

As one of five Caltech graduate students currently staffing the Mars Science Laboratory mission, Stack is simultaneously exploring the geologic pasts of both Mars and Earth. She and her student colleagues apply their knowledge of Earth's history and environment—gleaned from Caltech classes and field sites across the globe—to the analysis of Curiosity's discoveries as well as the hunt for evidence of past life on the Red Planet.

"Mars exploration is that perfect combination of understanding what is close to home and far afield," says Stack, who studies sedimentology and stratigraphy in the lab of John Grotzinger, the mission's project scientist and Caltech's Fletcher Jones Professor of Geology.

"The mission is providing a different perspective for seeing the world . . . as well as for seeing myself," she adds. "As a graduate student, you often struggle with your place in your academic community, and taking part in the mission is one of the ways that we are just thrown into the mix. We are working on the same level as a bunch of senior scientists, who have a lot of experience, and yet they are asking us questions—seeking our expertise. That's an experience you don't often get to have."

Caltech's graduate student participants on the MSL—who include Stack, Kirsten Siebach, Lauren Edgar, Jeff Marlow, and Hayden Miller, all from the Division of Geological and Planetary Sciences—represent the largest contingent of students from any one institution in a mission that has more than 400 participating scientists. Caltech's strong student presence is aided in large part by the leadership role that faculty are playing in the mission as well as the Institute's close proximity to mission control at the Jet Propulsion Laboratory (JPL), which Caltech manages for NASA.

Caltech's graduate students are among the mission personnel responsible for sequencing the scientific plan and programming the rover each day, as well as for documenting the scientific discussion and decisions at each step of the mission. As the mission's blogger, Marlow also helps share the science team's work with the public.

"The graduate students are the heart of the mission," Grotzinger says. "They are the keepers of the plan and are able to efficiently operate the technology to run the rover every day, especially when senior scientists are unable to do so."

Making the science plans for the rover, says graduate student Kirsten Siebach, is "as close as I get to driving the rover. I can help program it to take pictures, analyze samples, and shoot the ChemCam laser."

"It's always fun when something that I helped command the rover to do, like take a picture, ends up making the news," she adds. "I helped command it to take one such picture of the Hottah outcrop that showed evidence of an ancient streambed."

In addition to staffing operations for the mission, Caltech's students are also key contributors to the scientific analysis of the data and help make decisions about where Curiosity goes.

Before Curiosity landed, for instance, Stack and Lauren Edgar helped compile a geologic map of the Gale Crater landing ellipse, using orbital images to identify the geologic diversity and relationships among rocks. Their work has continued to serve as a "road map" for the rover's research. Meanwhile, Siebach has been exploring the history of water on Mars, looking at the geomorphology of channel structures and fractures on the planet.

"We really have grown up in the golden age of Mars exploration," Edgar says, noting that while at Caltech, she's had the opportunity to contribute to three Mars rover missions—Spirit, Opportunity, and now Curiosity. "They just keep getting better and better."

In addition to the graduate students, several undergraduate students have taken part in the mission, participating through Caltech's Summer Undergraduate Research Fellowships (SURF) program. This past summer, a student working with Bethany Ehlmann, an assistant professor of planetary science at Caltech and an MSL science team member, helped to characterize and classify hundreds of Earth rock samples for potential comparison with Mars specimens that will be analyzed by the ChemCam instrument. Meanwhile, over the past two summers, Solomon Chang, a Caltech sophomore studying computer science, worked with JPL engineers to model Curiosity's mobility to ensure that it would actually move on Mars as it had been programmed to do.

Those summer projects have ended, but for the Caltech grad students on the MSL team, the work continues. Indeed, says Grotzinger, because many of the mission's scientists will be leaving Pasadena to return to their home institutions during the coming months, the grad students will be called upon to fill additional roles in the rover's daily operation and science.

"One of the great things about working on a mission as a student is that science is a fairly merit-driven process," says Ehlmann, who participated in Mars exploration missions as both an undergraduate and graduate student. "So if you have a good idea and you are there, you can contribute to deciding what measurements to make, can develop hypothesis about what's going on. It's a very inspiring and empowering experience."


Developmental Bait and Switch

Caltech-led team discovers enzyme responsible for neural crest cell development

PASADENA, Calif.—During the early developmental stages of vertebrates—animals that have a backbone and spinal column, including humans—cells undergo extensive rearrangements, and some cells migrate over large distances to populate particular areas and assume novel roles as differentiated cell types. Understanding how and when such cells switch their purpose in an embryo is an important and complex goal for developmental biologists. A recent study, led by researchers at the California Institute of Technology (Caltech), provides new clues about this process—at least in the case of neural crest cells, which give rise to most of the peripheral nervous system, to pigment cells, and to large portions of the facial skeleton.

"There has been a long-standing mystery regarding why some cells in the developing embryo start out as part of the future central nervous system, but leave to populate peripheral parts of the body," says Marianne Bronner, the Albert Billings Ruddock Professor of Biology at Caltech and corresponding author of the paper, published in the November 1 issue of the journal Genes & Development. "In this paper, we find that an important type of enzyme called DNA-methyltransferase, or DNMT, acts as a switch, determining which cells will remain part of the central nervous system, and which will become neural crest cells."

According to Bronner, DNMT arranges this transition by silencing expression of the genes that promote central nervous system (CNS) identity, thereby giving the cells the green light to become neural crest, migrate, and do new things, like help build a jaw bone. The team came to this conclusion after analyzing the actions of one type of DNMT—DNMT3A—at different stages of development in a chicken embryo.

This distinction matters, Bronner says, because most scientists who study the function of DNMTs use embryonic stem cells that can be maintained in culture, whereas her team is "studying events that occur in living embryos as opposed to cells grown under artificial conditions."

"It is somewhat counterintuitive that this kind of shutting off of genes is essential for promoting neural crest cell fate," she says. "Embryonic development often involves switches in the types of inputs that a cell receives. This is an example of a case where a negative factor must be turned off—essentially a double negative—in order to achieve a positive outcome."

Bronner says it was also surprising to see that an enzyme like DNMT has such a specific function at a specific time. DNMTs are sometimes thought to act in every cell, she says, yet the researchers have discovered a function for this enzyme that is exquisitely controlled in space and time.

"It is amazing how an enzyme, at a given time point during development, can play such a specific role of making a key developmental decision within the embryo," says Na Hu, a graduate student in Bronner's lab and lead author of the paper. "Our findings can be applied to stem cell therapy, by giving clues about how to engineer other cell types or stem cells to become neural crest cells."

Bronner points out that their work relates to the discovery, which won a recent Nobel Prize in Physiology or Medicine, that it is possible to "reprogram" cells taken from adult tissue. These induced pluripotent stem (iPS) cells are similar to embryonic stem cells, and many investigators are attempting to define the conditions needed for them to differentiate into particular cell types, including neural crest derivatives.

"Our results showing that DNMT is important for converting CNS cells to neural crest cells will be useful in defining the steps needed to reprogram such iPS cells," she says. "The iPS cells may in turn be useful for repair in human diseases such as familial dysautonomia, a disease in which there is depletion of autonomic and sensory neurons that are neural crest–derived; for repair of jaw bones lost in osteonecrosis; and for many other potential treatments."

In the short term, the team will explore the notion that DNMT enzymes may have different functions in the embryo at different places and times. That's why the next step in their research, says Bronner, is to examine the later role of these enzymes in nervous-system development, such as whether they affect the length of time during which the CNS is able to produce neural crest cells.

Additional authors on the paper, titled "DNA methyltransferase3A as a molecular switch mediating the neural tube-to-neural crest fate transition," are Pablo Strobl-Mazzulla from the Laboratorio de Biología del Desarrollo in Chascomús, Argentina, and Tatjana Sauka-Spengler from the Weatherall Institute of Molecular Medicine at the University of Oxford. The work was supported by the National Institutes of Health and the United States Public Health Service.

Writer: Katie Neith

Progress for Paraplegics

Caltech investigators expand project to restore functions to people with spinal-cord injuries

In May 2011, a new therapy created in part by Caltech engineers enabled a paraplegic man to stand and move his legs voluntarily. Now those same researchers are working on a way to automate their system, which provides epidural electrical stimulation to the lower spinal cord. Their goal is for the system to soon be made available to rehab clinics—and thousands of patients—worldwide.

That first patient—former athlete Rob Summers, who had been completely paralyzed below the chest following a 2006 accident—performed remarkably well with the electromechanical system. Although taking the system home was not part of the original testing protocol established by the Food and Drug Administration, the FDA allowed Summers to keep the entire system when he left the Frazier Rehab Institute in Louisville, where his postsurgical physical therapy was done, on the condition that he return every three months for a checkup.

Joel Burdick, the Richard L. and Dorothy M. Hayman Professor of Mechanical Engineering and Bioengineering at Caltech, and Yu-Chong Tai, a Caltech professor of electrical engineering and mechanical engineering, helped create the therapy, which involves the use of a sheetlike array of electrodes that stimulate Summers' neurons and thus activate the circuits in his lower spinal cord that control standing and stepping. The approach has subsequently been successfully tested on a second paraplegic, and therapists are about to finish testing a third subject, who has shown positive results.

But Tai and Burdick want to keep the technology, as well as the subjects, moving forward. To that end, Tai is developing new versions of the electrode array currently approved for human implantation; these will improve patients' stepping motions, among other advances, and they will be easier to implant. Burdick is also working on a way to let a computer control the pattern of electrical stimulation applied to the spinal cord.

"We need to go further," Burdick says. "And for that, we need new technology."

Because spinal-cord injuries vary from patient to patient, deploying the system has required constant individualized adjustments by clinicians and researchers at the Frazier Institute, a leading center for spinal-cord rehabilitation. "Right now there are 16 electrodes in the array, and for each individual electrode, we send a pulse, which can be varied for amplitude and frequency to cause a response in the patient," Burdick says. Using the current method, he notes, "it takes substantial effort to test all the variables to find the optimum setting for a patient for each of the different functions we want to activate."

The team of investigators, which also includes researchers from UCLA and the University of Louisville, has until now used intelligent guesswork to determine which stimuli might work best. But soon, using a new algorithm developed by Burdick, they will be able to rely on a computer to determine the optimum stimulation levels, based on the patient's response to previous stimuli. This would allow patients to go home after the extensive rehab process with a system that could be continually adjusted by computer—saving Summers and the other patients many of those inconvenient trips back to Louisville. Doctors and technicians could monitor patients' progress remotely.
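
As a loose illustration of that idea, and not the team's actual algorithm, the sketch below adjusts per-electrode pulse amplitude and frequency by keeping only those changes that improve a measured response score. The response function here is a stand-in invented purely so the loop runs; in practice the score would come from clinical measurements of the patient.

```python
# A minimal sketch (not the researchers' algorithm) of response-guided tuning:
# perturb one electrode's stimulation parameters at a time and keep the change
# only if the measured patient response improves.
import random

NUM_ELECTRODES = 16

def measure_response(settings):
    """Placeholder for a real measurement (e.g., a standing-quality score).
    This made-up function exists only so the example is runnable."""
    return -sum((s["amplitude_mA"] - 3.0) ** 2 + (s["frequency_Hz"] - 30.0) ** 2
                for s in settings)

# Start from an initial clinician-chosen setting for each electrode.
settings = [{"amplitude_mA": 1.0, "frequency_Hz": 20.0} for _ in range(NUM_ELECTRODES)]
best_score = measure_response(settings)

for trial in range(200):
    # Perturb one randomly chosen electrode's stimulation parameters.
    candidate = [dict(s) for s in settings]
    i = random.randrange(NUM_ELECTRODES)
    candidate[i]["amplitude_mA"] += random.uniform(-0.5, 0.5)
    candidate[i]["frequency_Hz"] += random.uniform(-5.0, 5.0)

    score = measure_response(candidate)
    if score > best_score:          # keep only changes that improve the response
        settings, best_score = candidate, score

print("best score found:", round(best_score, 3))
```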

In addition to providing the subjects with continued benefits from the use of the device, there are other practical reasons for wanting to automate the system. An automated system would be easier to share with other hospitals and clinics around the world, Burdick says, and without a need for intensive training, it could lower the cost.

The FDA has approved testing the system in five spinal-cord injury patients, including the three already enrolled in the trial; Burdick is planning to test the new computerized version in the fourth patient, as well as in Rob Summers during 2013. Once the investigators have completed testing on all five patients, Burdick says, the team will spend time analyzing the data before deciding how to improve the system and expand its use.

The strategy is not a cure for paraplegics, but a tool that can be used to help improve the quality of their health, Burdick says. The technology could also complement stem-cell therapies or other methods that biologists are working on to repair or circumvent the damage to nervous systems that results from spinal-cord injury.

"There's not going to be one silver bullet for treating spinal-cord injuries," Burdick says. "We think that our technique will play a role in the rehabilitation of spinal-cord injury patients, but a more permanent cure will likely come from biological solutions."

Even with the limitations of the current system, Burdick says, the results have exceeded his expectations.

"All three subjects stood up within 48 hours of turning on the array," Burdick says. "This shows that the first patient wasn't a fluke, and that many aspects of the process are repeatable." In some ways, the second and third patients are performing even better than Summers, though it will be some time before the team can fully analyze those results. "We were expecting variations because of the distinct differences in the patients' injuries. Rob gave us a starting point, and now we've learned how to tune the array for each patient and to make adjustments as each patient changes over time.

"I do this work because I love it," Burdick says. "When you work with these people and get to know them and see how they are improving, it's personally inspiring."


Reconsidering the Global Thermostat

Caltech researcher and colleagues show outcome of geoengineering can be tunable

PASADENA, Calif.—From making clouds whiter and injecting aerosols into the stratosphere, to building enormous sunshades in space, people have floated many ideas about how the planet's climate could be manipulated to counteract the effects of global warming—a concept known as geoengineering. Many of the ideas involve deflecting incoming sunlight before it has a chance to further warm the earth. Because this could affect areas of the planet inequitably, geoengineering has often raised an ethical question: Whose hand would control the global thermostat?

Now a team of researchers from the California Institute of Technology (Caltech), Harvard University, and the Carnegie Institution says there doesn't have to be just a single global control. Using computer modeling, they have shown that varying the amount of sunlight deflected away from the earth by season and by region can significantly improve the parity of the situation. The results appear in an advance online publication of the journal Nature Climate Change.

Previous geoengineering studies have typically assumed uniform deflection of sunlight everywhere on the planet. But the pattern of temperature and precipitation effects that would result from such efforts would never compensate perfectly for the complex pattern of changes that have resulted from global warming. Some areas would end up better off than others, and the climate effects are complex. For example, as the planet warms, the poles are heating up more than the tropics. However, in models where sunlight is deflected uniformly, when enough sunlight is redirected to compensate for this polar warming, the tropics end up colder than they were before man-made activities pumped excess carbon dioxide into the atmosphere.

In the new study, the researchers worked with a climate model of relatively coarse resolution. Rather than selecting one geoengineering strategy, they mimicked the desired effect of many projects by simply "turning down the sun"—decreasing the amount of sunlight reaching the planet. Instead of turning down the sun uniformly, they tailored when and where they reduced incoming sunlight, looking at 15 different combinations. In one, for example, they turned down the sun between January and March while also turning it down more at the poles than at the tropics.

"That essentially gives us 15 knobs that we can tune in order to try to minimize effects at the worst-off regions on the planet," says Doug MacMartin, a senior research associate at Caltech and lead author of the new paper. "In our model, we were able to reduce the residual climate changes (after geoengineering) in the worst-off regions by about 30 percent relative to what could be achieved using a uniform reduction in sunlight."

The group also found that by varying where and when sunlight was reduced, they needed to turn down the sun just 70 percent as much as they would in uniform reflectance to get a similar result. "Based on this work, it's at least plausible that there are ways that you could implement a geoengineering solution that would have less severe consequences, such as a reduced impact on ozone," MacMartin says.

The researchers also used the tuning approach to focus on recovering Arctic sea ice. In their model, only one-fifth as much solar reduction was needed as in the uniform-reflectance case to return the Arctic sea ice to an extent typical of preindustrial years.

"These results indicate that varying geoengineering efforts by region and over different periods of time could potentially improve the effectiveness of solar geoengineering and reduce climate impacts in at-risk areas," says Ken Caldeira of the Carnegie Institution. "For example, these approaches may be able to reverse long-term changes in the Arctic sea ice."

The group acknowledges that geoengineering ideas are untested and could come with serious consequences, such as making the skies whiter and depleting the ozone layer, not to mention the unintended consequences that tend to arise when dealing with such a complicated system as the planet. They also say that the best solution would be to reduce greenhouse gas emissions. "I'm approaching it as an engineering problem," MacMartin says. "I'm interested in whether we can come up with a better way of doing the geoengineering that minimizes the negative consequences."  

In addition to MacMartin and Caldeira, David Keith of Harvard University and Ben Kravitz, formerly of the Carnegie Institution but now at the DOE's Pacific Northwest National Lab, are also coauthors on the paper, "Management of trade-offs in geoengineering through optimal choice of non-uniform radiative forcing."

Writer: Kimm Fesenmaier

Technology Has Improved Voting Procedures

New report assesses voting procedures over the last decade

PASADENA, Calif.—Thanks to better voting technology over the last decade, the country's election process has seen much improvement, according to a new report released today by researchers at Caltech and MIT. However, the report notes, despite this progress, some problems remain.

Spurred by the debacle of hanging chads and other voting problems during the 2000 presidential election, the Voting Technology Project (VTP) was started by Caltech and MIT to bring together researchers from across disciplines to figure out how to improve elections. The VTP issued its first report in 2001.

"Since that report came out and since our project was formed, a lot of progress has been made in improving how American elections are run," says Michael Alvarez, professor of political science at Caltech and codirector of the VTP.

For example, the report found that getting rid of outdated voting machines has caused a drop in the number of votes lost to ballot errors. To assess how many votes are lost in each election due to voting mistakes, the researchers calculate the number of residual votes—or the difference between the number of votes that are counted for a particular office and the total number of votes cast. If there are no voting errors, there should be no residual votes.

In their first report in 2001, the researchers found that older voting technology—like punch cards—led to a high residual vote rate. But their new research shows that the rate has dropped. In particular, Charles Stewart III, a professor of political science at MIT and the other codirector of the VTP, and his colleagues found that the residual vote rate decreased from 2 percent of ballots cast in 2000 to 1 percent in 2006 and 2008, meaning that fewer votes were lost to voting errors. The drop was greater in states that adopted more modern voting technology.
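
The residual-vote calculation itself is simple; the short sketch below spells it out with hypothetical ballot counts chosen only to mirror the 2 percent and 1 percent figures above.

```python
# Residual-vote rate: ballots cast minus votes counted for a given office,
# expressed as a fraction of ballots cast. The counts below are hypothetical.
def residual_vote_rate(ballots_cast, votes_counted_for_office):
    return (ballots_cast - votes_counted_for_office) / ballots_cast

print(residual_vote_rate(1_000_000, 980_000))   # 0.02 -> roughly the 2000 level
print(residual_vote_rate(1_000_000, 990_000))   # 0.01 -> roughly 2006/2008
```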

"As we moved away from punch cards, lever machines, and paper ballots and towards optical scan systems and electronic systems that have voter verification, we have seen the voter residual rate plummet," Alvarez says. Voter-verification technology gives voters immediate feedback if they make a mistake—by filling in a circle incorrectly, for example—and a chance to correct their error to ensure that their votes are counted.

In addition, the report urges officials to continue and expand election auditing to study the accuracy of registration and voting procedures. For example, after an election, officials can recount ballots to make sure the electronic ballot counters are accurate. "Postelection ballot auditing is a great idea and states need to continue their efforts to use those election ballot-auditing procedures to increase the amount of confidence and integrity of elections," Alvarez says.

The researchers also express concern about the rise of absentee and early voting, since voter verification is much harder to do by mail. Unlike in-person voting, these methods offer no immediate feedback about whether a ballot was filled out correctly or whether it was counted at all. Once you put your ballot in the mailbox, it's literally out of your hands.

The report also weighs in on voter-identification laws, which have been proposed in many states and subsequently challenged in court. Proponents say they are necessary to prevent voter fraud while opponents argue that there is little evidence that such fraud exists. Moreover, opponents say, voter identification laws make it much more difficult for people without government-issued IDs to vote. But, the report says, technology may resolve the conflict.

"Technology may help ensure voter authentication while alleviating or mitigating the costs that are imposed on voters by laws requiring state-issued identification," says Jonathan Katz, the Kay Sugahara Professor of Social Sciences and Statistics and coauthor of the VTP report.

For example, polling places can have access to a database of registered voters that is also linked to the state's database of DMV photos. A voter's identification can then be confirmed without them having to carry a photo ID. For voters who do not have an ID, the polling place can be equipped with a camera to take an ID picture immediately. The photo can then be entered into the database to verify identification in future elections.
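
A minimal sketch of that check, using made-up data structures rather than any state's actual registration or DMV system, might look like this:

```python
# Hypothetical check-in flow: confirm a voter against a registration record
# linked to a DMV photo, falling back to capturing a new photo at the polls.
registered_voters = {
    "V123": {"name": "Alice Example", "dmv_photo_id": "P789"},
    "V456": {"name": "Bob Example",   "dmv_photo_id": None},   # no state-issued ID on file
}
dmv_photos = {"P789": "alice.jpg"}

def check_in(voter_id, capture_photo):
    record = registered_voters.get(voter_id)
    if record is None:
        return "not registered"
    photo = dmv_photos.get(record["dmv_photo_id"])
    if photo is not None:
        return f"poll worker compares voter with photo on file: {photo}"
    # No photo on file: take one now and store it for future elections.
    new_id = f"NEW-{voter_id}"
    dmv_photos[new_id] = capture_photo()
    record["dmv_photo_id"] = new_id
    return "photo captured and added to the database"

print(check_in("V123", capture_photo=lambda: "new.jpg"))
print(check_in("V456", capture_photo=lambda: "new.jpg"))
```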

The complete report, along with more information about the VTP, is available on the project's website.

In addition to Alvarez, Stewart, and Katz, the other authors of the Caltech/MIT VTP report are Stephen Ansolabehere of Harvard, Thad Hall of the University of Utah, and Ronald Rivest of MIT. The report was supported by the Carnegie Corporation of New York. The project has been supported by the John S. and James L. Knight Foundation and the Pew Charitable Trusts.

Writer: Marcus Woo

Caltech Modeling Feat Sheds Light on Protein Channel's Function

PASADENA, Calif.—Chemists at the California Institute of Technology (Caltech) have managed, for the first time, to simulate the biological function of a channel called the Sec translocon, which allows specific proteins to pass through membranes. The feat required bridging timescales from the realm of nanoseconds all the way up to full minutes, exceeding the scope of earlier simulation efforts by more than six orders of magnitude. The result is a detailed molecular understanding of how the translocon works.

Modeling behavior across very different timescales is a major challenge in modern simulation research. "Computer simulations often provide almost uselessly detailed information on a timescale that is way too short, from which you get a cartoon, or something that might raise as many questions as it answers," says Thomas Miller, an assistant professor of chemistry at Caltech. "We've managed to go significantly beyond that, to create a tool that can actually be compared against experiments and even push experiments—to predict things that they haven't been able to see."

The new computational model and the findings based on its results are described by Miller and graduate student Bin Zhang in the current issue of the journal Cell Reports.

The Sec translocon is a channel in cellular membranes involved in the targeting and delivery of newly made proteins. Such channels are needed because the proteins that are synthesized at ribosomes must travel to other regions of the cell or outside the cell in order to perform their functions; however, the cellular membranes prevent even the smallest of molecules, including water, from passing through them willy-nilly. In many ways, channels such as the Sec translocon serve as gatekeepers—once the Sec translocon determines that a given protein should be allowed to pass through, it opens up and allows the protein to do one of two things: to be integrated into the membrane, or to be secreted completely out of the cell.

Scientists have disagreed about how the fate of a given protein entering the translocon is determined. Based on experimental evidence, some have argued that a protein's amino-acid sequence is what matters—that is, how many of its amino acids interact favorably with water and how many clash. This argument treats the process as one in equilibrium, in which the extremely slow rate at which the ribosome feeds the growing protein into the channel can be considered infinitely slow. Other researchers have shown that slowing down the rate of protein insertion into the channel actually changes the outcome, suggesting that kinetic effects can also play a role.

"There was this equilibrium picture, suggesting that only the protein sequence is really important. And then there was an alternative picture, suggesting that kinetic effects are critical to understanding the translocon," Miller says. "So we wondered, could both pictures, in some sense, be right? And that turns out to be the case."

In 2010 and earlier this year, Miller and Zhang published papers in the Proceedings of the National Academy of Sciences and the Journal of the American Chemical Society describing atomistic simulations of the Sec translocon. These computer simulations attempt to account for every motion of every single atom in a system—and typically require so much computing time that they can only model millionths of seconds of activity, at most. Meanwhile, actual biological processes involving proteins in the translocon last many seconds or minutes.

Miller and Zhang were able to use their atomistic simulations to determine which parts of the translocon are most important and to calculate how much energy it costs those parts to move in ways that allow proteins to pass through. In this way, they were able to build a simpler version of the simulation that modeled important groupings of atoms, rather than each individual atom. Using the simplified simulation, they could simulate the translocon's activity over the course of more than a minute.

The researchers ran that simplified model tens of thousands of times and observed the different ways in which proteins move through the channel. In the simulation, any number of variables could be changed—including the protein's amino-acid sequence, its electronic charge, the rate at which it is inserted into the translocon, the length of its tail, and more. The effect of these alterations on the protein's fate was then studied, revealing that proteins move so slowly within the tightly confined environment of the translocon that the pace at which they are added to the channel during translation—a process that might seem infinitely slow—can become important. At the same time, Miller and Zhang saw that other relatively fast processes give rise to the results associated with the equilibrium behavior.
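
The toy model below, which is not the authors' coarse-grained model, captures the flavor of why both factors matter: a segment in the channel drifts toward the membrane or toward secretion at a rate biased by its hydrophobicity, but the decision has to be reached before translation ends, so the insertion rate also shapes the outcome. All parameters are invented for illustration.

```python
# Toy kinetic model (not the authors' model): a biased random walk toward
# "membrane" (+1) or "secreted" (-1), cut off when translation finishes.
import random

def simulate(hydrophobicity, translation_steps, bias_per_step=0.1, n_traj=2000):
    """Return the fraction of trajectories that end membrane-integrated.
    hydrophobicity in [-1, 1]: positive favors the membrane.
    translation_steps: hop attempts available before translation ends (a kinetic knob)."""
    membrane = 0
    for _ in range(n_traj):
        position = 0.0                      # 0 = undecided; +1 membrane; -1 secreted
        for _ in range(translation_steps):
            position += bias_per_step * hydrophobicity + random.gauss(0.0, 0.2)
            if position >= 1.0:
                membrane += 1
                break
            if position <= -1.0:
                break
        else:
            pass                            # undecided when translation ends: secreted in this toy
    return membrane / n_traj

for steps in (10, 100, 1000):               # slower translation -> more hop attempts
    print(steps, round(simulate(hydrophobicity=0.3, translation_steps=steps), 3))
```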

"In fact, both equilibrium and kinetically controlled processes are happening—but in a way that was not obvious until we could actually see everything working together," Miller says.

Beyond elucidating how the translocon works and reconciling seemingly disparate experimental results, the new simulation also lets the researchers perform experiments computationally that have yet to be tried in the lab. For example, they have run simulations with longer proteins and observed that at such lengths—unlike what has been seen with shorter proteins—the equilibrium picture begins to be affected by kinetic effects.  "This could bring the two experimental camps together, and to have led that would be kind of exciting," Miller says.

The new Cell Reports paper is titled "Long-timescale dynamics and regulation of Sec-facilitated protein translocation." The work was supported by the U.S. Office of Naval Research and the Alfred P. Sloan Foundation, with computational resources provided by the U.S. Department of Energy, the National Science Foundation, and the National Institute of General Medical Sciences.

Writer: Kimm Fesenmaier

Developing the Next Generation of Microsensors

Caltech researchers engineer microscale optical accelerometer

PASADENA, Calif.—Imagine navigating through a grocery store with your cell phone. As you turn down the bread aisle, ads and coupons for hot dog buns and English muffins pop up on your screen. The electronics industry would like to make such personal navigators a reality, but, to do so, they need the next generation of microsensors.

Thanks to an ultrasensitive accelerometer—a type of motion detector—developed by researchers at the California Institute of Technology (Caltech) and the University of Rochester, this new class of microsensors is a step closer to reality. Beyond consumer electronics, such sensors could help with oil and gas exploration deep within the earth, could improve the stabilization systems of fighter jets, and could even be used in some biomedical applications where more traditional sensors cannot operate.

Caltech professor of applied physics Oskar Painter and his team describe the new device and its capabilities in an advance online publication of the journal Nature Photonics.

Rather than using an electrical circuit to gauge movements, their accelerometer uses laser light. And despite the device's tiny size, it is an extremely sensitive probe of motion. Thanks to its low mass, it can also operate over a large range of frequencies, meaning that it is sensitive to motions that occur in tens of microseconds, thousands of times faster than the motions that today's most sensitive sensors can detect.

"The new engineered structures we made show that optical sensors of very high performance are possible, and one can miniaturize them and integrate them so that they could one day be commercialized," says Painter, who is also codirector of Caltech's Kavli Nanoscience Institute.

Although the average person may not notice them, microchip accelerometers are quite common in our daily lives. They are used in vehicle airbag deployment systems, in navigation systems, and in conjunction with other types of sensors in cameras and cell phones. They have successfully moved into commercial use because they can be made very small and at low cost.

Accelerometers work by using a sensitive displacement detector to measure the motion of a flexibly mounted mass, called a proof mass. Most commonly, that detector is an electrical circuit. But because laser light is one of the most sensitive ways to measure position, there has been interest in making such a device with an optical readout. For example, projects such as the Laser Interferometer Gravitational-Wave Observatory (LIGO) rely on optical interferometers, which use laser light reflecting off mirrors separated by kilometers of distance to sensitively measure relative motion of the end mirrors. Lasers can have very little intrinsic noise—meaning that their intensity fluctuates little—and are typically limited by the quantum properties of light itself, so they make it much easier to detect very small movements.

People have tried, with limited success, to make miniature versions of these large-scale interferometers. One stumbling block for miniaturization has been that, in general, the larger the proof mass, the larger the resulting motion when the sensor is accelerated. So it is typically easier to detect accelerations with larger sensors. Also, when dealing with light rather than electrons—as in optical accelerometers—it is a challenge to integrate all the components (the lasers, detectors, and interferometer) into a micropackage.
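
Two textbook estimates make that trade-off concrete: the displacement a spring-mounted proof mass develops per unit of acceleration, and the thermal-noise floor, which shrinks as the proof mass grows. The sketch below evaluates both for an invented set of device parameters, not the actual values of the Caltech device.

```python
# Textbook-style estimates with illustrative, invented parameters:
# (1) quasi-static displacement per 1 g of acceleration, x = a / omega0^2
# (2) thermomechanical noise floor, sqrt(4 k_B T omega0 / (m Q)), in (m/s^2)/sqrt(Hz)
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K

def displacement_per_g(f0_Hz):
    omega0 = 2 * math.pi * f0_Hz
    return 9.81 / omega0 ** 2

def thermal_acceleration_noise(mass_kg, f0_Hz, Q, T_K=300.0):
    omega0 = 2 * math.pi * f0_Hz
    return math.sqrt(4 * k_B * T_K * omega0 / (mass_kg * Q))

# Hypothetical microscale device: nanogram-scale proof mass, ~30 kHz resonance.
mass = 1e-11        # 10 nanograms
f0 = 30e3           # 30 kHz
Q = 10_000          # mechanical quality factor

print(f"displacement for 1 g of acceleration: {displacement_per_g(f0) * 1e9:.3f} nm")
print(f"thermal noise floor: {thermal_acceleration_noise(mass, f0, Q):.2e} (m/s^2)/sqrt(Hz)")
```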

"What our work really shows is that we can take a silicon microchip and scale this concept of a large-scale optical interferometer all the way down to the nanoscale," Painter says. "The key is this little optical cavity we engineered to read out the motion."

The optical cavity is only about 20 microns (millionths of a meter) long, a single micron wide, and a few tenths of a micron thick. It consists of two silicon nanobeams, situated like the two sides of a zipper, with one side attached to the proof mass. When laser light enters the system, the nanobeams act like a "light pipe," guiding the light into an area where it bounces back and forth between holes in the nanobeams. When the tethered proof mass moves, it changes the gap between the two nanobeams, resulting in a change in the intensity of the laser light being reflected out of the system. The reflected laser signal is in fact tremendously sensitive to the motion of the proof mass, with displacements as small as a few femtometers (roughly the diameter of a proton) being probed on the timescale of a second.

It turns out that because the cavity and proof mass are so small, the light bouncing back and forth in the system pushes the proof mass—and in a special way: when the proof mass moves away, the light helps push it further, and when the proof mass moves closer, the light pulls it in. In short, the laser light softens and damps the proof mass's motion.

"Most sensors are completely limited by thermal noise, or mechanical vibrations—they jiggle around at room temperature, and applied accelerations get lost in that noise," Painter says. "In our device, the light applies a force that tends to reduce the thermal motion, cooling the system." This cooling—down to a temperature of three kelvins (about –270°C) in the current devices—increases the range of accelerations that the device can measure, making it capable of measuring both extremely small and extremely large accelerations.

"We made a very sensitive sensor that, at the same time, can also measure very large accelerations, which is valuable in many applications," Painter says.

The team envisions its optical accelerometers becoming integrated with lasers and detectors in silicon microchips. Microelectronics companies have been working for the past 10 or 15 years to try to integrate lasers and optics into their silicon microelectronics. Painter says that a lot of engineering work still needs to be done to make this happen, but adds that "because of the technological advancements that have been made by these companies, it looks like one can actually start making microversions of these very sensitive optical interferometers."

"Professor Painter's research in this area nicely illustrates how the Engineering and Applied Science faculty at Caltech are working at the edges of fundamental science to invent the technologies of the future," says Ares Rosakis, chair of Caltech's Division of Engineering and Applied Science.  "It is very exciting to envision the ways this research might transform the microelectronics industry and our daily lives."

The lead authors on the paper, titled "A high-resolution microchip optomechanical accelerometer," have all worked in Painter's lab. Alexander Krause and Tim Blasius are currently graduate students at Caltech, while Martin Winger is a former postdoctoral scholar who now works for a sensor company called Sensirion in Zurich, Switzerland. This work was performed in collaboration with Qiang Lin, a former postdoctoral scholar of the Painter group, who now leads his own research group at the University of Rochester. The work is supported by the Defense Advanced Research Projects Agency's QuASaR program, the National Science Foundation Graduate Research Fellowship Program, and Intellectual Ventures.

Writer: Kimm Fesenmaier

How I Spent My Summer Vacation

A SURF Video Diary

Last summer, Caltech junior Julie Jester worked on a project that might one day partially counteract blindness caused by a deteriorating retina. Her job: to help Assistant Professor of Electrical Engineering Azita Emami and her graduate students create the communications link between a tiny camera and a novel wireless neural stimulator that can be surgically inserted into the eye.

Now in its 34th year, Caltech's Summer Undergraduate Research Fellowships (SURF) program has paired nearly 7,000 students with real-world, hands-on projects in the labs of Caltech faculty and JPL staff.

 

Writer: Doug Smith
