Planets Abound

Caltech-led astronomers estimate that at least 100 billion planets populate the galaxy

PASADENA, Calif.—Look up at the night sky and you'll see stars, sure. But you're also seeing planets—billions and billions of them. At least.

That's the conclusion of a new study by astronomers at the California Institute of Technology (Caltech) that provides yet more evidence that planetary systems are the cosmic norm. The team made their estimate while analyzing planets orbiting a star called Kepler-32—planets that are representative, they say, of the vast majority in the galaxy and thus serve as a perfect case study for understanding how most planets form.

"There's at least 100 billion planets in the galaxy—just our galaxy," says John Johnson, assistant professor of planetary astronomy at Caltech and coauthor of the study, which was recently accepted for publication in the Astrophysical Journal. "That's mind-boggling."

"It's a staggering number, if you think about it," adds Jonathan Swift, a postdoc at Caltech and lead author of the paper. "Basically there's one of these planets per star."

The planetary system in question, which was detected by NASA's Kepler space telescope, contains five planets. The existence of two of those planets had already been confirmed by other astronomers. The Caltech team confirmed the remaining three, then analyzed the five-planet system and compared it to other systems found by the Kepler mission.

The planets orbit a star that is an M dwarf—a type that accounts for about three-quarters of all stars in the Milky Way. The five planets, which are similar in size to Earth and orbit close to their star, are also typical of the class of planets that the telescope has discovered orbiting other M dwarfs, Swift says. Therefore, the majority of planets in the galaxy probably have characteristics comparable to those of the five planets.

While this particular system may not be unique, what does set it apart is its coincidental orientation: the orbits of the planets lie in a plane that's positioned such that Kepler views the system edge-on. Due to this rare orientation, each planet blocks Kepler-32's starlight as it passes between the star and the Kepler telescope.

By analyzing changes in the star's brightness, the astronomers were able to determine the planets' characteristics, such as their sizes and orbital periods. This orientation therefore provides an opportunity to study the system in great detail—and because the planets represent the vast majority of planets that are thought to populate the galaxy, the team says, the system also can help astronomers better understand planet formation in general.
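The brightness analysis described above rests on simple transit geometry: the fractional dip in starlight is roughly the square of the planet-to-star radius ratio. A minimal sketch of that relation, using illustrative values rather than the study's measured parameters:

```python
# Transit photometry basics: the fractional drop in a star's brightness
# during a transit is approximately (R_planet / R_star)^2.
# The numbers below are illustrative, not the study's measurements.

R_SUN_IN_EARTH_RADII = 109.2  # approximate ratio of the Sun's radius to Earth's

def transit_depth(planet_radius_earth, star_radius_sun):
    """Fractional dip in brightness for a planet crossing its star's disk."""
    star_radius_earth = star_radius_sun * R_SUN_IN_EARTH_RADII
    return (planet_radius_earth / star_radius_earth) ** 2

# An Earth-sized planet crossing a half-solar-radius M dwarf:
depth = transit_depth(1.0, 0.5)
print(f"depth = {depth:.2e}")  # a dip of a few parts in ten thousand
```

The small size of M dwarfs works in astronomers' favor here: the same planet produces a deeper, easier-to-detect dip than it would against a Sun-like star.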

"I usually try not to call things 'Rosetta stones,' but this is as close to a Rosetta stone as anything I've seen," Johnson says. "It's like unlocking a language that we're trying to understand—the language of planet formation."

One of the fundamental questions regarding the origin of planets is how many of them there are. Like the Caltech group, other teams of astronomers have estimated that there is roughly one planet per star, but this is the first time researchers have made such an estimate by studying M-dwarf systems, the most numerous population of planets known.

To do that calculation, the Caltech team determined the probability that an M-dwarf system would provide Kepler-32's edge-on orientation. Combining that probability with the number of planetary systems Kepler is able to detect, the astronomers calculated that there is, on average, one planet for every one of the approximately 100 billion stars in the galaxy. But their analysis only considers planets that are in close orbits around M dwarfs—not the outer planets of an M-dwarf system, or those orbiting other kinds of stars. As a result, they say, their estimate is conservative. In fact, says Swift, a more accurate estimate that includes data from other analyses could lead to an average of two planets per star.
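The geometric argument behind such an occurrence estimate can be sketched briefly: for a circular orbit, a transit is visible only if the orbit is nearly edge-on, which happens with probability of order the stellar radius divided by the orbital distance. Dividing the detected count by that probability recovers the underlying number of planets. All numbers in this sketch are illustrative assumptions, not the paper's values:

```python
# Sketch of the geometric correction behind a planets-per-star estimate.
# For a circular orbit, the chance of an edge-on (transiting) alignment
# is roughly R_star / a.  Illustrative numbers only.

R_SUN_AU = 0.00465  # solar radius expressed in astronomical units

def transit_probability(star_radius_sun, semimajor_axis_au):
    return (star_radius_sun * R_SUN_AU) / semimajor_axis_au

# A half-solar-radius M dwarf with a planet orbiting at 0.05 AU:
p = transit_probability(0.5, 0.05)
print(f"transit probability ~ {p:.3f}")  # roughly 5%

# If, say, 5 transiting planets are seen per 100 such stars surveyed,
# the implied true occurrence is detections / (stars * probability):
occurrence = 5 / (100 * p)
print(f"planets per star ~ {occurrence:.1f}")
```

The key point is that each detected transiting system stands in for the many similar systems whose orbits happen not to line up with our view.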

M-dwarf systems like Kepler-32's are quite different from our own solar system. For one, M dwarfs are cooler and much smaller than the sun. Kepler-32, for example, has half the mass of the sun and half its radius. The radii of its five planets range from 0.8 to 2.7 times that of Earth, and those planets orbit extremely close to their star. The whole system fits within just over a tenth of an astronomical unit (the average distance between Earth and the sun)—a distance that is about a third of the radius of Mercury's orbit around the sun. The fact that M-dwarf systems vastly outnumber other kinds of systems carries a profound implication, according to Johnson, which is that our solar system is extremely rare. "It's just a weirdo," he says.

The fact that the planets in M-dwarf systems are so close to their stars doesn't necessarily mean that they're fiery, hellish worlds unsuitable for life, the astronomers say. Indeed, because M dwarfs are small and cool, their temperate zone—also known as the "habitable zone," the region where liquid water might exist—is also farther inward. Even though only the outermost of Kepler-32's five planets lies in its temperate zone, many other M-dwarf systems have more planets that sit right in their temperate zones.

As for how the Kepler-32 system formed, no one knows yet. But the team says its analysis places constraints on possible mechanisms. For example, the results suggest that the planets all formed farther away from the star than they are now, and migrated inward over time.

Like all planets, the ones around Kepler-32 formed from a proto-planetary disk—a disk of dust and gas that clumped up into planets around the star. The astronomers estimated that the mass of the disk within the region of the five planets was about as much as that of three Jupiters. But other studies of proto-planetary disks have shown that three Jupiter masses can't be squeezed into such a tiny area so close to a star, suggesting to the Caltech team that the planets around Kepler-32 initially formed farther out.

Another line of evidence relates to the fact that M dwarfs shine brighter and hotter when they are young, when planets would be forming. Kepler-32 would have been too hot for dust—a key planet-building ingredient—to even exist in such close proximity to the star. Previously, other astronomers had determined that the third and fourth planets from the star are not very dense, meaning that they are likely made of volatile compounds such as carbon dioxide, methane, or other ices and gases, the Caltech team says. However, those volatile compounds could not have existed in the hotter zones close to the star.

Finally, the Caltech astronomers discovered that three of the planets have orbits that are related to one another in a very specific way. One planet's orbital period lasts twice as long as another's, and the third planet's period lasts three times as long as the second's. Planets don't fall into this kind of arrangement immediately upon forming, Johnson says. Instead, the planets must have started their orbits farther away from the star before moving inward over time and settling into their current configuration.
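The commensurability described above can be checked directly from orbital periods: consecutive period ratios land on small integers. The periods below are hypothetical placeholders chosen only to match the 1 : 2 : 6 pattern the article describes, not Kepler-32's actual values:

```python
# Checking period commensurability: one period twice another's, and a
# third three times the second's (ratios 1 : 2 : 6).
# Periods are hypothetical placeholders, not Kepler-32's measured values.
from fractions import Fraction

periods = [2.0, 4.0, 12.0]  # days; illustrative only

ratios = [Fraction(periods[i + 1] / periods[i]).limit_denominator(10)
          for i in range(len(periods) - 1)]
print(ratios)  # [Fraction(2, 1), Fraction(3, 1)]
```

Such near-integer ratios are a classic fingerprint of orbital migration, since planets drifting inward through a disk tend to capture one another into resonances.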

"You look in detail at the architecture of this very special planetary system, and you're forced into saying these planets formed farther out and moved in," Johnson explains.

The implications of a galaxy chock full of planets are far-reaching, the researchers say. "It's really fundamental from an origins standpoint," says Swift, who notes that because M dwarfs shine mainly in infrared light, the stars are invisible to the naked eye. "Kepler has enabled us to look up at the sky and know that there are more planets out there than stars we can see."

In addition to Swift and Johnson, the other authors on the Astrophysical Journal paper are Caltech graduate students Timothy Morton and Benjamin Montet; Caltech postdoc Philip Muirhead; former Caltech postdoc Justin Crepp of the University of Notre Dame; and Caltech alumnus Daniel Fabrycky (BS '03) of the University of Chicago. The title of the paper is "Characterizing the cool KOIs IV: Kepler-32 as a prototype for the formation of compact planetary systems throughout the galaxy." In addition to using Kepler, the astronomers made observations at the W. M. Keck Observatory and with the Robo-AO system at Palomar Observatory. Support for all of the telescopes was provided by the W. M. Keck Foundation, NASA, Caltech, the Inter-University Centre for Astronomy and Astrophysics, the National Science Foundation, the Mt. Cuba Astronomical Foundation, and Samuel Oschin.

Writer: 
Marcus Woo
News Type: 
Research News

Unlocking New Talents in Nature

Caltech protein engineers create new biocatalysts

PASADENA, Calif.—Protein engineers at the California Institute of Technology (Caltech) have tapped into a hidden talent of one of nature's most versatile catalysts. The enzyme cytochrome P450 is nature's premier oxidation catalyst—a protein that typically promotes reactions that add oxygen atoms to other chemicals. Now the Caltech researchers have engineered new versions of the enzyme, unlocking its ability to drive a completely different and synthetically useful reaction that does not take place in nature. 

The new biocatalysts can be used to make natural products—such as hormones, pheromones, and insecticides—as well as pharmaceutical drugs, like antibiotics, in a "greener" way.

"Using the power of protein engineering and evolution, we can convince enzymes to take what they do poorly and do it really well," says Frances Arnold, the Dick and Barbara Dickinson Professor of Chemical Engineering, Bioengineering and Biochemistry at Caltech and principal investigator on a paper about the enzymes that appears online in Science. "Here, we've asked a natural enzyme to catalyze a reaction that had been devised by chemists but that nature could never do."

Arnold's lab has been working for years with a bacterial cytochrome P450. In nature, enzymes in this family insert oxygen into a variety of molecules that contain either a carbon-carbon double bond or a carbon-hydrogen single bond. Most of these insertions require the formation of a highly reactive intermediate called an oxene.

Arnold and her colleagues Pedro Coelho and Eric Brustad noted that this reaction has a lot in common with another reaction that synthetic chemists came up with to create products that incorporate a cyclopropane—a chemical group containing three carbon atoms arranged in a triangle. Cyclopropanes are a necessary part of many natural-product intermediates and pharmaceuticals, but nature forms them through a complicated series of steps that no chemist would want to replicate.

"Nature has a limited chemical repertoire," Brustad says. "But as chemists, we can create conditions and use reagents and substrates that are not available to the biological world."

The cyclopropanation reaction that the synthetic chemists came up with inserts carbon using intermediates called carbenes, which have an electronic structure similar to oxenes. This reaction provides a direct route to the formation of diverse cyclopropane-containing products that would not be accessible by natural pathways. However, even this reaction is not a perfect solution because some of the solvents needed to run the reaction are toxic, and it is typically driven by catalysts based on expensive transition metals, such as copper and rhodium. Furthermore, tweaking these catalysts to predictably make specific products remains a significant challenge—one the researchers hoped nature could overcome with evolution's help.

Given the similarities between the two reaction systems—cytochrome P450's natural oxidation reactions and the synthetic chemists' cyclopropanation reaction—Arnold and her colleagues argued that it might be possible to convince the bacterial cytochrome P450 to create cyclopropane-bearing compounds through this more direct route. Their experiments showed that the natural enzyme (cytochrome P450) could in fact catalyze the reaction, but only very poorly; it generated a low yield of products, didn't make the specific mix of products desired, and catalyzed the reaction only a few times. In comparison, transition-metal catalysts can be used hundreds of times.

That's where protein engineering came in. Over the years, Arnold's lab has created thousands of cytochrome P450 variants by mutating the enzyme's natural sequence of amino acids, using a process called directed evolution. The researchers tested variants from their collections to see how well they catalyzed the cyclopropane-forming reaction. A handful ended up being hits, driving the reaction hundreds of times. 

Being able to catalyze a reaction is a crucial first step, but for a chemical process to be truly useful it has to generate high yields of specific products. Many chemical compounds exist in more than one form, so although the chemical formulas of various products may be identical, they might, for example, be mirror images of each other or have slightly different bonding structures, leading to dissimilar behavior. Therefore, being able to control what forms are produced and in what ratio—a quality called selectivity—is especially important.

Controlling selectivity is difficult. It is something that chemists struggle to do, while nature excels at it. That was another reason Arnold and her team wanted to investigate cytochrome P450's ability to catalyze the reaction.

"We should be able to marry the impressive repertoire of catalysts that chemists have invented with the power of nature to do highly selective chemistry under green conditions," Arnold says.

So the researchers further "evolved" enzyme variants that had worked well in the cyclopropanation reaction, to come up with a spectrum of new enzymes. And those enzymes worked—they were able to drive the reaction many times and produced many of the selectivities a chemist could desire for various substrates.  

Coelho says this work highlights the utility of synthetic chemistry in expanding nature's catalytic potential. "This field is still in its infancy," he says. "There are many more reactions out there waiting to be installed in the biological world."

The paper, "Olefin cyclopropanation via carbene insertion catalyzed by engineered cytochrome P450 enzymes," was also coauthored by Arvind Kannan, now a Churchill Scholar at Cambridge University. Brustad is now an assistant professor at the University of North Carolina at Chapel Hill. The work was supported by a grant from the U.S. Department of Energy and startup funds from UNC Chapel Hill.

Writer: 
Kimm Fesenmaier

Research Update: Wordy Worms and Their Eavesdropping Predators

For over 25 years, Paul Sternberg has been studying worms—how they develop, why they sleep, and, more recently, how they communicate. Now, he has flipped the script a bit by taking a closer look at how predatory fungi may be tapping into worm conversations to gain clues about their whereabouts.

Nematodes, Sternberg's primary worm interest, are found in nearly every corner of the world and are one of the most abundant animals on the planet. Unsurprisingly, they have natural enemies, including numerous types of carnivorous fungi that build traps to catch their prey. Curious to see how nematophagous fungi might sense that a meal is present without the sensory organs—like eyes or noses—that most predators use, Sternberg and Yen-Ping Hsueh, a postdoctoral scholar in biology at Caltech, started with a familiar tool: ascarosides. These are the chemical cues that nematodes use to "talk" to one another.

"If we think about it from an evolutionary perspective, whatever the worms are making that can be sensed by the nematophagous fungi must be very important to the worm—otherwise, it's not worth the risk," explains Hsueh. "I thought that ascarosides perfectly fit this hypothesis."

In order to test their idea, the team first evaluated whether different ascarosides caused one of the most common nematode-trapping fungi species to start making a trap. Indeed, it responded by building sticky, web-like nets called adhesive networks, but only when it was nutrient-deprived. It takes a lot of energy for the fungi to build a trap, so they'll only do it if they are hungry and they sense that prey is nearby. Moreover, this ascaroside-induced response is conserved in three other closely related species. But, the researchers say, each of the four fungal species responded to different sets of ascarosides.

"This fits with the idea that different types of predators might encounter different types of prey in nature, and also raises the possibility that fungi could 'read' the different dialects of each worm type," says Sternberg. "What's cool is that we've shown the ability for a predator to eavesdrop on essential prey communication. The worms have to talk to each other using these chemicals, and the predator is listening in on it—that's how it knows the worms are there."

Sternberg and Hsueh also tested a second type of fungus that uses a constricting ring to trap the worms, but it did not respond to the ascarosides. However, the team says that because they only tested a handful of the chemical cues, it's possible that they simply did not test the right ones for that type of fungus.

"Next, the focus is to really study the molecular mechanism in the fungi—how does a fungus sense the ascarosides, and what are the downstream pathways that induce the trap formation," says Hsueh. "We are also interested in the evolutionary question of why we see this ascaroside sensing in some types of fungi but not others."

In the long run, their findings may help improve methods for pest management. Some of these fungi are used for biocontrol to try and keep nematodes away from certain plant roots. Knowing more about what stimulates the organisms to make traps might allow for the development of better biocontrol preparations, says Sternberg.

The full results of Sternberg and Hsueh's study can be found in the paper, "Nematode-trapping fungi eavesdrop on nematode pheromones," published in the journal Current Biology.

Writer: 
Katie Neith

Caltech-Led Astronomers Discover Galaxies Near Cosmic Dawn

Researchers conduct first census of the most primitive and distant galaxies seen

PASADENA, Calif.—A team of astronomers led by the California Institute of Technology (Caltech) has used NASA's Hubble Space Telescope to discover seven of the most primitive and distant galaxies ever seen.

One of the galaxies, the astronomers say, might be the all-time record holder—the galaxy as observed existed when the universe was merely 380 million years old. All of the newly discovered galaxies formed more than 13 billion years ago, when the universe was just about 4 percent of its present age, a period astronomers call the "cosmic dawn," when the first galaxies were born. The universe is now 13.7 billion years old.

The new observations span a period between 350 million and 600 million years after the Big Bang and represent the first reliable census of galaxies at such an early time in cosmic history, the team says. The astronomers found that the number of galaxies steadily increased as time went on, supporting the idea that the first galaxies didn't form in a sudden burst but gradually assembled their stars.

Because it takes light billions of years to travel such vast distances, astronomical images show how the universe looked during the period, billions of years ago, when that light first embarked on its journey. The farther away astronomers peer into space, the further back in time they are looking.

In the new study, which was recently accepted for publication in the Astrophysical Journal Letters, the team has explored the deepest reaches of the cosmos—and therefore the most distant past—that has ever been studied with Hubble.

"We've made the longest exposure that Hubble has ever taken, capturing some of the faintest and most distant galaxies," says Richard Ellis, the Steele Family Professor of Astronomy at Caltech and the first author of the paper. "The added depth and our carefully designed observing strategy have been the key features of our campaign to reliably probe this early period of cosmic history."

The results are the first from a new Hubble survey that focused on a small patch of sky known as the Hubble Ultra Deep Field (HUDF), which was first studied nine years ago. The astronomers used Hubble's Wide Field Camera 3 (WFC3) to observe the HUDF in near-infrared light over a period of six weeks during August and September 2012.

To determine the distances to these galaxies, the team measured their colors using four filters that allow Hubble to capture near-infrared light at specific wavelengths. "We employed a filter that has not been used in deep imaging before, and undertook much deeper exposures in some filters than in earlier work, in order to convincingly reject the possibility that some of our galaxies might be foreground objects," says team member James Dunlop of the Institute for Astronomy at the University of Edinburgh.

The carefully chosen filters allowed the astronomers to measure the light that was absorbed by neutral hydrogen, which filled the universe beginning about 400,000 years after the Big Bang. Stars and galaxies started to form roughly 200 million years after the Big Bang. As they did, they bathed the cosmos with ultraviolet light, which ionized the neutral hydrogen by stripping an electron from each hydrogen atom. This so-called "epoch of reionization" lasted until the universe was about a billion years old.

If everything in the universe were stationary, astronomers would see that only a specific wavelength of light was absorbed by neutral hydrogen. But the universe is expanding, and this stretches the wavelengths of light coming from galaxies. The amount that the light is stretched—called the redshift—depends on distance: the farther away a galaxy is, the greater the redshift.

As a result of this cosmic expansion, astronomers observe that the absorption of light by neutral hydrogen occurs at longer wavelengths for more distant galaxies. The filters enabled the researchers to determine at which wavelength the light was absorbed; this revealed the distance to the galaxy—and therefore the period in cosmic history when it was forming. Using this technique to penetrate further and further back in time, the team found a steadily decreasing number of galaxies.
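The wavelength relationship the filters exploit can be stated compactly: expansion stretches a rest-frame wavelength by a factor of (1 + z), where z is the redshift. A short sketch, using the standard rest-frame Lyman-alpha wavelength of neutral hydrogen (121.6 nm) as the absorption break; the exact filter wavelengths are not taken from the paper:

```python
# Cosmic expansion stretches a rest-frame wavelength lambda_rest to
# lambda_obs = lambda_rest * (1 + z).  Neutral hydrogen absorbs strongly
# below the Lyman-alpha line at 121.6 nm (rest frame), so the observed
# wavelength of that break encodes the galaxy's redshift and distance.

LYMAN_ALPHA_NM = 121.6  # rest-frame Lyman-alpha wavelength

def observed_wavelength(redshift, rest_nm=LYMAN_ALPHA_NM):
    return rest_nm * (1 + redshift)

# At the redshifts in this survey (z ~ 8.5 to 12) the break lands in the
# near-infrared, which is why WFC3's infrared channel was needed:
for z in (8.5, 10, 12):
    print(f"z = {z}: break at {observed_wavelength(z):.0f} nm")
```

This is also why probing still earlier epochs requires telescopes sensitive to even longer infrared wavelengths, as the article notes below.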

"Our data confirms that reionization is a drawn-out process occurring over several hundred million years with galaxies slowly building up their stars and chemical elements," says coauthor Brant Robertson of the University of Arizona in Tucson. "There wasn't a single dramatic moment when galaxies formed; it's a gradual process."

The new observations—which pushed Hubble to its technical limits—hint at what is to come with next-generation infrared space telescopes, the researchers say. To probe even further back in time to see ever more primitive galaxies, astronomers will need to observe in wavelengths longer than those that can be detected by Hubble. That's because cosmic expansion has stretched the light from the most distant galaxies so much that they glow predominantly in the infrared. The upcoming James Webb Space Telescope, slated for launch in a few years, will target those galaxies.

"Although we may have reached back as far as Hubble will see, Hubble has, in a sense, set the stage for Webb," says team member Anton Koekemoer of the Space Telescope Science Institute in Baltimore. "Our work indicates there is a rich field of even earlier galaxies that Webb will be able to study."

The title of the Astrophysical Journal Letters paper is, "The Abundance of Star-Forming Galaxies in the Redshift Range 8.5 to 12: New Results from the 2012 Hubble Ultra Deep Field Campaign." In addition to Ellis, Dunlop, Robertson, and Koekemoer, the other authors on the Astrophysical Journal Letters paper are Matthew Schenker of Caltech; Ross McLure, Rebecca Bowler, Alexander Rogers, Emma Curtis-Lake, and Michele Cirasuolo of the Institute for Astronomy at the University of Edinburgh; Yoshiaki Ono and Masami Ouchi of the University of Tokyo; Evan Schneider of the University of Arizona; Daniel Stark of the University of Cambridge; Stéphane Charlot of the Institut d'Astrophysique de Paris; and Steven Furlanetto of UCLA. The research was supported by the Space Telescope Science Institute, the European Research Council, the Royal Society, and the Leverhulme Trust.

Science Contacts:

Richard Ellis, Steele Family Professor of Astronomy
rse@astro.caltech.edu
(626) 676-5530

Matt Schenker, graduate student
schenker@astro.caltech.edu
(516) 428-0587

Writer: 
Marcus Woo

Social Synchronicity

New Caltech-led research finds a connection between bonding and matched movements

PASADENA, Calif.—Humans have a tendency to spontaneously synchronize their movements. For example, the footsteps of two friends walking together may synchronize, although neither individual is consciously aware that it is happening. Similarly, the clapping hands of an audience will naturally fall into synch. Although this type of synchronous body movement has been observed widely, its neurological mechanism and its role in social interactions remain obscure. In a new study, led by cognitive neuroscientists at the California Institute of Technology (Caltech), researchers found that body-movement synchronization between two participants increases following a short session of cooperative training, suggesting that our ability to synchronize body movements is a measurable indicator of social interaction.

"Our findings may provide a powerful tool for identifying the neural underpinnings of both normal social interactions and impaired social interactions, such as the deficits that are often associated with autism," says Shinsuke Shimojo, Gertrude Baltimore Professor of Experimental Psychology at Caltech and senior author of the study.

Shimojo, along with former postdoctoral scholar Kyongsik Yun, and Katsumi Watanabe, an associate professor at the University of Tokyo, presented their work in a paper published December 11 in Scientific Reports, an online and open-access journal from the Nature Publishing Group.

For their study, the team evaluated the hypothesis that synchronous body movement is the basis for more explicit social interaction by measuring the amount of fingertip movement between two participants who were instructed to extend their arms and point their index fingers toward one another—much like the famous scene in E.T. between the alien and Elliott. They were explicitly instructed to keep their own fingers as stationary as possible while keeping their eyes open. The researchers simultaneously recorded the neuronal activity of each participant using electroencephalography, or EEG, recordings. Their finger positions in space were recorded by a motion-capture system.

The participants repeated the task eight times; the first two rounds were called pretraining sessions and the last two were posttraining sessions. The four sessions in between were the cooperative training sessions, in which one person—a randomly chosen leader—made a sequence of large finger movements, and the other participant was instructed to follow the movements. In the posttraining sessions, finger-movement correlation between the two participants was significantly higher compared to that in the pretraining sessions. In addition, socially and sensorimotor-related brain areas were more synchronized between the brains, but not within the brain, in the posttraining sessions. According to the researchers, this experiment, while simple, is novel in that it allows two participants to interact subconsciously while the amount of movement that could potentially disrupt measurement of the neural signal is minimized.

"The most striking outcome of our study is that not only the body-body synchrony but also the brain-brain synchrony between the two participants increased after a short period of social interaction," says Yun. "This may open new vistas to study the brain-brain interface. It appears that when a cooperative relationship exists, two brains form a loose dynamic system."

The team says this information may prove useful for romantic or business partner selection.

"Because we can quantify implicit social bonding between two people using our experimental paradigm, we may be able to suggest a more socially compatible partnership in order to maximize matchmaking success rates, by preexamining body synchrony and its increase during a short cooperative session," explains Yun.

As part of the study, the team also surveyed the subjects to rank certain social personality traits, which they then compared to individual rates of increased body synchrony. For example, they found that the participants who expressed the most social anxiety showed the smallest increase in synchrony after cooperative training, while those who reported low levels of anxiety had the highest increases in synchrony. The researchers plan to further evaluate the nature of the direct causal relationship between synchronous body movement and social bonding. Further studies may explore whether a more complex social interaction, such as singing together or being teamed up in a group game, increases synchronous body movements among the participants.

"We may also apply our experimental protocol to better understand the nature and the neural correlates of social impairment in disorders where social deficits are a common symptom, as in schizophrenia or autism," says Shimojo.

The title of the Scientific Reports paper is "Interpersonal body and neural synchronization as a marker of implicit social interaction." Funding for this research was provided by the Japan Science and Technology Agency's CREST and the Tamagawa-Caltech gCOE (global Center Of Excellence) programs.

Writer: 
Katie Neith

Top 12 in 2012

Credit: Benjamin Deverman/Caltech

Gene therapy for boosting nerve-cell repair

Caltech scientists have developed a gene therapy that helps the brain replace its nerve-cell-protecting myelin sheaths—and the cells that produce those sheaths—when they are destroyed by diseases like multiple sclerosis and by spinal-cord injuries. Myelin ensures that nerve cells can send signals quickly and efficiently.

Credit: L. Moser and P. M. Bellan, Caltech

Understanding solar flares

By studying jets of plasma in the lab, Caltech researchers discovered a surprising phenomenon that may be important for understanding how solar flares occur and for developing nuclear fusion as an energy source. Solar flares are bursts of energy from the sun that launch chunks of plasma that can damage orbiting satellites and cause the northern and southern lights on Earth.

Coincidence—or physics?

Caltech planetary scientists provided a new explanation for why the "man in the moon" faces Earth. Their research indicates that the "man"—an illusion caused by dark-colored volcanic plains—faces us because of the rate at which the moon's spin slowed before becoming locked in its current orientation, even though the odds favored the moon's other, more mountainous side.

Choking when the stakes are high

In studying brain activity and behavior, Caltech biologists and social scientists learned that the more someone is afraid of loss, the worse they will perform on a given task—and that, the more loss-averse they are, the more likely it is that their performance will peak at a level far below their actual capacity.

Credit: NASA/JPL-Caltech

Eyeing the X-ray universe

NASA's NuSTAR telescope, a Caltech-led and -designed mission to explore the high-energy X-ray universe and to uncover the secrets of black holes, of remnants of dead stars, of energetic cosmic explosions, and even of the sun, was launched on June 13. The instrument is the most powerful high-energy X-ray telescope ever developed and will produce images that are 10 times sharper than any that have been taken before at these energies.

Credit: CERN

Uncovering the Higgs Boson

This summer's likely discovery of the long-sought and highly elusive Higgs boson, the fundamental particle that is thought to endow elementary particles with mass, was made possible in part by contributions from a large contingent of Caltech researchers. They have worked on this problem with colleagues around the globe for decades, building experiments, designing detectors to measure particles ever more precisely, and inventing communication systems and data storage and transfer networks to share information among thousands of physicists worldwide.

Credit: Peter Day

Amplifying research

Researchers at Caltech and NASA's Jet Propulsion Laboratory developed a new kind of amplifier that can be used for everything from exploring the cosmos to examining the quantum world. This new device operates at a frequency range more than 10 times wider than that of other similar kinds of devices, can amplify strong signals without distortion, and introduces the lowest amount of unavoidable noise.

Swims like a jellyfish

Caltech bioengineers partnered with researchers at Harvard University to build a freely moving artificial jellyfish from scratch. The researchers fashioned the jellyfish from silicone and muscle cells into what they've dubbed Medusoid; in the lab, the scientists were able to replicate some of the jellyfish's key mechanical functions, such as swimming and creating feeding currents. The work will help improve researchers' understanding of tissues and how they work, and may inform future efforts in tissue engineering and the design of pumps for the human heart.

Credit: NASA/JPL-Caltech

Touchdown confirmed

After more than eight years of planning, about 354 million miles of space travel, and seven minutes of terror, NASA's Mars Science Laboratory successfully landed on the Red Planet on August 5. The roving analytical laboratory, named Curiosity, is now using its 10 scientific instruments and 17 cameras to search Mars for environments that either were once—or are now—habitable.

Credit: Caltech/Michael Hoffmann

Powering toilets for the developing world

Caltech engineers built a solar-powered toilet that can safely dispose of human waste for just five cents per use per day. The toilet design, which won the Bill and Melinda Gates Foundation's Reinventing the Toilet Challenge, uses the sun to power a reactor that breaks down water and human waste into fertilizer and hydrogen. The hydrogen can be stored as energy in hydrogen fuel cells.

Credit: Caltech / Scott Kelberg and Michael Roukes

Weighing molecules

A Caltech-led team of physicists created the first-ever mechanical device that can measure the mass of an individual molecule. The tool could eventually help doctors to diagnose diseases, and will enable scientists to study viruses, examine the molecular machinery of cells, and better measure nanoparticles and air pollution.

Splitting water

This year, two separate Caltech research groups made key advances in the quest to extract hydrogen from water for energy use. In June, a team of chemical engineers devised a nontoxic, noncorrosive way to split water molecules at relatively low temperatures; this method may prove useful in the application of waste heat to hydrogen production. Then, in September, a group of Caltech chemists identified the mechanism by which some water-splitting catalysts work; their findings should light the way toward the development of cheaper and better catalysts.

Body: 

In 2012, Caltech faculty and students pursued research into just about every aspect of our world and beyond—from understanding human behavior, to exploring other planets, to developing sustainable waste solutions for the developing world.

In other words, 2012 was another year of discovery at Caltech. Here are a dozen research stories, which were among the most widely read and shared articles from Caltech.edu.

Did we skip your favorite? Connect with Caltech on Facebook to share your pick.

Exclude from News Hub: 
Yes

A New Tool for Secret Agents—And the Rest of Us

Caltech engineers make tiny, low-cost, terahertz imager chip

PASADENA, Calif.—A secret agent is racing against time. He knows a bomb is nearby. He rounds a corner, spots a pile of suspicious boxes in the alleyway, and pulls out his cell phone. As he scans it over the packages, their contents appear onscreen. In the nick of time, his handy smartphone application reveals an explosive device, and the agent saves the day. 

Sound far-fetched? In fact it is a real possibility, thanks to tiny inexpensive silicon microchips developed by a pair of electrical engineers at the California Institute of Technology (Caltech). The chips generate and radiate high-frequency electromagnetic waves, called terahertz (THz) waves, that fall into a largely untapped region of the electromagnetic spectrum—between microwaves and far-infrared radiation—and that can penetrate a host of materials without the ionizing damage of X-rays. 

When incorporated into handheld devices, the new microchips could enable a broad range of applications in fields ranging from homeland security to wireless communications to health care, and even touchless gaming. In the future, the technology may lead to noninvasive cancer diagnosis, among other applications.

"Using the same low-cost, integrated-circuit technology that's used to make the microchips found in our cell phones and notepads today, we have made a silicon chip that can operate at nearly 300 times their speed," says Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering at Caltech. "These chips will enable a new generation of extremely versatile sensors." 

Hajimiri and postdoctoral scholar Kaushik Sengupta (PhD '12) describe the work in the December issue of the IEEE Journal of Solid-State Circuits.

Researchers have long touted the potential of the terahertz frequency range, from 0.3 to 3 THz, for scanning and imaging. Such electromagnetic waves can easily penetrate packaging materials and render image details in high resolution, and can also detect the chemical fingerprints of pharmaceutical drugs, biological weapons, illegal drugs, or explosives. However, most existing terahertz systems involve bulky and expensive laser setups that sometimes require exceptionally low temperatures. The potential of terahertz imaging and scanning has gone untapped because of the lack of compact, low-cost technology that can operate in the frequency range.

To finally realize the promise of terahertz waves, Hajimiri and Sengupta used complementary metal-oxide semiconductor, or CMOS, technology, which is commonly used to make the microchips in everyday electronic devices, to design silicon chips with fully integrated functionalities and that operate at terahertz frequencies—but fit on a fingertip.

"This extraordinary level of creativity, which has enabled imaging in the terahertz frequency range, is very much in line with Caltech's long tradition of innovation in the area of CMOS technology," says Ares Rosakis, chair of Caltech's Division of Engineering and Applied Science. "Caltech engineers, like Ali Hajimiri, truly work in an interdisciplinary way to push the boundaries of what is possible."

The new chips boast signals more than a thousand times stronger than existing approaches, and emanate terahertz signals that can be dynamically programmed to point in a specified direction, making them the world's first integrated terahertz scanning arrays.

Using the scanner, the researchers can reveal a razor blade hidden within a piece of plastic, for example, or determine the fat content of chicken tissue. "We are not just talking about a potential. We have actually demonstrated that this works," says Hajimiri. "The first time we saw the actual images, it took our breath away." 

Hajimiri and Sengupta had to overcome multiple hurdles to translate CMOS technology into workable terahertz chips—including the fact that silicon chips are simply not designed to operate at terahertz frequencies. In fact, every transistor has a frequency, known as the cut-off frequency, above which it fails to amplify a signal—and no standard transistors can amplify signals in the terahertz range. 

To work around the cut-off-frequency problem, the researchers harnessed the collective strength of many transistors operating in unison. If multiple elements are operated at the right times at the right frequencies, their power can be combined, boosting the strength of the collective signal. 

"We came up with a way of operating transistors above their cut-off frequencies," explains Sengupta. "We are about 40 or 50 percent above the cut-off frequencies, and yet we are able to generate a lot of power and detect it because of our novel methodologies."

"Traditionally, people have tried to make these technologies work at very high frequencies, with large elements producing the power. Think of these as elephants," says Hajimiri. "Nowadays we can make a very large number of transistors that individually are not very powerful, but when combined and working in unison, can do a lot more. If these elements are synchronized—like an army of ants—they can do everything that the elephant does and then some."
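The ant-army idea is ordinary coherent power combining: when N in-phase sources add their amplitudes, output power grows as N squared, whereas unsynchronized sources merely add their individual powers. A minimal numerical sketch (illustrative values only, not the chip's actual circuit):

```python
def combined_power(n_elements, amp=1.0, coherent=True):
    """Total output power of n identical sources of amplitude amp.

    In phase, amplitudes add (n * amp), so power scales as n**2;
    out of phase, only the individual powers add, scaling as n.
    """
    if coherent:
        return (n_elements * amp) ** 2
    return n_elements * amp ** 2

# Sixteen synchronized "ants" deliver 16x the power of sixteen
# unsynchronized ones:
print(combined_power(16))                  # 256.0
print(combined_power(16, coherent=False))  # 16.0
```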

The researchers also figured out how to radiate, or transmit, the terahertz signal once it has been produced. At such high frequencies, a wire cannot be used, and traditional antennas at the microchip scale are inefficient. What they came up with instead was a way to turn the whole silicon chip into an antenna. Again, they went with a distributed approach, incorporating many small metal segments onto the chip that can all be operated at a certain time and strength to radiate the signal en masse.
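Steering a beam from many small radiators comes down to giving each element a progressive phase offset so their emissions reinforce in one chosen direction. The textbook uniform-linear-array factor below illustrates the principle (a standard phased-array model, not the paper's distributed-active-radiator design):

```python
import numpy as np

def array_factor(n, spacing_wavelengths, steer_deg, theta_deg):
    """Normalized far-field response of n radiators in a line,
    phased so the beam points toward steer_deg."""
    theta, steer = np.radians(theta_deg), np.radians(steer_deg)
    k_d = 2 * np.pi * spacing_wavelengths
    # Residual phase of each element in direction theta after steering:
    phases = k_d * (np.sin(theta) - np.sin(steer)) * np.arange(n)
    return abs(np.exp(1j * phases).sum()) / n

# Response is maximal (1.0) at the programmed angle and falls off elsewhere:
print(array_factor(8, 0.5, steer_deg=20, theta_deg=20))       # 1.0
print(array_factor(8, 0.5, steer_deg=20, theta_deg=0) < 1.0)  # True
```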

"We had to take a step back and ask, 'Can we do this in a different way?'" says Sengupta. "Our chips are an example of the kind of innovations that can be unearthed if we blur the partitions between traditional ways of thinking about integrated circuits, electromagnetics, antennae, and the applied sciences. It is a holistic solution."

 The paper is titled "A 0.28 THz Power-Generation and Beam-Steering Array in CMOS Based on Distributed Active Radiators." IBM helped with chip fabrication for this work.

Writer: 
Kimm Fesenmaier
Contact: 
Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

Point of Light

Caltech engineers invent light-focusing device that may lead to applications in computing, communications, and imaging

PASADENA, Calif.—As technology advances, it tends to shrink. From cell phones to laptops—powered by increasingly faster and tinier processors—everything is getting thinner and sleeker. And now light beams are getting smaller, too.

Engineers at the California Institute of Technology (Caltech) have created a device that can focus light into a point just a few nanometers (billionths of a meter) across—an achievement they say may lead to next-generation applications in computing, communications, and imaging.

Because light can carry greater amounts of data more efficiently than electrical signals traveling through copper wires, today's technology is increasingly based on optics. The world is already connected by thousands of miles of optical-fiber cables that deliver email, images, and the latest video gone viral to your laptop.

As we all produce and consume more data, computers and communication networks must be able to handle the deluge of information. Focusing light into tinier spaces can squeeze more data through optical fibers and increase bandwidth. Moreover, by being able to control light at such small scales, optical devices can also be made more compact, requiring less energy to power them.

But focusing light to such minute scales is inherently difficult. Once you reach sizes smaller than the wavelength of light—a few hundred nanometers in the case of visible light—you reach what's called the diffraction limit, and it's physically impossible to focus the light any further.
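The diffraction limit described here is roughly half the light's wavelength, per the standard Abbe relation (spot size about λ divided by twice the numerical aperture); a quick back-of-envelope check, with the NA value assumed:

```python
def diffraction_limited_spot_nm(wavelength_nm, numerical_aperture=1.0):
    """Abbe diffraction limit: smallest focusable spot ~ lambda / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

# Green light at 550 nm cannot be conventionally focused below
# roughly 275 nm -- far larger than the few-nanometer point the
# new waveguide achieves via surface plasmon polaritons.
print(diffraction_limited_spot_nm(550))  # 275.0
```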

But now the Caltech researchers, co-led by assistant professor of electrical engineering Hyuck Choo, have built a new kind of waveguide—a tunnellike device that channels light—that gets around this natural limit. The waveguide, which is described in a recent issue of the journal Nature Photonics, is made of amorphous silicon dioxide—which is similar to common glass—and is covered in a thin layer of gold. Just under two microns long, the device is a rectangular box that tapers to a point at one end.

As light is sent through the waveguide, the photons interact with electrons at the interface between the gold and the silicon dioxide. Those electrons oscillate, and the oscillations propagate along the device as waves—much as vibrations of air molecules travel as sound waves. Because the electron oscillations are directly coupled with the light, they carry the same information and properties—and they therefore serve as a proxy for the light.

Instead of focusing the light alone—which is impossible due to the diffraction limit—the new device focuses these coupled electron oscillations, called surface plasmon polaritons (SPPs). The SPPs travel through the waveguide and are focused as they go through the pointy end.

Because the new device is built on a semiconductor chip with standard nanofabrication techniques, says Choo, the co-lead and co-corresponding author of the paper, it is easy to integrate with today's technology.

Previous on-chip nanofocusing devices were only able to focus light into a narrow line. They also were inefficient, typically focusing only a few percent of the incident photons, with the majority absorbed and scattered as they traveled through the devices.

With the new device, light can ultimately be focused in three dimensions, producing a point a few nanometers across, and using half of the light that's sent through, Choo says. (Focusing the light into a slightly bigger spot, 14 by 80 nanometers in size, boosts the efficiency to 70 percent.) The key feature behind the device's focusing ability and efficiency, he says, is its unique design and shape.

"Our new device is based on fundamental research, but we hope it's a good building block for many potentially revolutionary engineering applications," says Myung-Ki Kim, a postdoctoral scholar and the other lead author of the paper.

For example, one application is to turn this nanofocusing device into an efficient, high-resolution biological-imaging instrument, Kim says. A biologist can dye specific molecules in a cell with fluorescent proteins that glow when struck by light. Using the new device, a scientist can focus light into the cell, causing the fluorescent proteins to shine. Because the device concentrates light into such a small point, it can create a high-resolution map of those dyed molecules. Light can also travel in the reverse direction through the nanofocuser: by collecting light through the narrow point, the device turns into a high-resolution microscope. 

The device can also lead to computer hard drives that hold more memory via heat-assisted magnetic recording. Normal hard drives consist of rows of tiny magnets whose north and south poles lie end to end. Data is recorded by applying a magnetic field to switch the polarity of the magnets.

Smaller magnets would allow more memory to be squeezed into a disc of a given size. But the polarities of smaller magnets made of current materials are unstable at room temperature, causing the magnetic poles to spontaneously flip—and for data to be lost. Instead, more stable materials can be used—but those require heat to record data. The heat makes the magnets more susceptible to polarity reversals. Therefore, to write data, a laser is needed to heat the individual magnets, allowing a surrounding magnetic field to flip their polarities.

Today's technology, however, can't focus a laser into a beam that is narrow enough to individually heat such tiny magnets. Indeed, current lasers can only concentrate a beam to an area 300 nanometers wide, which would heat the target magnet as well as adjacent ones—possibly spoiling other recorded data.

Because the new device can focus light down to such small scales, it can heat smaller magnets individually, making it possible for hard drives to pack more magnets and therefore more memory. With current technology, discs can't hold more than 1 terabyte (1,000 gigabytes) per square inch. A nanofocusing device, Choo says, can bump that to 50 terabytes per square inch.
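Those areal densities translate directly into magnet size. Treating the bits as a square grid (and taking 1 terabyte as 8×10¹² bits—a rough sketch, not an actual drive layout):

```python
import math

NM_PER_INCH = 25.4e6  # nanometers in one inch

def bit_pitch_nm(terabytes_per_sq_inch):
    """Center-to-center bit spacing on a square grid at a given
    areal density (1 TB taken as 8e12 bits)."""
    bits_per_sq_inch = terabytes_per_sq_inch * 8e12
    return NM_PER_INCH / math.sqrt(bits_per_sq_inch)

# Going from 1 to 50 TB per square inch shrinks the bit pitch from
# about 9 nm to about 1.3 nm -- far below what a 300-nm-wide laser
# spot could heat individually.
print(round(bit_pitch_nm(1), 1))   # 9.0
print(round(bit_pitch_nm(50), 2))  # 1.27
```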

Then there's the myriad of data-transfer and communication applications, the researchers say. As computing becomes increasingly reliant on optics, devices that concentrate and control data-carrying light at the nanoscale will be essential—and ubiquitous, says Choo, who is a member of the Kavli Nanoscience Institute at Caltech. "Don't be surprised if you see a similar kind of device inside a computer you may someday buy."

The next step is to optimize the design and to begin building imaging instruments and sensors, Choo says. The device is versatile enough that relatively simple modifications could allow it to be used for imaging, computing, or communication.

The title of the Nature Photonics paper is "Nanofocusing in a metal-insulator-metal gap plasmon waveguide with a three-dimensional linear taper." In addition to Choo and Kim, the other authors are Matteo Staffaroni, Tae Joon Seok, Jeffrey Bokor, Ming C. Wu, and Eli Yablonovitch of UC Berkeley and Stefano Cabrini and P. James Schuck of the Molecular Foundry at Lawrence Berkeley National Lab. The research was funded by the Defense Advanced Research Projects Agency (DARPA) Science and Technology Surface-Enhanced Raman Spectroscopy program, the Department of Energy, and the Division of Engineering and Applied Science at Caltech.

This video shows the final fabrication step of the nanofocusing device. A stream of high-energy gallium ions blasts away unwanted layers of gold and silicon dioxide to carve out the shape of the device.

Writer: 
Marcus Woo
Writer: 
Exclude from News Hub: 
No
News Type: 
Research News

3-D Dentistry

A Caltech imaging innovation will ease your trip to the dentist and may soon energize home entertainment systems too.

Although dentistry has come a long way since the time when decayed teeth were extracted by brute force, most dentists are still using the clumsy, time-consuming, and imperfect impression method when making crowns or bridges. But that process could soon go the way of general anesthesia in family dentistry thanks to a 3-D imaging device developed by Mory Gharib, Caltech vice provost and Hans W. Liepmann Professor of Aeronautics and professor of bioinspired engineering.

By the mid-2000s, complex dental imaging machines—also called dental scanners—began appearing on the market. The devices take pictures of teeth that can be used to create crowns and bridges via computer-aided design/computer-aided manufacturing (CAD/CAM) techniques, giving the patient a new tooth the same day. But efficiency doesn't come without cost—and at more than $100,000 for an entire system, few dentists can afford to invest in the equipment. Within that challenge, Gharib saw an opportunity.

An expert in biomedical engineering, Gharib had built a 3-D microscope in 2006 to help him design better artificial heart valves and other devices for medical applications. Since it's not very practical to view someone's mouth through a microscope, he thought that he could design and build an affordable and portable 3-D camera that would do the same job as the expensive dental scanners.

The system he came up with is surprisingly simple. The camera, which fits into a handheld device, has three apertures that take a picture of the tooth at the same time but from different angles. The three images are then blended together using a computer algorithm to construct a 3-D image. In 2009, Gharib formed a company called Arges Imaging to commercialize the product; last year, Arges was acquired by a multinational dental-technology manufacturer that has been testing the camera with dentists.
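The principle behind recovering depth from offset apertures can be illustrated with the simplest possible case—classic two-view triangulation. This is only a toy stand-in with hypothetical numbers; the actual Arges reconstruction blends three views with its own algorithm:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Toy two-view triangulation: depth z = f * b / d, where a feature
    shifts by disparity d (pixels) between views whose apertures are
    separated by baseline b."""
    return focal_px * baseline_mm / disparity_px

# A feature that shifts 20 px between apertures 10 mm apart, imaged
# with an 800 px focal length, lies 400 mm from the camera:
print(depth_from_disparity(800, 10, 20))  # 400.0
```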

"Professor Gharib is as brilliant a scientist as he is an engineer and inventor," says Ares Rosakis, chair of Caltech's Division of Engineering and Applied Science. "I think that's what we have to do to look at humanity's big problems: we have to be ready to act as pure scientists when we observe and discover as well as act as practical engineers when we invent and apply. This continuous interplay happens at Caltech better than at other institutions."

Indeed, Gharib did not stop with dental applications for his 3-D scanner, but quickly realized that the technology had promise in other industries. For example, there are many potential applications in consumer electronics and other products, he says. While motion-sensing devices with facial and voice-recognition capabilities, like Microsoft's Kinect for the Xbox 360, allow players to feel like they are in the game—running, jumping, and flying over obstacles—"the gestures required are extreme," says Gharib. A more sophisticated imager could make players really feel like they are part of the action.

In robotic and reconstructive surgery, a 3-D imager could provide surgeons with a tool to help them achieve better accuracy and precision. "What if I could take a 3-D picture of your head and have a machine sculpt it into a bust?" says Gharib. "With CAD/CAM, you can take a computer design and turn that into a sculpture, but you need someone who is expert at programming. What if a camera could take a photo and give you 3-D perspective? We have expensive 3-D motion-picture cameras now and 3-D displays, but we don't have much media for them," says Gharib, who earlier this year formed a new company called Apertura Imaging to try to improve the 3-D imaging technology for these nondental applications. "Once we build this new camera, people will come up with all sorts of applications," he says.

Writer: 
Michael Rogers
Writer: 
Exclude from News Hub: 
No

More Evidence for an Ancient Grand Canyon

Caltech study supports theory that giant gorge dates back to Late Cretaceous period

For over 150 years, geologists have debated how and when one of the most dramatic features on our planet—the Grand Canyon—was formed. New data unearthed by researchers at the California Institute of Technology (Caltech) builds support for the idea that conventional models, which say the enormous ravine is 5 to 6 million years old, are way off.

In fact, the Caltech research points to a Grand Canyon that is many millions of years older than previously thought, says Kenneth A. Farley, Keck Foundation Professor of Geochemistry at Caltech and coauthor of the study. "Rather than being formed within the last few million years, our measurements suggest that a deep canyon existed more than 70 million years ago," he says.

Farley and Rebecca Flowers—a former postdoctoral scholar at Caltech who is now an assistant professor at the University of Colorado, Boulder—outlined their findings in a paper published in the November 29 issue of Science Express.

Building upon previous research by Farley's lab that showed that parts of the eastern canyon are likely to be at least 55 million years old, the team used a new method to test ancient rocks found at the bottom of the canyon's western section. Past experiments used the amount of helium produced by radioactive decay in apatite—a mineral found in the canyon's walls—to date the samples. This time around, Farley and Flowers took a closer look at the apatite grains by analyzing not only the amount but also the spatial distribution of helium atoms that were trapped within the crystals of the mineral as they moved closer to the surface of the earth during the massive erosion that caused the Grand Canyon to form.

Rocks buried in the earth are hot—with temperatures increasing by about 25 degrees Celsius for every kilometer of depth—but as a river canyon erodes the surface downward toward a buried rock, that rock cools. The thermal history—shown by the helium distribution in the apatite grains—gives important clues about how much time has passed since there was significant erosion in the canyon.
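The article's 25-degrees-per-kilometer gradient makes the thermometry concrete (the 20 °C surface temperature below is an assumed value for illustration):

```python
def rock_temperature_c(depth_km, surface_c=20.0, gradient_c_per_km=25.0):
    """Rock temperature from a linear geothermal gradient."""
    return surface_c + gradient_c_per_km * depth_km

# Rock under 2 km of overburden sits near 70 C; as a canyon erodes
# down toward it, the rock cools to surface temperature -- and that
# cooling history is what the helium in apatite records.
print(rock_temperature_c(2.0))  # 70.0
print(rock_temperature_c(0.0))  # 20.0
```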

"If you can document cooling through temperatures only a few degrees warmer than the earth's surface, you can learn about canyon formation," says Farley, who is also chair of the Division of Geological and Planetary Sciences at Caltech.

The analysis of the spatial distribution of helium allowed for detection of variations in the thermal structure at shallow levels of Earth's crust, says Flowers. That gave the team dates that enabled them to fine-tune the timeframe when the Grand Canyon was incised, or cut.

"Our research implies that the Grand Canyon was directly carved to within a few hundred meters of its modern depth by about 70 million years ago," she says.

Now that they have narrowed down the "when" of the Grand Canyon's formation, the geologists plan to continue investigations into how it took shape. The genesis of the canyon has important implications for understanding the evolution of many geological features in the western United States, including their tectonics and topography, according to the team.

"Our major scientific objective is to understand the history of the Colorado Plateau—why does this large and unusual geographic feature exist, and when was it formed," says Farley. "A canyon cannot form without high elevation—you don't cut canyons in rocks below sea level. Also, the details of the canyon's incision seem to suggest large-scale changes in surface topography, possibly including large-scale tilting of the plateau."

"Apatite 4He/3He and (U-Th)/He evidence for an ancient Grand Canyon" appears in the November 29 issue of the journal Science Express. Funding for the research was provided by the National Science Foundation. 

Writer: 
Katie Neith
Writer: 
Exclude from News Hub: 
No
News Type: 
Research News
