NuSTAR Space Telescope Blasts Off

Caltech-led mission will explore the X-ray universe

This morning, NASA's NuSTAR telescope was launched into the low-Earth orbit from which it will begin exploring the high-energy X-ray universe to uncover the secrets of black holes, the dense remnants of dead stars, energetic cosmic explosions, and even our very own sun.   

The space telescope—the most powerful high-energy X-ray telescope ever developed—rode toward its destination inside the nose of a Pegasus rocket strapped onto the belly of a "Stargazer" L-1011 aircraft. Around 9:00 a.m. (PDT), the plane—which had earlier taken off from the Kwajalein Atoll in the western Pacific—dropped the rocket from an altitude of 39,000 feet. The rocket was in free fall for about five seconds before the first of its three stages ignited, blasting NuSTAR into orbit around the equator.

"NuSTAR will open a whole new window on the universe by being the very first telescope to focus high-energy X rays," says Fiona Harrison, professor of physics and astronomy at Caltech and the principal investigator of the NuSTAR mission. The telescope is 100 times more sensitive than any previous high-energy X-ray telescope, and it will make images that are 10 times sharper than any that have been taken before at these energies, she says. 

NuSTAR—short for "Nuclear Spectroscopic Telescope Array"—can make sensitive observations of cosmic phenomena at higher frequencies than other X-ray telescopes now in orbit, including NASA's Chandra X-ray Observatory and the European Space Agency's XMM-Newton observatory. X rays are at the high-frequency end of the electromagnetic spectrum, and can be thought of as light particles called photons that carry a lot of energy. Chandra and XMM-Newton can detect X rays with energies up to about 10,000 electron volts (eV); NuSTAR will be able to see photons with energies up to 78,000 eV. By comparison, visible light carries just a few electron volts per photon, while the X rays used to check for broken bones have energies of tens of thousands of electron volts.
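
For readers who want to see the arithmetic, the quoted energies translate into wavelengths through the standard photon relation E = hc/λ. The numbers below are a back-of-the-envelope check of our own, not part of the mission description.

```python
# Back-of-the-envelope check (ours, not from the mission description):
# converting photon energy to wavelength with E = h*c/lambda to compare
# NuSTAR's band with visible light.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron volt

def wavelength_nm(energy_ev):
    """Photon wavelength in nanometers for a given photon energy in eV."""
    return H * C / (energy_ev * EV) * 1e9

for label, energy in [("visible light (~2 eV)", 2),
                      ("Chandra/XMM-Newton limit (~10,000 eV)", 10_000),
                      ("NuSTAR upper limit (78,000 eV)", 78_000)]:
    print(f"{label}: {wavelength_nm(energy):.4f} nm")
# ~620 nm for visible light versus ~0.016 nm at 78,000 eV: a higher-energy
# photon has a far shorter wavelength.
```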

High-energy X rays can penetrate skin and muscle because they carry so much energy. But that also means that they are hard to reflect. The mirrors in optical telescopes cannot be used to reflect and focus X rays. Instead, X-ray telescopes can only reflect incoming X rays at very shallow angles. These photons travel on paths that are almost parallel to the reflective surface, like rocks skipping on a pond. To reflect enough X rays for the detectors to observe, telescopes must use nested cylindrical shells that focus the photons onto a detector. Chandra has four such nested shells; XMM-Newton has 58. NuSTAR, in contrast, has 133, providing unprecedented sensitivity and resolution.

Each of NuSTAR's nested shells is coated with about two hundred thin reflective layers, some merely a few atoms thick, composed of either high-density materials, such as tungsten and platinum, or low-density materials like silicon and carbon. By alternating the two types of layers, NuSTAR's designers have produced a crystal-like structure that reflects high-energy X rays. Harrison and her group at Caltech started developing this technology more than 15 years ago and first tested it in a balloon experiment called the High-Energy Focusing Telescope (HEFT) in 2005. 

[Photo: Fiona Harrison, professor of physics and astronomy at Caltech, is the principal investigator of the NuSTAR mission. Credit: Lance Hayashida]

A telescope focuses light by bending it so that it converges onto one spot—an eyepiece or a detector. But because X rays can only be reflected at such shallow angles, they do not converge very strongly. As a result, the distance between an X-ray telescope's mirrors and the detector must be especially long for the X rays to focus. Chandra is 45 feet long and XMM-Newton is about 30 feet long—as big as buses. NuSTAR—funded under NASA's Explorers program, which emphasizes smaller, cheaper missions that do science as efficiently as possible—has a deployable mast that allows it to squeeze inside the Pegasus rocket's roughly seven-foot-long payload compartment. About a week after launch, the mast will unfold and stretch to more than 30 feet. 
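
A rough geometric sketch, with illustrative numbers of our own choosing rather than official NuSTAR specifications, shows why grazing-incidence optics force such a long focal length: a ray reflected twice at a graze angle θ is deflected by only about 4θ, so the mirrors must sit far from the detector.

```python
# Rough geometric sketch with assumed numbers (not official NuSTAR specs):
# in a two-reflection grazing-incidence telescope, a ray is deflected by only
# about four times the graze angle, so the focal length is roughly
#   f ~ r / tan(4*theta)
# where r is the mirror-shell radius and theta is the graze angle.
import math

def focal_length_m(shell_radius_m, graze_angle_deg):
    """Approximate focal length of a grazing-incidence mirror shell."""
    return shell_radius_m / math.tan(math.radians(4 * graze_angle_deg))

# Assumed, representative values: a 0.2 m outer shell and a 0.25-degree graze angle.
print(f"{focal_length_m(0.2, 0.25):.0f} m")  # ~11 m -- tens of feet between mirrors and detector
```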

This new technology, Harrison explains, "will allow NuSTAR to study some of the hottest, densest, and most energetic phenomena in the universe." Black holes are a key target of the telescope. Just 20 years ago, she says, black holes were thought to be rare and exotic, but "today we know that every galaxy has a massive black hole in its heart." Our own Milky Way galaxy, with a black hole four million times as massive as our sun, is no exception. Gas and dust block most of our view of the galactic center, but by observing in high-energy X rays, NuSTAR can peer directly into the heart of the Milky Way. 

Disks of gas and dust surround many of the supermassive black holes of other galaxies. As the material spirals into these black holes, which are millions to billions of times as massive as the sun, the regions closest to the black hole radiate prodigious amounts of high-energy X rays, which are visible even if the black hole is hidden behind dust and gas. NuSTAR will therefore allow astronomers not only to conduct a census of the black holes in the cosmic neighborhood but also to study the extreme environments around them. Astronomers will even be able to measure how fast black holes spin, which is important for understanding how they form and their role in the history and evolution of their host galaxies.

Astronomers will also point NuSTAR at supernova remnants, the hot embers left over from exploded stars. After a star burns through all of its fuel, it blows up, blasting material out into space. In that explosion, new elements are formed (in fact, many of the heavier elements on Earth were forged long ago in stars and supernovae). Some newborn atoms are radioactive, and NuSTAR will be able to detect this radioactivity, allowing astronomers to probe what happens during the fiery death of a star.

The telescope will also devote some time to the observation of our own star, the sun. The outer layer of the sun, called the corona, burns at millions of degrees. Some scientists speculate that nanoflares—smaller versions of the solar flares that occasionally erupt from the solar surface—keep the corona hot. NuSTAR may be able to see nanoflares for the first time. "In a few hours of observations, NuSTAR will answer this longstanding question that solar physicists have been scratching their heads about for years," says Daniel Stern of NASA's Jet Propulsion Laboratory, NuSTAR's project scientist.

In July, NuSTAR will start taking data, revealing a whole new X-ray universe—shining, shimmering, and splendid—to scientists. "We expect amazing discoveries from it," Stern says.

The NuSTAR mission is led by Caltech and managed by JPL for NASA.

Writer: Marcus Woo

Caltech Geologists Discover Ancient Buried Canyon in South Tibet

A team of researchers from Caltech and the China Earthquake Administration has discovered an ancient, deep canyon buried along the Yarlung Tsangpo River in south Tibet, north of the eastern end of the Himalayas. The geologists say that the ancient canyon—thousands of feet deep in places—effectively rules out a popular model used to explain how the massive and picturesque gorges of the Himalayas became so steep, so fast.

"I was extremely surprised when my colleagues, Jing Liu-Zeng and Dirk Scherler, showed me the evidence for this canyon in southern Tibet," says Jean-Philippe Avouac, the Earle C. Anthony Professor of Geology at Caltech. "When I first saw the data, I said, 'Wow!' It was amazing to see that the river once cut quite deeply into the Tibetan Plateau because it does not today. That was a big discovery, in my opinion." 

Geologists like Avouac and his colleagues, who are interested in tectonics—the study of the earth's surface and the way it changes—can use tools such as GPS and seismology to study crustal deformation that is taking place today. But if they are interested in studying changes that occurred millions of years ago, such tools are not useful because the activity has already happened. In those cases, rivers become a main source of information because they leave behind geomorphic signatures that geologists can interrogate to learn about the way those rivers once interacted with the land—helping them to pin down when the land changed and by how much, for example.

"In tectonics, we are always trying to use rivers to say something about uplift," Avouac says. "In this case, we used a paleocanyon that was carved by a river. It's a nice example where by recovering the geometry of the bottom of the canyon, we were able to say how much the range has moved up and when it started moving."

The team reports its findings in the current issue of Science.

Last year, civil engineers from the China Earthquake Administration collected cores by drilling into the valley floor at five locations along the Yarlung Tsangpo River. Shortly after, former Caltech graduate student Jing Liu-Zeng, who now works for that administration, returned to Caltech as a visiting associate and shared the core data with Avouac and Dirk Scherler, then a postdoc in Avouac's group. Scherler had previously worked in the far western Himalayas, where the Indus River has cut deeply into the Tibetan Plateau, and immediately recognized that the new data suggested the presence of a paleocanyon.

Liu-Zeng and Scherler analyzed the core data and found that at several locations, sedimentary conglomerates—rounded gravel and larger rocks cemented together, a signature of flowing rivers—extended down to a depth of about 800 meters, at which point the record clearly indicated bedrock. This suggested that the river once carved deeply into the plateau.

To establish when the river switched from incising bedrock to depositing sediments, they measured two isotopes, beryllium-10 and aluminum-26, in the lowest sediment layer. Both isotopes are produced when rocks and sediment are exposed to cosmic rays at the surface, and they decay at different rates once buried; this allowed the geologists to determine that the paleocanyon started to fill with sediment about 2.5 million years ago.
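
The logic of that dating method can be sketched in a few lines. This is our own simplified illustration, using commonly quoted half-lives and a representative surface production ratio; the paper's actual analysis is more involved.

```python
# Sketch of burial dating (a simplified illustration, not the paper's analysis).
# While sediment sits at the surface, cosmic rays produce 26Al and 10Be at a
# roughly fixed ratio; once buried, 26Al decays faster, so the measured ratio
# gives a burial age:
#   t = ln(R_surface / R_measured) / (lambda_26 - lambda_10)
import math

T_HALF_BE10 = 1.387   # Myr, commonly quoted half-life of beryllium-10
T_HALF_AL26 = 0.705   # Myr, commonly quoted half-life of aluminum-26
R_SURFACE = 6.75      # typical 26Al/10Be production ratio at the surface

LAM_BE10 = math.log(2) / T_HALF_BE10
LAM_AL26 = math.log(2) / T_HALF_AL26

def burial_age_myr(measured_ratio):
    """Burial age in Myr from a measured 26Al/10Be ratio (single burial event assumed)."""
    return math.log(R_SURFACE / measured_ratio) / (LAM_AL26 - LAM_BE10)

# A hypothetical measured ratio of ~2 would imply burial roughly 2.5 Myr ago.
print(f"{burial_age_myr(2.0):.1f} Myr")
```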

The researchers' reconstruction of the former valley floor showed that the slope of the river once increased gradually from the Gangetic Plain to the Tibetan Plateau, with no sudden changes, or knickpoints. Today, the river, like most others in the area, has a steep knickpoint where it meets the Himalayas, at a place known as the Namche Barwa massif. There, the uplift of the mountains is extremely rapid (on the order of 1 centimeter per year, whereas in other areas 5 millimeters per year is more typical) and the river drops by 2 kilometers in elevation as it flows through the famous Tsangpo Gorge, known by some as the Yarlung Tsangpo Grand Canyon because it is so deep and long.

Combining the depth and age of the paleocanyon with the geometry of the valley, the geologists surmised that the river existed in this location prior to about 3 million years ago, but at that time, it was not affected by the Himalayas. However, as the Indian and Eurasian plates continued to collide and the mountain range pushed northward, it began impinging on the river. Suddenly, about 2.5 million years ago, a rapidly uplifting section of the mountain range got in the river's way, damming it, and the canyon subsequently filled with sediment.

"This is the time when the Namche Barwa massif started to rise, and the gorge developed," says Scherler, one of two lead authors on the paper and now at the GFZ German Research Center for Geosciences in Potsdam, Germany.

That picture of the river and the Tibetan Plateau, which involves the river incising deeply into the plateau millions of years ago, differs quite a bit from the conventional geologic view. Typically, geologists believe that when rivers start to incise into a plateau, they eat away at its edges, slowly working their way into the plateau over time. However, the rivers flowing across the Himalayas all have strong knickpoints and have not incised much at all into the Tibetan Plateau. Therefore, the thought has been that the rapid uplift of the Himalayas has pushed the rivers back, effectively pinning them, so that they have not been able to make their way into the plateau. But that explanation does not work with the newly discovered paleocanyon.

The team's new hypothesis also rules out a model that has been around for about 15 years, called tectonic aneurysm, which suggests that the rapid uplift seen at the Namche Barwa massif was triggered by intense river incision. In tectonic aneurysm, a river cuts down through the earth's crust so fast that it causes the crust to heat up, making a nearby mountain range weaker and facilitating uplift.

The model is popular among geologists, and indeed Avouac himself published a modeling paper in 1996 that showed the viability of the mechanism. "But now we have discovered that the river was able to cut into the plateau way before the uplift happened," Avouac says, "and this shows that the tectonic aneurysm model was actually not at work here. The rapid uplift is not a response to river incision."

The other lead author on the paper, "Tectonic control of Yarlung Tsangpo Gorge revealed by a buried canyon in Southern Tibet," is Ping Wang of the State Key Laboratory of Earthquake Dynamics, in Beijing, China. Additional authors include Jürgen Mey, of the University of Potsdam, in Germany; and Yunda Zhang and Dingguo Shi of the Chengdu Engineering Corporation, in China. The work was supported by the National Natural Science Foundation of China, the State Key Laboratory for Earthquake Dynamics, and the Alexander von Humboldt Foundation. 

Writer: Kimm Fesenmaier

Heat Transfer Sets the Noise Floor for Ultrasensitive Electronics

A team of engineers and scientists has identified a source of electronic noise that could affect the functioning of instruments operating at very low temperatures, such as devices used in radio telescopes and advanced physics experiments.

The findings, detailed in the November 10 issue of the journal Nature Materials, could have implications for the future design of transistors and other electronic components.

The electronic noise the team identified is related to the temperature of the electrons in a given device, which in turn is governed by heat transfer due to packets of vibrational energy, called phonons, that are present in all crystals. "A phonon is similar to a photon, which is a discrete packet of light," says Austin Minnich, an assistant professor of mechanical engineering and applied physics in Caltech's Division of Engineering and Applied Science and corresponding author of the new paper. "In many crystals, from ordinary table salt to the indium phosphide crystals used to make transistors, heat is carried mostly by phonons."

Phonons are important for electronics because they help carry away the thermal energy that is injected into devices in the form of electrons. How swiftly and efficiently phonons ferry away heat is partly dependent on the temperature at which the device is operated: at high temperatures, phonons collide with one another and with imperfections in the crystal in a phenomenon called scattering, and this creates phonon traffic jams that result in a temperature rise.

One way that engineers have traditionally reduced phonon scattering is to use high-quality materials that contain as few defects as possible. "The fewer defects you have, the fewer 'road blocks' there are for the moving phonons," Minnich says.

A more common solution, however, is to operate electronics in extremely cold conditions because scattering drops off dramatically when the temperature dips below about 50 kelvins, or about –370 degrees Fahrenheit. "As a result, the main strategy for reducing noise is to operate the devices at colder and colder temperatures," Minnich says.

But the new findings by Minnich's team suggest that while this strategy is effective, another phonon transfer mechanism comes into play at extremely low temperatures and severely restricts the heat transfer away from a device.

Using a combination of computer simulations and real-world experiments, Minnich and his team showed that at around 20 kelvins, or –424 degrees Fahrenheit, the high-energy phonons that are most efficient at transporting heat away quickly are unlikely to be present in a crystal. "At 20 kelvins, many phonon modes become deactivated, and the crystal has only low-energy phonons that don't have enough energy to carry away the heat," Minnich says. "As a result, the transistor heats up until the temperature has increased enough that high-energy phonons become available again."
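
A small, illustrative calculation, with numbers of our own choosing rather than the paper's, makes that "deactivation" concrete: the average number of phonons in a mode follows the Bose-Einstein distribution, so a high-energy mode that is well populated at room temperature is essentially empty at 20 kelvins.

```python
# Illustrative calculation (assumed numbers, not from the paper): the average
# occupation of a phonon mode follows the Bose-Einstein distribution,
#   n = 1 / (exp(E_phonon / kT) - 1),
# so a high-energy mode that carries heat efficiently at room temperature is
# essentially absent at 20 kelvins.
import math

K_B_MEV = 0.08617   # Boltzmann constant in meV per kelvin

def occupation(energy_mev, temperature_k):
    """Average Bose-Einstein occupation of a phonon mode."""
    return 1.0 / (math.exp(energy_mev / (K_B_MEV * temperature_k)) - 1.0)

E_PHONON = 30.0     # meV -- an assumed, representative high-energy phonon
for temp in (300, 77, 20):
    print(f"T = {temp:3d} K: n = {occupation(E_PHONON, temp):.2e}")
# Occupation drops by many orders of magnitude between 300 K and 20 K, which is
# the mode 'deactivation' described in the text.
```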

As an analogy, Minnich says to imagine an object that is heated until it is white hot. "When something is white hot, the full spectrum of photons, from red to blue, contribute to the heat transfer, and we know from everyday experience that something white hot is extremely hot," he says. "When something is not as hot it glows red, and in this case heat is only carried by red photons with low energy. The physics for phonons is exactly the same—even the equations are the same."

The electronic noise that the team identified has been known about for many years, but until now it was not thought to play an important role at low temperatures. That discovery happened because of a chance encounter between Minnich and Joel Schleeh, a postdoctoral scholar from Chalmers University of Technology in Sweden and first author of the new study, who was at Caltech visiting the lab of Sander Weinreb, a senior faculty associate in electrical engineering.

Schleeh had noticed that the noise he was measuring in an amplifier was higher than what theory predicted. Schleeh mentioned the problem to Weinreb, and Weinreb recommended he connect with Minnich, whose lab studies heat transfer by phonons. "At another university, I don't think I would have had this chance," Minnich says. "Neither of us would have had the chance to interact like we did here. Caltech is a small campus, so when you talk to someone, almost by definition they're outside of your field."

The pair's findings could have implications for numerous fields of science that rely on superchilled instruments to make sensitive measurements. "In radio astronomy, you're trying to detect very weak electromagnetic waves from space, so you need the lowest noise possible," Minnich says.

Electronic noise poses a similar problem for quantum-physics experiments. "Here at Caltech, we have physicists trying to observe certain quantum-physics effects. The signal that they're looking for is very tiny, and it's essential to use the lowest-noise electronics possible," Minnich says.

The news is not all gloomy, however, because the team's findings also suggest that it may be possible to develop engineering strategies to make phonon heat transfer more efficient at low temperatures. For example, one possibility might be to change the design of transistors so that phonon generation takes place over a broader volume. "If you can make the phonon generation more spread out, then in principle you could reduce the temperature rise that occurs," Minnich says.

"We don't know what the precise strategy will be yet, but now we know the direction we should be going. That's an improvement."

In addition to Minnich and Schleeh, the other coauthors of the paper, "Phonon blackbody radiation limit for heat dissipation in electronics," are Javier Mateos and Ignacio Iñiguez-de-la-Torre of the Universidad de Salamanca in Salamanca, Spain; Niklas Wadefalk of the Low Noise Factory AB in Mölndal, Sweden; and Per A. Nilsson and Jan Grahn of Chalmers University of Technology. Minnich's work on the project at Caltech was funded by a Caltech start-up fund and by the National Science Foundation.

Written by Ker Than


Robotic Ocean Gliders Aid Study of Melting Polar Ice

The rapidly melting ice sheets on the coast of West Antarctica are a potential major contributor to rising ocean levels worldwide. Although warm water near the coast is thought to be the main factor causing the ice to melt, the process by which this water ends up near the cold continent is not well understood.

Using robotic ocean gliders, Caltech researchers have now found that swirling ocean eddies, similar to atmospheric storms, play an important role in transporting these warm waters to the Antarctic coast—a discovery that will help the scientific community determine how rapidly the ice is melting and, as a result, how quickly ocean levels will rise.

Their findings were published online on November 10 in the journal Nature Geoscience.

"When you have a melting slab of ice, it can either melt from above because the atmosphere is getting warmer or it can melt from below because the ocean is warm," explains lead author Andrew Thompson, assistant professor of environmental science and engineering. "All of our evidence points to ocean warming as the most important factor affecting these ice shelves, so we wanted to understand the physics of how the heat gets there."

Ordinarily when oceanographers like Thompson want to investigate such questions, they use ships to lower instruments through the water or they collect ocean temperature data from above with satellites. These techniques are problematic in the Southern Ocean. "Observationally, it's a very hard place to get to with ships. Also, the warm water is not at the surface, making satellite observations ineffective," he says.

Because the gliders are small—only about six feet long—and are very energy efficient, they can sample the ocean for much longer periods than large ships can. When the glider surfaces every few hours, it "calls" the researchers via a mobile phone–like device located on the tail. This communication allows the researchers to almost immediately access the information the glider has collected.

Like airborne gliders, the bullet-shaped ocean gliders have no propeller; instead they use batteries to power a pump that changes the glider's buoyancy. When the pump pushes fluid into a compartment inside the glider, the glider becomes denser than seawater and less buoyant, thus causing it to sink. If the fluid is pumped instead into a bladder on the outside of the glider, the glider becomes less dense than seawater—and therefore more buoyant—ultimately rising to the surface. As in airborne gliders, wings convert this vertical motion into horizontal motion.

Thompson and his colleagues from the University of East Anglia dropped their gliders into the ocean off the coast of the Antarctic Peninsula in January 2012; the robotic vehicles then spent the next two months moving up and down through the water column—diving a kilometer below the surface of the water and back up again every few hours—exploring the Weddell Sea off the coast of Antarctica. As the gliders traveled, they collected temperature and salinity data at different locations and depths of the sea.

The glider's up and down capability is important for studying ocean stratification, or how water characteristics, such as density, change with depth, Thompson says. "If it was only temperature that determined density, you'd always have warm water at the top and cold water at the bottom. But in the ocean you also have to factor in salinity; the higher the salinity is in the water, the more dense that water is and the more likely it is to sink to the bottom," he says.

In Antarctica the combined effects of temperature and salinity create an interesting situation, in which the warmest water is not on top, but actually sandwiched in the middle layers of the water column. "That's an additional problem in understanding the heat transport in this region," he adds. You can't just take measurements at the surface, he says. "You actually need to be taking a look at that very warm temperature layer, which happens to sit in the middle of the water column. That's the layer that is actually moving toward the ice shelf."
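
A toy version of that temperature-versus-salinity competition, using a simplified linear equation of state with assumed coefficients rather than the full seawater formula, shows how a warmer but saltier layer can end up sitting below colder, fresher surface water.

```python
# Toy illustration (simplified linear equation of state with assumed
# coefficients, not the full seawater formula): near-freezing water expands
# very little with temperature, so salinity dominates density, and a
# warmer-but-saltier layer can sit below colder, fresher surface water.
RHO_0 = 1027.0     # reference density, kg/m^3
ALPHA = 5e-5       # thermal expansion coefficient per deg C (small for cold water)
BETA = 7.6e-4      # haline contraction coefficient per salinity unit
T_REF, S_REF = 0.0, 34.5

def density(temp_c, salinity):
    """Approximate seawater density from temperature and salinity."""
    return RHO_0 * (1 - ALPHA * (temp_c - T_REF) + BETA * (salinity - S_REF))

# Hypothetical, representative water masses:
print(f"cold, fresher surface water:   {density(-1.5, 34.0):.2f} kg/m^3")
print(f"warmer, saltier mid-depth water: {density(0.8, 34.7):.2f} kg/m^3")
# The warmer layer is denser, so it sits below the surface water rather than
# on top -- the sandwiched warm layer described in the article.
```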

The results from the gliders revealed that the heat was actually coming from a less predictable source: eddies, swirling underwater storms that are caused by ocean currents.

"Eddies are instabilities that are caused by ocean currents, and we often compare their effect on the ocean to putting a spoon in your coffee," Thompson says. "If you pour milk in your coffee and then you stir it with a spoon, the spoon enhances your ability to mix the milk into the coffee and that is what these eddies do. They are very good at mixing heat and other properties."

Because the gliders could dive and surface every few hours and remain at sea for months, they were able to see these eddies in action—something that ships and satellites had previously been unable to capture.

"Ocean currents are variable, and so if you go just one time, what you measure might not be what the current looks like a day later. It's sort of like the weather—you know it's going to be warm in the summer and cold in the winter, but on a day-to-day basis it could be cold in the summer just because a storm came in," Thompson says. "Eddies do the same thing in the ocean, so unless you understand how the temperature of currents is changing from day to day—information we can actually collect with the gliders—then you can't understand what the long-term heat transport is."

In future work, Thompson plans to couple meteorological data with the data collected from his gliders. In December, the team will use ocean gliders to study a rough patch of ocean between the southern tip of South America and Antarctica, called the Drake Passage, as a surface robot, called a Waveglider, collects information from the surface of the water. "With the Waveglider, we can measure not just the ocean properties, but atmospheric properties as well, such as wind speed and wind direction. So we'll get to actually see what's happening at the air-sea interface."

In the Drake Passage, deep waters from the Southern Ocean are "ventilated"—or emerge at the surface—a phenomenon specific to this region of the ocean. That makes the location important for understanding the exchange of carbon dioxide between the atmosphere and the ocean. "The Southern Ocean is the window through which deep waters can actually come up to 'see' the atmosphere"—and it's also a window for oceanographers to more easily see the deep ocean, he says. "It's a very special place for many reasons."

The work with ocean gliders was published in a paper titled "Eddy transport as a key component of the Antarctic overturning circulation." Other authors on the paper include Karen J. Heywood of the University of East Anglia, Sunke Schmidtko of GEOMAR Helmholtz Centre for Ocean Research Kiel, Germany, and Andrew Stewart, a former postdoctoral scholar at Caltech who is now at UCLA. Thompson's glider work was supported by an award from the National Science Foundation and the UK's Natural Environment Research Council; Stewart was supported by the President's and Director's Fund program at Caltech.


Unexpected Findings Change the Picture of Sulfur on the Early Earth

Scientists believe that until about 2.4 billion years ago there was little oxygen in the atmosphere—an idea that has important implications for the evolution of life on Earth. Evidence in support of this hypothesis comes from studies of sulfur isotopes preserved in the rock record. But the sulfur isotope story has been uncertain because of the lack of key information that has now been provided by a new analytical technique developed by a team of Caltech geologists and geochemists. The story that new information reveals, however, is not what most scientists had expected.

"Our new technique is 1,000 times more sensitive for making sulfur isotope measurements," says Jess Adkins, professor of geochemistry and global environmental science at Caltech. "We used it to make measurements of sulfate groups dissolved in carbonate minerals deposited in the ocean more than 2.4 billion years ago, and those measurements show that we have been thinking about this part of the sulfur cycle and sulfur isotopes incorrectly."

The team describes their results in the November 7 issue of the journal Science. The lead author on the paper is Guillaume Paris, an assistant research scientist at Caltech.

Nearly 15 years ago, a team of geochemists led by researchers at UC San Diego, analyzing the abundances of the stable isotopes of sulfur, discovered something peculiar about rocks from the Archean era, an interval that lasted from 3.8 billion to about 2.4 billion years ago.

When sulfur is involved in a reaction—such as microbial sulfate reduction, a way for microbes to eat organic compounds in the absence of oxygen—its isotopes are usually fractionated, or separated, from one another in proportion to their differences in mass. That is, 34S gets fractionated from 32S about twice as much as 33S gets fractionated from 32S. This process is called mass-dependent fractionation, and scientists have found that it has dominated virtually all sulfur processes operating on Earth's surface for the last 2.4 billion years.

However, in older rocks from the Archean era (i.e., older than 2.4 billion years), the relative abundances of sulfur isotopes do not follow the same mass-related pattern, but instead show relative enrichments or deficiencies of 33S relative to 34S. They are said to be the product of mass-independent fractionation (MIF).
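
For readers who want the bookkeeping, geochemists usually express this with delta notation. Shown here is the standard approximate, linearized form (the published papers use the exact power-law version): mass-dependent processes keep the two ratios on a single line, and the mass-independent anomaly is the deviation from that line.

```latex
% Mass-dependent fractionation keeps the isotope ratios along one line:
\delta^{33}\mathrm{S} \approx 0.515\,\delta^{34}\mathrm{S}

% The mass-independent anomaly is the deviation from that line:
\Delta^{33}\mathrm{S} = \delta^{33}\mathrm{S} - 0.515\,\delta^{34}\mathrm{S}
```

A nonzero Δ33S is the signature of mass-independent fractionation, which is why the measurements described below are reported as Δ33S anomalies.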

The widely accepted explanation for the occurrence of MIF is as follows. Billions of years ago, volcanism was extremely active on Earth, and all those volcanoes spewed sulfur dioxide high into the atmosphere. At that time, oxygen existed at very low levels in the atmosphere, and therefore ozone, which is produced when ultraviolet radiation strikes oxygen, was also lacking. Today, ozone prevents ultraviolet light from reaching sulfur dioxide with the energy needed to fractionate sulfur, but on the early Earth, that was not the case, and MIF is the result. Researchers have been able to reproduce this effect in the lab by shining lasers onto sulfur dioxide and producing MIF.

Geologists have also measured the sulfur isotopic composition of sedimentary rocks dating to the Archean era, and found that sulfides—sulfur-bearing compounds such as pyrite (FeS2)—include more 33S than would be expected based on normal mass-dependent processes. But if those minerals are enriched in 33S, other minerals must be correspondingly lacking in the isotope. According to the leading hypothesis, those 33S-deficient minerals should be sulfates—oxidized sulfur-bearing compounds—that were deposited in the Archean ocean.

"That idea was put forward on the basis of experiment. To test the hypothesis, you'd need to check the isotope ratios in sulfate salts (minerals such as gypsum), but those don't really exist in the Archean rock record since there was very little oxygen around," explains Woody Fischer, professor of geobiology at Caltech and a coauthor on the new paper. "But there are trace amounts of sulfate that got trapped in carbonate minerals in seawater."

However, because those sulfates are present in such small amounts, no one had been able to measure their isotopic composition well. Using a device known as a multicollector inductively coupled plasma mass spectrometer to precisely measure multiple sulfur isotopes, Adkins and his colleague Alex Sessions, a professor of geobiology, developed a method that is sensitive enough to measure the isotopic composition of about 10 nanomoles of sulfate in just a few tens of milligrams of carbonate material.

The authors used the method to measure the sulfate content of carbonates from an ancient carbonate platform preserved in present-day South Africa, an ancient version of the depositional environments found in the Bahamas today. Analyzing the samples, which spanned 70 million years and a variety of marine environments, the researchers found exactly the opposite of what had been predicted: the sulfates were actually enriched in 33S rather than lacking in it.

"Now, finally, we're looking at this sulfur cycle and the sulfur isotopes correctly," Adkins says.

What does this mean for the atmospheric conditions of the early Earth? "Our findings underscore that the oxygen concentrations in the early atmosphere could have been incredibly low," Fischer says.

Knowledge of sulfate isotopes changes how we understand the role of biology in the sulfur cycle, he adds. Indeed, the fact that the sulfates from this time period have the same isotopic composition as sulfide minerals suggests that the sulfides may be the product of microbial processes that reduced seawater sulfate to sulfide (which later precipitated in sediments in the form of pyrite). Previously, scientists thought that all of the isotope fractionation could be explained by inorganic processes alone.

In a second paper also in the November 7 issue of Science, Paris, Adkins, Sessions, and colleagues from a number of institutions around the world report on related work in which they measured the sulfates in Indonesia's Lake Matano, a low-sulfate analog of the Archean ocean.

At about 100 meters depth, the bacterial communities in Lake Matano begin consuming sulfate rather than oxygen—the fuel most microbial communities rely on—yielding sulfide. The researchers measured the sulfur isotopes within the sulfates and sulfides in the lake water and sediments and found that despite the low concentrations of sulfate, a lot of mass-dependent fractionation was taking place. The researchers used the data to build a model of the lake's sulfur cycle that could produce the measured fractionation, and when they applied their model to constrain the range of concentrations of sulfate in the Archean ocean, they found that the concentration was likely less than 2.5 micromolar, 10,000 times lower than in the modern ocean.

"At such low concentration, all the isotopic variability starts to fit," says Adkins. "With these two papers, we were able to come at the same problem in two ways—by measuring the rocks dating from the Archean and by looking at a model system today that doesn't have much sulfate—and they point toward the same answer: the sulfate concentration was very low in the Archean ocean."

Samuel M. Webb of the Stanford Synchrotron Radiation Lightsource is also an author on the paper, "Neoarchean carbonate-associated sulfate records positive Δ33S anomalies." The work was supported by funding from the National Science Foundation's Division of Earth Sciences, the Henry and Camille Dreyfus Foundation's Postdoctoral Program in Environmental Chemistry, and the David and Lucile Packard Foundation.

Paris is also a co-lead author on the second paper, "Sulfate was a trace constituent of Archean seawater." Additional authors on that paper are Sean Crowe and CarriAyne Jones of the University of British Columbia and the University of Southern Denmark; Sergei Katsev of the University of Minnesota Duluth; Sang-Tae Kim of McMaster University; Aubrey Zerkle of the University of St. Andrews; Sulung Nomosatryo of the Indonesian Institute of Sciences; David Fowle of the University of Kansas; James Farquhar of the University of Maryland, College Park; and Donald Canfield of the University of Southern Denmark. Funding was provided by an Agouron Institute Geobiology Fellowship and a Natural Sciences and Engineering Research Council of Canada Postdoctoral Fellowship, as well as by the Danish National Research Foundation and the European Research Council.


Caltech Rocket Experiment Finds Surprising Cosmic Light

Using an experiment carried into space on a NASA suborbital rocket, astronomers at Caltech and their colleagues have detected a diffuse cosmic glow that appears to represent more light than that produced by known galaxies in the universe.

The researchers, including Caltech Professor of Physics Jamie Bock and Caltech Senior Postdoctoral Fellow Michael Zemcov, say that the best explanation is that the cosmic light—described in a paper published November 7 in the journal Science—originates from stars that were stripped away from their parent galaxies and flung out into space as those galaxies collided and merged with other galaxies.

The discovery suggests that many such previously undetected stars permeate what had been thought to be dark spaces between galaxies, forming an interconnected sea of stars. "Measuring such large fluctuations surprised us, but we carried out many tests to show the results are reliable," says Zemcov, who led the study.

Although they cannot be seen individually, "the total light produced by these stray stars is about equal to the background light we get from counting up individual galaxies," says Bock, also a senior research scientist at JPL. Bock is the principal investigator of the rocket project, called the Cosmic Infrared Background Experiment, or CIBER, which originated at Caltech and flew on four rocket flights from 2009 through 2013.

In earlier studies, NASA's Spitzer Space Telescope, which sees the universe at longer wavelengths, had observed a splotchy pattern of infrared light called the cosmic infrared background. The splotches are much bigger than individual galaxies. "We are measuring structures that are grand on a cosmic scale," says Zemcov, "and these sizes are associated with galaxies bunching together on a large-scale pattern." Initially some researchers proposed that this light came from the very first galaxies to form and ignite stars after the Big Bang. Others, however, have argued the light originated from stars stripped from galaxies in more recent times.

CIBER was designed to help settle the debate. "CIBER was born as a conversation with Asantha Cooray, a theoretical cosmologist at UC Irvine and at the time a postdoc at Caltech with [former professor] Marc Kamionkowski," Bock explains. "Asantha developed an idea for studying galaxies by measuring their large-scale structure. Galaxies form in dark-matter halos, which are over-dense regions initially seeded in the early universe by inflation. Furthermore, galaxies not only start out in these halos, they tend to cluster together as well. Asantha had the brilliant idea to measure this large-scale structure directly from maps. Experimentally, it is much easier for us to make a map by taking a wide-field picture with a small camera, than going through and measuring faint galaxies one by one with a large telescope." 

Cooray originally developed this approach for the longer infrared wavelengths observed by the European Space Agency's Herschel Space Observatory. "With its 3.5-meter diameter mirror, Herschel is too small to count up all the galaxies that make the infrared background light, so he instead obtained this information from the spatial structure in the map," Bock says. 

"Meanwhile, I had been working on near-infrared rocket experiments, and was interested in new ways to use this unique idea to study the extragalactic background," he says. The extragalactic infrared background represents all of the infrared light from all of the sources in the universe, "and there were some hints we didn't know where it was all coming from."

In other words, if you calculate the light produced by individual galaxies, you would find they made less than the background light. "One could try and measure the total sky brightness directly," Bock says, "but the problem is that the foreground 'Zodiacal light,' due to dust in the solar system reflecting light from the sun, is so bright that it is hard to subtract with enough accuracy to measure the extragalactic background. So we put these two ideas together, applying Asantha's mapping approach to new wavelengths, and decided that the best way to get at the extragalactic background was to measure spatial fluctuations on angular scales around a degree. That led to CIBER."
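
To give a sense of what "measuring spatial fluctuations" means in practice, here is a minimal, illustrative sketch—our own toy example, not the CIBER analysis pipeline—of how one turns a wide-field sky map into a fluctuation power spectrum.

```python
# Toy illustration (not the CIBER pipeline): Fourier transform a sky map and
# average the power in rings of constant angular frequency to get a
# fluctuation power spectrum.
import numpy as np

def fluctuation_spectrum(sky_map, pixel_size_deg, n_bins=15):
    """Azimuthally averaged power spectrum of a square map (power vs. cycles/degree)."""
    fluct = sky_map - sky_map.mean()                        # keep only fluctuations about the mean
    power = np.abs(np.fft.fftshift(np.fft.fft2(fluct)))**2
    freq = np.fft.fftshift(np.fft.fftfreq(sky_map.shape[0], d=pixel_size_deg))
    fx, fy = np.meshgrid(freq, freq)
    radius = np.hypot(fx, fy)                               # angular frequency of each Fourier mode
    edges = np.linspace(0, radius.max(), n_bins + 1)
    idx = np.digitize(radius.ravel(), edges)
    return np.array([power.ravel()[idx == i].mean() if np.any(idx == i) else 0.0
                     for i in range(1, n_bins + 1)])

# Toy usage: a 128 x 128 map of pure noise with ~0.03-degree pixels.
print(fluctuation_spectrum(np.random.randn(128, 128), 0.03))
```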

The CIBER experiment consists of four instruments, including two spectrometers to determine the brightness of Zodiacal light and measure the cosmic infrared background directly. The measurements in the recent publication were made with the other two instruments, wide-field cameras that search for fluctuations in two wavelengths of near-infrared light. Earth's upper atmosphere glows brightly at the CIBER wavelengths. But the measurements can be done in space—avoiding that glow—in just the short amount of time that a suborbital rocket flies above the atmosphere, before descending again back toward the planet.

CIBER flew four missions in all; the paper includes results from the second and third of CIBER's flights, launched in 2010 and 2012 from White Sands Missile Range in New Mexico and recovered afterward by parachute. In the flights, the researchers observed the same part of the sky at a different time of year, and swapped the detector arrays as a crosscheck against data artifacts created by the sensors. "This series of flights was quite helpful in developing complete confidence in the results," says Zemcov. "For the final flight, we decided to get more time above the atmosphere and went with a non-recovered flight into the Atlantic Ocean on a four-stage rocket." (The data from the fourth flight will be discussed in a future paper.)

Based on data from these two launches, the researchers found fluctuations, but they had to go through a careful process to identify and remove local sources, such as the instrument, as well as emissions from the solar system, stars, scattered starlight in the Milky Way, and known galaxies. What is left behind is a splotchy pattern representing fluctuations in the remaining infrared background light. Comparing data from multiple rocket launches, they saw the identical signal. That signal also is observed by comparing CIBER and Spitzer images of the same region of sky. Finally, the team measured the color of the fluctuations by comparing the CIBER results to Spitzer measurements at longer wavelengths. The result is a spectrum with a very blue color, brightest in the CIBER bands.

"CIBER tells us a couple key facts," Zemcov explains. "The fluctuations seem to be too bright to be coming from the first galaxies. You have to burn a large quantity of hydrogen into helium to get that much light, then you have to hide the evidence, because we don't see enough heavy elements made by stellar nucleosynthesis"—the process, occurring within stars, by which heavier elements are created from the fusion of lighter ones—"which means these elements would have to disappear into black holes." 

"The color is also too blue," he says. "First galaxies should appear redder due to their light being absorbed by hydrogen, and we do not see any evidence for such an absorption feature."

In short, Zemcov says, "although we designed our experiment to search for emission from first stars and galaxies, that explanation doesn't fit our data very well. The best interpretation is that we are seeing light from stars outside of galaxies but in the same dark matter halos. The stars have been stripped from their parent galaxies by gravitational interactions—which we know happens from images of interacting galaxies—and flung out to large distances."

The model, Bock admits, "isn't perfect. In fact, the color still isn't quite blue enough to match the data. But even so, the brightness of the fluctuations implies this signal is important in a cosmological sense, as we are tracing a large amount of cosmic light production." 

Future experiments could test whether stray stars are indeed the source of the infrared cosmic glow, the researchers say. If the stars were tossed out from their parent galaxies, they should still be located in the same vicinity. The CIBER team is working on better measurements using more infrared colors to learn how the stripping of stars happened over cosmic history.

In addition to Bock, Zemcov, and Cooray, other coauthors of the paper, "On the Origin of Near-Infrared Extragalactic Background Light Anisotropy," are Joseph Smidt of Los Alamos National Laboratory; Toshiaki Arai, Toshio Matsumoto, Shuji Matsuura, and Takehiko Wada of the Japan Aerospace Exploration Agency; Yan Gong of UC Irvine; Min Gyu Kim of Seoul National University; Phillip Korngut, a postdoctoral scholar at Caltech; Anson Lam of UCLA; Dae Hee Lee and Uk Won Nam of the Korea Astronomy and Space Science Institute (KASI); Gael Roudier of JPL; and Kohji Tsumura of Tohoku University. The work was supported by NASA, with initial support provided by JPL's Director's Research and Development Fund. Japanese participation in CIBER was supported by the Japan Society for the Promotion of Science and the Ministry of Education, Culture, Sports, Science and Technology. Korean participation in CIBER was supported by KASI. 


Figuring Out How We Get the Nitrogen We Need

Caltech Chemists Image Nitrogenase's Active Site At Work

Nitrogen is an essential component of all living systems, playing important roles in everything from proteins and nucleic acids to vitamins. It is the most abundant element in Earth's atmosphere and is literally all around us, but in its gaseous state, N2, it is inert and useless to most organisms. Something has to convert, or "fix," that nitrogen into a metabolically usable form, such as ammonia. Until about 100 years ago, when an industrial-scale technique called the Haber-Bosch process was developed, bacteria were almost wholly responsible for nitrogen fixation on Earth (lightning and volcanoes fix a small amount of nitrogen). Bacteria accomplish this important chemical conversion using an enzyme called nitrogenase.

"For decades, we have been trying to understand how nitrogenase can interact with this inert gas and carry out this transformation," says Doug Rees, Caltech's Roscoe Gilkey Dickinson Professor of Chemistry and an investigator with the Howard Hughes Medical Institute (HHMI). To fix nitrogen in the laboratory, the Haber-Bosch process requires extremely high temperatures and pressures, yet bacteria are able to complete the conversion under physiological conditions. "We'd love to understand how they do this," he says. "It's a great chemical mystery."

But cracking that mystery has proven extremely difficult using standard chemical techniques. We know that the enzyme is made up of two proteins, the molybdenum iron (MoFe-) protein and the iron (Fe-) protein, which are both required for nitrogen fixation. We also know that the MoFe-protein contains two metal centers and that one of those, the FeMo-cofactor (also known as "the cofactor"), sits at the active site, where the nitrogen binds and the chemical transformation takes place.

In 1992, Rees and his graduate student, Jongsun Kim (PhD '93), were the first to determine the structure of the MoFe-protein using X-ray crystallography.

"I think that there was a feeling that once you solved the structure, you'd understand how it worked," Rees says. "What we can say 22 years later is that was certainly not the case."

The dream would be to have atmospheric nitrogen bind to the FeMo-cofactor and to stop time so that chemists could sneak a peek at the chemical structure of the protein at that intermediate point. Because it is not possible to freeze time, and because the reaction proceeds too quickly to study by standard crystallographic methods, researchers have come up with an alternative. Chemists have been trying to get carbon monoxide—an inhibitor that halts the enzyme's activity but also closely mimics the structure and electronic makeup of N2—to bind to the cofactor, and then to crystallize the product quickly enough that the structure can be analyzed using X-ray crystallography.

Unfortunately, the cofactor has stubbornly refused to cooperate. "We've demonstrated more times than we'd like that the form of this protein as isolated doesn't bind substrates," explains Rees. "Usually if you want to know how something binds to a protein, you just add it to your protein and study the crystal structure with X-ray crystallography. But we just couldn't get anything bound to this cofactor."

But in order for the cofactor to exist in a form that would bind to a substrate or an inhibitor, several other conditions must be met—for example, the Fe-protein has to be there. In addition, ATP—a molecule that provides energy for many life processes—must be present, along with yet another enzyme system that regenerates the ATP consumed in the reaction and a source of electrons. So although the aim in crystallography is typically to isolate a purified protein, the chemists had to muddy their samples by adding all these other needed components.

After joining Rees's group as a postdoctoral scholar in 2012, Thomas Spatzal spent months working on this problem, tweaking the method he used for trying to get the carbon monoxide to bind to the cofactor and for crystallizing the product. He adjusted parameters such as the protein concentrations, the temperature under which the samples were prepared, and the amount of time he allowed for the crystals to form. Every week, he sent a new set of crystals, frozen with liquid nitrogen, to be analyzed on an X-ray beamline at the Stanford Synchrotron Radiation Lightsource (SSRL) constructed as part of Caltech's Molecular Observatory with support from the Gordon and Betty Moore Foundation. And every week he worked up the data that came back and looked to see if any of the carbon monoxide bound to the active site.

"People have been seeing the resting state of the active site, where nothing was bound, for years," Spatzal says. "It's always the same thing. It never looks any different."

But on a recent Friday morning, Spatzal processed the latest batch of data, and lo and behold, he finally saw what he had been looking for.

"There was a moment where I looked at it and said, 'Hold on. Something looks different there,'" says Spatzal. "I wondered, 'Am I crazy?' You just don't expect it at first."

What he saw was a first—a crystal structure revealing carbon monoxide bound to the FeMo-cofactor. Spatzal, Rees, and their colleagues describe that structure and their methodology in the September 26 issue of the journal Science.

Spatzal figured out a way to optimize the crystallization process by using tiny crystal seeds to accelerate the rate of crystal growth and conducting all manipulations in the presence of carbon monoxide, allowing him to grow nice crystals of the MoFe-protein and then to see where the carbon monoxide was bound to the cofactor.

What he found was surprising. The carbon monoxide took the place of one of the sulfur atoms in the cofactor's original structure, bridging two of its iron atoms. Many people had expected that the carbon monoxide would bind differently, so that it would stick out, adding extra density to the structure. But because it displaced the sulfur, the cofactor only took on a slightly different arrangement of atoms.

In addition, Spatzal showed that when the carbon monoxide is removed, the sulfur can reattach, reactivating the cofactor so that it can once again fix nitrogen.

"As astonishing as this structure was—that the carbon monoxide replaced the sulfur—I think it's even more astonishing that Thomas was able to establish that the cofactor could be reactivated," Rees says. "I don't think anyone had imagined that you would get this sort of rearrangement of the cofactor as part of the interaction."

"You could imagine that if you put an inhibitor on a system, it could damage the metal center and inactivate the protein so that it would no longer do its job. The fact that we can get it back into an active state means that it's not permanently damaged, and that has physiological meaning in terms of how nitrogen fixation occurs in nature," says Spatzal.

The researchers note that this result would still be a long way off without the X-ray crystallography resources of Caltech's Molecular Observatory, which has abundant dedicated time on a beamline at SSRL. "We were really fortunate that the Moore Foundation funded this access to the beamline," says Rees. "That was really essential for this project because it took a lot of optimization to work everything out. We were able to keep regularly sending samples and right away get feedback about how things were working. It's an unbelievable resource."

Additional Caltech authors on the paper, "Ligand binding to the FeMo-cofactor: Structures of CO-bound and reactivated nitrogenase," are Kathryn A. Perez, a graduate student, and James Howard, a visiting associate who is also affiliated with the University of Minnesota, where Rees was a postdoc. Oliver Einsle of the Institut für Biochemie at the Albert-Ludwigs-Universität Freiburg, in Germany, was a postdoc with Rees as well as Spatzal's thesis advisor and is a coauthor on the paper. Spatzal is an associate with HHMI.

This work was supported by grants from the National Institutes of Health, the Deutsche Forschungsgemeinschaft, and the European Research Council N-ABLE project. The Molecular Observatory is supported by the Gordon and Betty Moore Foundation, the Beckman Institute, and the Sanofi-Aventis Bioengineering Research Program at Caltech. Microbiology research at Caltech is supported by the Center for Environmental Microbial Interactions.

Writer: Kimm Fesenmaier

Sweeping Air Devices For Greener Planes

The large amount of jet fuel required to fly an airplane from point A to point B can have negative impacts on the environment and—as higher fuel costs contribute to rising ticket prices—a traveler's wallet. With funding from NASA and the Boeing Company, engineers from the Division of Engineering and Applied Science at Caltech and their collaborators from the University of Arizona have developed a device that lets planes fly with much smaller tails, reducing the planes' overall size and weight, thus increasing fuel efficiency.

On October 8, the researchers—including Emilio Graff, research project manager in aerospace at Caltech and a leader on the project—were presented with a NASA Group Achievement Award "for exceptional achievement executing a full-scale wind-tunnel test, proving the flight feasibility of active flow control."

An airplane's tail forms a critical part of the control system that helps steer the plane. During flight, air rushes around the vertical tail with great force and is deflected by the tail's rudder—a movable flap at the rear of the tail that can steer the plane by angling air to the left or right. By moving the rudder left or right, a pilot can move the air in one direction or the other, helping to keep the plane flying straight during a strong crosswind.

During the high speeds of flight, the air flow around the tail is so strong that the rudder can control the plane's path with minimal movement. However, during the lower speeds of takeoff and landing, larger rudder deflections are required to maneuver the plane. And in the case of engine failure in a multiengine airplane, the vertical tail must generate enough force to keep the plane going straight by turning "against" the working engine. Airplane manufacturers deal with this challenge by fitting planes with very large vertical tails that can deflect enough air and generate enough force to control the plane—even at low speeds.
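
A rough sketch with assumed numbers of our own, not Boeing or NASA figures, shows why the slow, engine-out case sizes the tail: the side force a tail can generate scales with the square of airspeed.

```python
# Rough illustration with assumed numbers (not Boeing or NASA figures):
# a vertical tail's side force scales with dynamic pressure,
#   F = 0.5 * rho * v**2 * S * C_y,
# so a tail sized to handle an engine-out yaw at takeoff speed is much larger
# than anything needed once the plane is moving fast. (Air density is held
# fixed here, ignoring the drop in density at cruise altitude.)
RHO = 1.225        # sea-level air density, kg/m^3
C_Y = 0.8          # assumed side-force coefficient with full rudder
TAIL_AREA = 35.0   # m^2 -- an assumed vertical-tail area

def tail_side_force_kn(speed_ms):
    """Approximate tail side force in kilonewtons at a given airspeed."""
    return 0.5 * RHO * speed_ms**2 * TAIL_AREA * C_Y / 1000.0

print(f"takeoff speed (~70 m/s):  {tail_side_force_kn(70):.0f} kN")
print(f"high speed   (~230 m/s): {tail_side_force_kn(230):.0f} kN")
# Roughly a tenfold difference: the tail is oversized for most of the flight,
# which is why boosting its low-speed effectiveness lets designers shrink it.
```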

"But this means that the planes have a tail that's too big 99 percent of the time," says Emilio Graff, research project manager in aerospace at Caltech and a leader on the project, "because you only need a tail that big if you lose an engine during takeoff or landing. Imagine if the only way you could have airbags in your car was to tow them in a big trailer behind your car, just in case there was an accident. It ends up sucking up a lot of fuel."

The system—designed by Graff and his colleagues in the laboratory of Mory Gharib, Hans W. Liepmann Professor of Aeronautics and Bioinspired Engineering—would allow airplanes to be designed with smaller tails by helping to increase the tail's steering effect at low speeds. The work was done in collaboration with Israel Wygnanski, a professor at the University of Arizona.

In their new approach, the researchers installed air-blowing devices called sweeping jet actuators under the outer skin of the tail along the tail's vertical length. The sweeping jet actuators deliver a strong, steady burst of sweeping air just along the rudder, equivalent to the amount of airflow that would normally be encountered by the tail and rudder at higher speeds. The engineers hypothesized that with the sweeping jets turned on, a smaller tail and rudder could straighten the path of the airplane, even at low speeds.

Graff says that, using these devices, airplane manufacturers could reduce the size of airplane tails by 20 percent, only needing to activate the sweeping jet actuators during the low speeds of takeoff and landing. "That means that most of the time when you're flying around normally, you're saving gas because you have a smaller, lighter tail. So even if this system itself uses a lot of energy, it's only on in emergencies," he says. "When you take off or land, the air jets will be on—just in case an engine fails. But on a 12-hour flight, if you're only using the system for 30 minutes, you're still saving gas during 11 hours and 30 minutes."
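
To make the duty-cycle argument concrete, here is a minimal back-of-envelope sketch in Python. Every number in it—the fuel penalty of an oversized tail, the fuel-equivalent cost of running the jets, the flight length—is a hypothetical placeholder chosen only for illustration, not a figure from the researchers.

    # Back-of-envelope sketch of the duty-cycle trade-off Graff describes.
    # All numbers below are hypothetical placeholders, for illustration only.
    flight_hours = 12.0                  # total flight time
    jets_on_hours = 0.5                  # jets run only during takeoff and landing
    tail_penalty_kg_per_h = 40.0         # assumed extra fuel burned per hour by an oversized tail
    jet_cost_kg_per_h = 100.0            # assumed fuel-equivalent cost of running the jets

    fuel_saved = tail_penalty_kg_per_h * flight_hours   # saved by flying a smaller, lighter tail
    fuel_spent = jet_cost_kg_per_h * jets_on_hours      # spent running the jets briefly
    print(f"net fuel saving: {fuel_saved - fuel_spent:.0f} kg per flight")

With these made-up inputs, the smaller tail saves fuel for the entire flight while the jets cost fuel for only half an hour, so the net effect is a saving—the same logic Graff lays out above.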

The fuel savings come not only from reduced drag due to the smaller size, but also from weight savings and structural advantages from having a shorter tail, Graff adds.

The researchers first tested this hypothesis in the approximately five-by-six-foot Lucas Wind Tunnel at Caltech, recording the effect of sweeping jet actuators on a small model—only 15 percent of the size of an actual airplane tail. Because the jets of air created by the device move back and forth, "sweeping" the air over the length of the tail rather than blasting a single, linear burst of air, the researchers discovered that they could increase air flow over the entire tail with just six of the sweeping jets. On the small-scale model, these six jets boosted the effectiveness of the rudder by over 20 percent.

Encouraged by the favorable results of this preliminary experiment, and as part of NASA's Environmentally Responsible Aviation program, Graff and his colleagues scaled the system up to test the effects of sweeping jet actuators on a full-sized airliner tail. However, since such tails are nearly 27 feet tall, the engineers had to move this stage of their experiment off campus, to the National Full-Scale Aerodynamics Complex at Moffett Field, California—home of the world's two largest wind tunnels.

After machining sized-up sweeping jet actuators at Caltech, the multi-institutional team, which also included engineers from Boeing Research and Technology and NASA's Langley Research Center, installed the devices on a refurbished Boeing 757 tail, found at an airplane parts salvage yard. The large wind tunnel allowed the researchers to simulate wind conditions that realistically would be experienced during takeoff and landing. Data from the full-scale test confirmed that sweeping jet actuators could sufficiently increase the air flow around the rudder to steer the plane in the event of an engine failure.

The technique used by sweeping jet actuators—called flow control—is not new; it has previously been used for quick takeoffs and landings in military applications, Graff says. But those existing systems are not energy-efficient, he adds, "and if you need a third engine to power the system, then you may as well use it to fly the plane." The system designed by Graff and his colleagues is small and efficient enough to be powered by an airliner's auxiliary power unit—the engine that powers the cabin's air conditioning and lights at the gate. "We were able to prove that a system like this can work at the scale of a commercial airliner, without having to add an extra engine," Graff says.

For the next phase of the project, collaborators at Boeing will test the sweeping jet actuators on their Boeing ecoDemonstrator 757, a plane used for testing innovations that could improve the environmental performance of their aircraft.

These findings could one day help Boeing and other manufacturers produce "greener" planes. However, Graff notes, there are still kinks to work out—for example, as currently designed, the sweeping jets could be noisy for passengers—and the adoption of any new features on an aircraft can be a lengthy process. But once adopted, the payoffs could be huge—and improving the tail is not the only goal, Graff says.

"This is only the beginning. The tail is a 'low risk' surface; modifying it puts engineers at ease compared to, for example, modifying wings," he says. "But the data shows that similar systems could be applied to wings to increase the cruise speed of airplanes and allow some maneuvers to be achieved without moving parts.

"I would be surprised if this ends up in the next line of airplanes—since the new planes are already probably years into the design stage—but some version of this device could be adopted in the near future," he says. And the researchers estimate that if all commercial airplanes were fitted with this device and used it for one year, the fuel savings would be the equivalent of taking a year's worth of traffic off of Southern California's notoriously crowded 405 freeway—a worthy goal.

The sweeping jet actuator was developed as part of NASA's Environmentally Responsible Aviation (ERA) project, which aims to reduce the impact of aviation on the environment.

Getting To Know Super-Earths

"If you have a coin and flip it just once, what does that tell you about the odds of heads versus tails?" asks Heather Knutson, assistant professor of planetary science at Caltech. "It tells you almost nothing. It's the same with planetary systems," she says.

For as long as astronomers have been looking to the skies, we have had just one planetary system—our own—to study in depth. That means we have only gotten to know a handful of possible outcomes of the planet formation process, and we cannot say much about whether the features observed in our solar system are common or rare when compared to planetary systems orbiting other stars.

That is beginning to change. NASA's Kepler spacecraft, which launched on a planet-hunting mission in 2009, searched one small patch of the sky and identified more than 4,000 candidate exoplanets—worlds orbiting stars other than our own sun. It was the first survey to provide a definitive look at the relative frequency of planets as a function of size. That is, to ask, 'How common are gas giant planets, like Jupiter, compared to planets that look a lot more like Earth?'

Kepler's results suggest that small planets are much more common than big ones. Interestingly, the most common planets are those that are just a bit larger than Earth but smaller than Neptune—the so-called super-Earths.

However, despite being common in our local corner of the galaxy, super-Earths have no counterpart in our own solar system. Our current observations tell us something about the sizes and orbits of these newly discovered worlds, but we have very little insight into their compositions.

"We are left with this situation where super-Earths appear to be the most common kind of exoplanet in the galaxy, but we don't know what they're made of," says Knutson.

There are a number of possibilities. A super-Earth could be just that: a bigger version of Earth—mostly rocky, with an atmosphere. Then again, it could be a mini-Neptune, with a large rock-ice core encapsulated in a thick envelope of hydrogen and helium. Or it could be a water world—a rocky core enveloped in a blanket of water and perhaps an atmosphere composed of steam (depending on the temperature of the planet).

"It's really interesting to think about these planets because they could have so many different compositions, and knowing their composition will tell us a lot about how planets form," Knutson says. For example, because planets in this size range acquire most of their mass by pulling in and incorporating solid material, water worlds initially must have formed far away from their parent stars, where temperatures were cold enough for water to freeze. Most of the super-Earths known today orbit very close to their host stars. If water-dominated super-Earths turn out to be common, it would indicate that most of these worlds did not form in their present locations but instead migrated in from more distant orbits.

In addition to thinking about how these worlds might have formed, Knutson and her students use space-based observatories like the Hubble and Spitzer Space Telescopes to learn more about the distant planets themselves. For example, the researchers analyze the starlight that filters through a planet's atmosphere as it passes in front of its star to learn about the composition of that atmosphere. Molecular species present in the planet's atmosphere absorb light at particular wavelengths, so by using Hubble and Spitzer to view the planet and its atmosphere at a number of different wavelengths, the researchers can determine which chemical compounds are present.
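
The measurement rests on a simple geometric idea: the fraction of starlight blocked during a transit is roughly (Rp/Rs)², and at wavelengths where a molecule in the atmosphere absorbs, the planet's effective radius grows by a few atmospheric scale heights, deepening the transit very slightly. The Python sketch below illustrates the size of that effect; the star, planet, and atmosphere parameters are illustrative hot-Jupiter-like assumptions, not values from any of the observations described here.

    import math

    # Illustrative transmission-spectroscopy estimate (toy parameters, not a real target).
    k_B, m_H = 1.381e-23, 1.661e-27      # Boltzmann constant, hydrogen mass (SI units)
    R_sun, R_jup = 6.957e8, 7.149e7      # meters

    R_star = 1.0 * R_sun                 # assumed Sun-like host star
    R_planet = 1.2 * R_jup               # assumed hot-Jupiter radius
    T, mu, g = 1200.0, 2.3, 10.0         # temperature (K), mean molecular weight, gravity (m/s^2)

    H = k_B * T / (mu * m_H * g)                  # atmospheric scale height
    depth = (R_planet / R_star) ** 2              # baseline transit depth
    extra = 2 * 5 * H * R_planet / R_star ** 2    # extra depth if absorption adds ~5 scale heights

    print(f"scale height ~ {H / 1e3:.0f} km")
    print(f"transit depth ~ {depth * 100:.2f}%, molecular features ~ {extra * 1e6:.0f} ppm")

For these assumed numbers the baseline transit depth is about 1.5 percent, while the molecular features amount to only a few hundred parts per million—which is why such sensitive space telescopes are needed.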

To date, nearly two dozen planets have been characterized with this technique. These observations have shown that the enormous gas giant exoplanets known as hot Jupiters have water, carbon monoxide, hydrogen, helium—and potentially carbon dioxide and methane—in their atmospheres.

However, right now super-Earths are the hot topic. Unfortunately, although hundreds of super-Earths have been found, only a few are close enough and orbiting bright enough stars for astronomers to study in this way using currently available telescopes.

The first super-Earth that the astronomical community targeted for atmospheric studies was GJ 1214b, in the constellation Ophiuchus. Based on its average density (determined from its mass and radius), it was clear from the start that the planet was not entirely rocky. However, its density could be equally well matched by either a primarily water composition or a Neptune-like composition with a rocky core surrounded by a thick gas envelope. Information about the atmosphere could help astronomers determine which one it was: a mini-Neptune's atmosphere should contain lots of molecular hydrogen, while a water world's atmosphere should be water dominated.
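
For context, the bulk density follows directly from a measured mass and radius: ρ = M / (4/3 π R³). A minimal sketch, using values close to published estimates for GJ 1214b (roughly 6.5 Earth masses and 2.7 Earth radii; treat them as approximate):

    import math

    # Bulk density from mass and radius; inputs are approximate literature values for GJ 1214b.
    M_earth, R_earth = 5.972e24, 6.371e6       # kg, m
    mass = 6.5 * M_earth
    radius = 2.7 * R_earth

    density = mass / (4.0 / 3.0 * math.pi * radius ** 3)   # kg/m^3
    print(f"bulk density ~ {density / 1000:.1f} g/cm^3 (Earth is ~5.5, liquid water is 1.0)")

The result, somewhere near 2 g/cm³, is far below the density of a rocky planet like Earth—which is why the planet cannot be entirely rocky, yet the number alone cannot distinguish a water world from a mini-Neptune.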

GJ 1214b has been a popular target for the Hubble Space Telescope since its discovery in 2009. Disappointingly, after a first Hubble campaign led by researchers at the Harvard-Smithsonian Center for Astrophysics, the spectrum came back featureless—there were no chemical signatures in the atmosphere. After a second set of more sensitive observations led by researchers at the University of Chicago returned the same result, it became clear that a high cloud deck must be masking the signature of absorption from the planet's atmosphere.

"It's exciting to know that there are clouds on the planet, but the clouds are getting in the way of what we actually wanted to know, which is what is this super-Earth made of?" explains Knutson.

Now Knutson's team has studied a second super-Earth: HD 97658b, in the constellation Leo. They report their findings in the current issue of The Astrophysical Journal. The researchers used Hubble to measure the decrease in light when the planet passed in front of its parent star over a range of infrared wavelengths in order to detect small changes caused by water vapor in the planet's atmosphere.

However, again the data came back featureless. One explanation is that HD 97658b is also enveloped in clouds. However, Knutson says, it is also possible that the planet has an atmosphere that lacks hydrogen. Such an atmosphere would be much more compact, making the telltale fingerprints of water vapor and other molecules very small and hard to detect. "Our data are not precise enough to tell whether it's clouds or the absence of hydrogen in the atmosphere that's causing the spectrum to be flat," she says. "This was just a quick first look to give us a rough idea of what the atmosphere looked like. Over the next year, we will use Hubble to observe this planet again in more detail. We hope those observations will provide a clear answer to the current mystery."
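
The hydrogen argument comes down to atmospheric scale height, H = kT / (μ m_H g): at the same temperature and gravity, an atmosphere dominated by heavy molecules such as water vapor is far more compact than a hydrogen-rich one, so its spectral features are proportionally smaller. A rough comparison, with temperature and gravity chosen purely for illustration:

    # Compare scale heights for a hydrogen-rich vs. a water-dominated atmosphere.
    # Temperature and gravity are illustrative guesses for a warm super-Earth.
    k_B, m_H = 1.381e-23, 1.661e-27      # Boltzmann constant, hydrogen mass (SI units)
    T, g = 700.0, 15.0                   # K, m/s^2 (assumed)

    H_hydrogen = k_B * T / (2.3 * m_H * g)    # mean molecular weight ~2.3 for an H2/He mix
    H_steam = k_B * T / (18.0 * m_H * g)      # ~18 for water vapor

    print(f"H2-rich: ~{H_hydrogen / 1e3:.0f} km, steam: ~{H_steam / 1e3:.0f} km "
          f"(features shrink by ~{H_hydrogen / H_steam:.0f}x)")

Under these assumptions a steam atmosphere is roughly eight times more compact than a hydrogen-rich one, which is why a flat spectrum can mean either clouds or a hydrogen-poor atmosphere.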

It appears that clouds are going to continue to pose a real challenge in studies of super-Earths, so Knutson and other researchers are working to understand the composition of the clouds around these planets and the conditions under which they form. The hope is that they will get to the point where they can predict which worlds will be shrouded in clouds. "If we can then target planets that we think should be cloud-free, that will help us make optimal use of Hubble's time," she says.

Looking to the future, Knutson says there is only one more known super-Earth that can be targeted for atmospheric studies with current telescopes. But new surveys, such as NASA's extended Kepler K2 mission and the Transiting Exoplanet Survey Satellite (TESS), slated for launch in 2017, should identify a large sample of new targets.

Of course, she says, astronomers would love to study exoplanets the size of Earth, but these worlds are just a bit too small and too difficult to observe with Hubble and Spitzer. NASA's James Webb Space Telescope, which is scheduled for launch in 2018, will provide the first opportunity to study more Earth-like worlds. "Super-Earths are at the edge of what we can study right now," Knutson says. "But super-Earths are a good consolation prize—they're interesting in their own right, and they give us a chance to explore new kinds of worlds with no analog in our own solar system."

Writer: Kimm Fesenmaier

Rock-Dwelling Microbes Remove Methane from Deep Sea

Methane-breathing microbes that inhabit rocky mounds on the seafloor could be preventing large volumes of the potent greenhouse gas from entering the oceans and reaching the atmosphere, according to a new study by Caltech researchers.

The rock-dwelling microbes, which are detailed in the Oct. 14 issue of Nature Communications, represent a previously unrecognized biological sink for methane and as a result could reshape scientists' understanding of where this greenhouse gas is being consumed in subseafloor habitats, says Professor of Geobiology Victoria Orphan, who led the study.

"Methane is a much more powerful greenhouse gas than carbon dioxide, so tracing its flow through the environment is really a priority for climate models and for understanding the carbon cycle," Orphan says.

Orphan's team has been studying methane-breathing marine microorganisms for nearly 20 years. The microbes they focus on survive without oxygen, relying instead on sulfate ions present in seawater for their energy needs. Previous work by Orphan's team helped show that the methane-breathing system is actually made up of two different kinds of microorganisms that work closely with one another. One of the partners, dubbed "ANME" for "ANaerobic MEthanotrophs," belongs to a type of ancient single-celled creatures called the archaea.

Through a mechanism that is still unclear, ANME work closely with bacteria to consume methane using sulfate from seawater. "Without this biological process, much of that methane would enter the water column, and the escape rates into the atmosphere would probably be quite a bit higher," says study first author Jeffrey Marlow, a geobiology graduate student in Orphan's lab.

Until now, however, the activity of ANME and their bacterial partners had been primarily studied in sediments located in cold seeps, areas on the ocean bottom where methane is escaping from subseafloor sources into the water above. The new study marks the first time they have been observed to oxidize methane inside carbonate mounds, huge rocky outcroppings of calcium carbonate that can rise hundreds of feet above the seafloor.

If the microbes are living inside the mounds themselves, then the distribution of methane consumption is significantly different from what was previously thought. "Methane-derived carbonates represent a large volume within many seep systems, and finding active methane-consuming archaea and bacteria in the interior of these carbonate rocks extends the known habitat for methane-consuming microorganisms beyond the relatively thin layer of sediment that may overlay a carbonate mound," Marlow says.

Orphan and her team detected evidence of methane-breathing microbes in carbonate rocks collected from three cold seeps around the world: one at a tectonic plate boundary near Costa Rica; another in the Eel River basin off the coast of northwestern California; and a third at Hydrate Ridge, off the Oregon coast. The team used manned and robotic submersibles to collect the rock samples from depths ranging from 2,000 feet to nearly half a mile below the surface.

Marlow has vivid memories of being a passenger in the submersible Alvin during one of those rock-retrieval missions. "As you sink down, the water outside your window goes from bright blue surface water to darker turquoise and navy blue and all these shades of blue that you didn't know existed until it gets completely dark," Marlow recalls. "And then you start seeing flashes of light because the vehicle is perturbing the water column and exciting bioluminescent organisms. When you finally get to the seafloor, Alvin's exterior lights turn on, and this crazy alien world is illuminated in front of you."

The carbonate mounds that the subs visited often serve as foundations for coral and sponges, and are home to rockfishes, clams, crabs, and other aquatic life. For their study, the team members gathered rock samples not only from carbonate mounds located within active cold seeps, where methane could be seen escaping from the seafloor into the water, but also from mounds that appeared to be dormant.

Once the carbonate rocks were collected, they were transported back to the surface and rushed into a cold room aboard a research ship. In the cold room, which was maintained at the temperature of the deep sea, the team cracked open the carbonates in order to gather material from their interiors. "We wanted to make sure we weren't just sampling material from the surface of the rocks," Marlow says.

Using a microscope, the team confirmed that ANME and sulfate-reducing bacterial cells were indeed present inside the carbonate rocks, and genetic analysis of their DNA showed that they were related to methanotrophs that had previously been characterized in seafloor sediment. The scientists also used a technique that involved radiolabeled 14C-methane tracer gas to quantify the rates of methane consumption in the carbonate rocks and sediments from both the actively seeping sites and the areas appearing to be inactive. They found that the rock-dwelling methanotrophs consumed methane at a slower rate than their sediment-dwelling cousins.

"The carbonate-based microbes breathed methane at roughly one-third the rate of those gathered from sediments near active seep sites," Marlow says. "However, because there are likely many more microbes living in carbonate mounds than in sediments, their contributions to methane removal from the environment may be more significant."

The rock samples that were harvested near supposedly dormant cold seeps also harbored microbial communities capable of consuming methane. "We were surprised to find that these marine microorganisms are still viable and, if exposed to methane, can continue to oxidize this greenhouse gas long after surface expressions of seepage have vanished," Orphan says.

Along with Orphan and Marlow, additional coauthors on the paper, "Carbonate-hosted methanotrophy represents an unrecognized methane sink in the deep sea," include former Caltech associate research scientist Joshua Steele, now at the Southern California Coastal Water Research Project; Wiebke Ziebis, an associate professor at the University of Southern California; Andrew Thurber, an assistant professor at Oregon State University; and Lisa Levin, a professor at the Scripps Institution of Oceanography. Funding for the study was provided by the National Science Foundation; NASA's Astrobiology Institute; the Gordon and Betty Moore Foundation Marine Microbiology Initiative grant; and the National Research Council of the National Academies. 

Written by Ker Than
