Caltech Geologists Discover Ancient Buried Canyon in South Tibet

A team of researchers from Caltech and the China Earthquake Administration has discovered an ancient, deep canyon buried along the Yarlung Tsangpo River in south Tibet, north of the eastern end of the Himalayas. The geologists say that the ancient canyon—thousands of feet deep in places—effectively rules out a popular model used to explain how the massive and picturesque gorges of the Himalayas became so steep, so fast.

"I was extremely surprised when my colleagues, Jing Liu-Zeng and Dirk Scherler, showed me the evidence for this canyon in southern Tibet," says Jean-Philippe Avouac, the Earle C. Anthony Professor of Geology at Caltech. "When I first saw the data, I said, 'Wow!' It was amazing to see that the river once cut quite deeply into the Tibetan Plateau because it does not today. That was a big discovery, in my opinion." 

Geologists like Avouac and his colleagues, who are interested in tectonics—the study of the earth's surface and the way it changes—can use tools such as GPS and seismology to study crustal deformation that is taking place today. But if they are interested in studying changes that occurred millions of years ago, such tools are not useful because the activity has already happened. In those cases, rivers become a main source of information because they leave behind geomorphic signatures that geologists can interrogate to learn about the way those rivers once interacted with the land—helping them to pin down when the land changed and by how much, for example.

"In tectonics, we are always trying to use rivers to say something about uplift," Avouac says. "In this case, we used a paleocanyon that was carved by a river. It's a nice example where by recovering the geometry of the bottom of the canyon, we were able to say how much the range has moved up and when it started moving."

The team reports its findings in the current issue of Science.

Last year, civil engineers from the China Earthquake Administration collected cores by drilling into the valley floor at five locations along the Yarlung Tsangpo River. Shortly after, former Caltech graduate student Jing Liu-Zeng, who now works for that administration, returned to Caltech as a visiting associate and shared the core data with Avouac and Dirk Scherler, then a postdoc in Avouac's group. Scherler had previously worked in the far western Himalayas, where the Indus River has cut deeply into the Tibetan Plateau, and immediately recognized that the new data suggested the presence of a paleocanyon.

Liu-Zeng and Scherler analyzed the core data and found that at several locations the drill holes passed through sedimentary conglomerates (rounded gravel and larger rocks cemented together) of the kind associated with flowing rivers, down to a depth of 800 meters or so, at which point the record clearly indicated bedrock. This suggested that the river once carved deeply into the plateau.

To establish when the river switched from incising bedrock to depositing sediments, they measured two isotopes, beryllium-10 and aluminum-26, in the lowest sediment layer. Both isotopes are produced when rocks and sediment are exposed to cosmic rays at the surface, and each decays at a different rate once the material is buried, so the ratio between them acts as a clock. That clock allowed the geologists to determine that the paleocanyon started to fill with sediment about 2.5 million years ago.
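
For readers who want to see the arithmetic behind that clock, here is a minimal sketch of 26Al/10Be burial dating. The half-lives and the surface production ratio are standard literature values; the "measured" ratio in the example is invented for illustration and is not data from the study.

```python
import math

# Hypothetical illustration of the 26Al/10Be burial-dating clock described
# above. Half-lives and the surface production ratio are standard literature
# values; the measured ratio below is invented for the example and is not
# a value from the Science paper.

HALF_LIFE_BE10 = 1.387e6   # years
HALF_LIFE_AL26 = 0.705e6   # years
SURFACE_RATIO = 6.75       # typical 26Al/10Be production ratio at the surface

def burial_age_years(measured_ratio):
    """Years since the sediment was shielded from cosmic rays."""
    lam_be = math.log(2) / HALF_LIFE_BE10
    lam_al = math.log(2) / HALF_LIFE_AL26
    # 26Al decays faster than 10Be, so the ratio drops exponentially
    # with time spent buried.
    return math.log(SURFACE_RATIO / measured_ratio) / (lam_al - lam_be)

print(f"{burial_age_years(2.0) / 1e6:.1f} Myr")  # a ratio near 2 gives ~2.5 Myr
```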

The researchers' reconstruction of the former valley floor showed that the slope of the river once increased gradually from the Gangetic Plain to the Tibetan Plateau, with no sudden changes, or knickpoints. Today, the river, like most others in the area, has a steep knickpoint where it meets the Himalayas, at a place known as the Namche Barwa massif. There, the uplift of the mountains is extremely rapid (on the order of 1 centimeter per year, whereas in other areas 5 millimeters per year is more typical) and the river drops by 2 kilometers in elevation as it flows through the famous Tsangpo Gorge, known by some as the Yarlung Tsangpo Grand Canyon because it is so deep and long.
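
As an illustration of what a knickpoint looks like in this kind of data, the toy longitudinal profile below shows channel slope jumping abruptly along one reach. The distances and elevations are invented for the example; they are not the reconstructed Yarlung Tsangpo profile.

```python
# Illustrative only: a toy longitudinal river profile showing how a knickpoint
# appears as an abrupt jump in channel slope. Distances and elevations are
# invented and are not the reconstructed Yarlung Tsangpo profile.

distance_km = [0, 50, 100, 150, 200, 250, 300]             # downstream distance
elevation_m = [3000, 2950, 2900, 2850, 1500, 1200, 1000]   # channel-bed elevation

for i in range(1, len(distance_km)):
    drop = elevation_m[i - 1] - elevation_m[i]
    run = (distance_km[i] - distance_km[i - 1]) * 1000.0
    slope = drop / run
    # A gentle profile has slopes of order 0.001; the reach through a gorge
    # shows up as an order-of-magnitude jump.
    flag = "  <-- knickpoint" if slope > 0.01 else ""
    print(f"{distance_km[i]:3d} km  slope = {slope:.4f}{flag}")
```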

Combining the depth and age of the paleocanyon with the geometry of the valley, the geologists surmised that the river existed in this location prior to about 3 million years ago, but at that time, it was not affected by the Himalayas. However, as the Indian and Eurasian plates continued to collide and the mountain range pushed northward, it began impinging on the river. Suddenly, about 2.5 million years ago, a rapidly uplifting section of the mountain range got in the river's way, damming it, and the canyon subsequently filled with sediment.

"This is the time when the Namche Barwa massif started to rise, and the gorge developed," says Scherler, one of two lead authors on the paper and now at the GFZ German Research Center for Geosciences in Potsdam, Germany.

That picture of the river and the Tibetan Plateau, which involves the river incising deeply into the plateau millions of years ago, differs quite a bit from the generally accepted geologic view. Geologists have typically believed that when rivers start to incise into a plateau, they eat at the edges, slowly working their way inward over time. However, the rivers flowing across the Himalayas all have strong knickpoints and have not incised much at all into the Tibetan Plateau. The thinking, therefore, has been that the rapid uplift of the Himalayas has pushed the rivers back, effectively pinning them, so that they have not been able to make their way into the plateau. But that explanation does not work with the newly discovered paleocanyon.

The team's findings also rule out a model that has been around for about 15 years, called the tectonic aneurysm model, which suggests that the rapid uplift seen at the Namche Barwa massif was triggered by intense river incision. In that model, a river cuts down through the earth's crust so fast that it causes the crust to heat up, making a nearby mountain range weaker and facilitating uplift.

The model is popular among geologists, and indeed Avouac himself published a modeling paper in 1996 that showed the viability of the mechanism. "But now we have discovered that the river was able to cut into the plateau way before the uplift happened," Avouac says, "and this shows that the tectonic aneurysm model was actually not at work here. The rapid uplift is not a response to river incision."

The other lead author on the paper, "Tectonic control of Yarlung Tsangpo Gorge revealed by a buried canyon in Southern Tibet," is Ping Wang of the State Key Laboratory of Earthquake Dynamics, in Beijing, China. Additional authors include Jürgen Mey, of the University of Potsdam, in Germany; and Yunda Zhang and Dingguo Shi of the Chengdu Engineering Corporation, in China. The work was supported by the National Natural Science Foundation of China, the State Key Laboratory for Earthquake Dynamics, and the Alexander von Humboldt Foundation. 

Writer: 
Kimm Fesenmaier

New Center Supports Data-Driven Research

With the advanced capabilities of today's computer technologies, researchers can now collect vast amounts of information with unprecedented speed. However, gathering information is only one half of a scientific discovery, as the data also need to be analyzed and interpreted. A new center on campus aims to hasten such data-driven discoveries by making expertise and advanced computational tools available to Caltech researchers in many disciplines within the sciences and the humanities.

The new Center for Data-Driven Discovery (CD3), which became operational this fall, is a hub for researchers to apply advanced data exploration and analysis tools to their work in fields such as biology, environmental science, physics, astronomy, chemistry, engineering, and the humanities.

The Caltech center will also complement the resources available at JPL's Center for Data Science and Technology, says George Djorgovski, professor of astronomy and director of CD3.

"Bringing together the research, technical expertise, and respective disciplines of the two centers to form this joint initiative creates a wonderful synergy that will allow us opportunities to explore and innovate new capabilities in data-driven science for many of our sponsors," adds Daniel Crichton, director of the Center for Data Science and Technology at JPL.

At the core of the Caltech center are staff members who specialize in both computational methodology and various domains of science, such as biology, chemistry, and physics. Faculty-led research groups from each of Caltech's six divisions and JPL will be able to collaborate with center staff to find new ways to get the most from their research data. Resources at CD3 will range from data storage and cataloguing that meet the highest "housekeeping" standards, to custom data-analysis methods that combine statistics with machine learning—the development of algorithms that can "learn" from data. The staff will also help develop new research projects that could benefit from large amounts of existing data.

"The volume, quality, and complexity of data are growing such that the tools that we used to use—on our desktops or even on serious computing machines—10 years ago are no longer adequate. These are not problems that can be solved by just buying a bigger computer or better software; we need to actually invent new methods that allow us to make discoveries from these data sets," says Djorgovski.

Rather than turning to off-the-shelf data-analysis methods, Caltech researchers can now collaborate with CD3 staff to develop new customized computational methods and tools that are specialized for their unique goals. For example, astronomers like Djorgovski can use data-driven computing in the development of new ways to quickly scan large digital sky surveys for rare or interesting targets, such as distant quasars or new kinds of supernova explosions—targets that can be examined more closely with telescopes, such as those at the W. M. Keck Observatory, he says.
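
As a purely illustrative sketch of that kind of task (not the center's actual tooling), the snippet below uses a generic machine-learning anomaly detector to flag sources in a synthetic survey catalog whose measurements look unlike the bulk of the population, the sort of candidates one might then reexamine with a telescope.

```python
# A toy illustration (not the center's actual tooling) of flagging rare
# sources in a synthetic survey catalog with a generic machine-learning
# anomaly detector; flagged objects would then be candidates for follow-up.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Fake catalog: two measured features (say, a magnitude and a color) for
# 10,000 ordinary sources plus a handful of unusual ones.
ordinary = rng.normal(loc=[20.0, 0.5], scale=[1.0, 0.2], size=(10_000, 2))
unusual = rng.normal(loc=[17.0, 2.5], scale=[0.3, 0.3], size=(10, 2))
catalog = np.vstack([ordinary, unusual])

labels = IsolationForest(contamination=0.002, random_state=0).fit_predict(catalog)
flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} candidate rare sources flagged for follow-up")
```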

Mary Kennedy, the Allen and Lenabelle Davis Professor of Biology and a coleader of CD3, says that the center will serve as a bridge between the laboratory-science and computer-science communities at Caltech. In addition to matching up Caltech faculty members with the expertise they will need to analyze their data, the center will also minimize the gap between those communities by providing educational opportunities for undergraduate and graduate students.

"Scientific development has moved so quickly that the education of most experimental scientists has not included the techniques one needs to synthesize or mine large data sets efficiently," Kennedy says. "Another way to say this is that 'domain' sciences—biology, engineering, astronomy, geology, chemistry, sociology, etc.—have developed in isolation from theoretical computer science and mathematics aimed at analysis of high-dimensional data. The goal of the new center is to provide a link between the two."

Work in Kennedy's laboratory focuses on understanding what takes place at the molecular level in the brain when neuronal synapses are altered to store information during learning. She says that methods and tools developed at the new center will assist her group in creating computer simulations that can help them understand how synapses are regulated by enzymes during learning.

"The ability to simulate molecular mechanisms in detail and then test predictions of the simulations with experiments will revolutionize our understanding of highly interconnected control mechanisms in cells," she says. "To some, this seems like science fiction, but it won't stay fictional for long. Caltech needs to lead in these endeavors."

Assistant Professor of Biology Mitchell Guttman says that the center will also be an asset to groups like his that are trying to make sense out of big sets of genomic data. "Biology is becoming a big-data science—genome sequences are available at an unprecedented pace. Whereas it took more than $1 billion to sequence the first genome, it now costs less than $1,000," he says. "Making sense of all this data is a challenge, but it is the future of biomedical research."

In his own work, Guttman studies the genetic code of lncRNAs, a new class of gene that he discovered, largely through computational methods like those available at the new center. "I am excited about the new CD3 center because it represents an opportunity to leverage the best ideas and approaches across disciplines to solve a major challenge in our own research," he says.

But the most valuable findings from the center could be those that stem not from a single project, but from the multidisciplinary collaborations that CD3 will enable, Djorgovski says. "To me, the most interesting outcome is to have successful methodology transfers between different fields—for example, to see if a solution developed in astronomy can be used in biology," he says.

In fact, one such crossover method has already been identified, says Matthew Graham, a computational scientist at the center. "One of the challenges in data-rich science is dealing with very heterogeneous data—data of different types from different instruments," says Graham. "Using the experience and the methods we developed in astronomy for the Virtual Observatory, I worked with biologists to develop a smart data-management system for a collection of expression and gene-integration data for genetic lines in zebrafish. We are now starting a project along similar methodology transfer lines with Professor Barbara Wold's group on RNA genomics."

And, through the discovery of more tools and methods like these, "the center could really develop new projects that bridge the boundaries between different traditional fields through new collaborations," Djorgovski says.

Photosynthesis: A Planetary Revolution

Watson Lecture Preview

Two and a half billion years ago, single-celled organisms called cyanobacteria harnessed sunlight to split water molecules, producing energy to power their cells and releasing oxygen into an atmosphere that had previously had none. These early environmental engineers are responsible for the life we see around us today, and much more besides.

At 8:00 p.m. on Wednesday, November 19, in Caltech's Beckman Auditorium, Professor of Geobiology Woodward "Woody" Fischer will describe how they transformed the planet. Admission is free.

 

Q: What do you do?

A: I'm a geobiologist of the historical variety. I'm trying to understand both how the earth works, and why it works that way. The whys are hard, because you can't redo this planetary experiment. You have to create clever ways to work backward from what you can observe to answer the question you've posed.

When you boil down the earth's history, there are maybe a half-dozen singularities—fundamental changes in how our planet and the life on it interact. Photosynthetic cyanobacteria reengineered the planet. Photosynthesis led to two more singularities—plants and animals appeared. The remaining singularities are mass extinctions as a result of something happening to the global environment, and photosynthesis likely caused one of those as well. Oxygen can be highly toxic because it's so reactive. It chews up your DNA, and it binds to the metal compounds that cells use to shuttle electrons around. Any microbes that couldn't cope with this new pollutant died off, or were forced to hide in oxygen-depleted environments.

Atmospheric oxygen resulted from a change to a microbe's metabolism that evolved once, at a specific time in the earth's history. We want to know why that happened. What were those bacteria doing beforehand? What forced them to develop this radically new way of making a living?

Bacteria don't leave fossils, per se, but they can leave behind metabolic signatures that sedimentary rocks preserve. They impact the rock's elemental composition, and they alter the ratios between heavier and lighter isotopes of certain elements as well. We can work backward from that information to deduce what the bacteria were doing on the ocean floor and in the seawater above it as those sediments were being laid down.

 

Q: If the earth has had breathable oxygen for billions of years, why should we care where it came from?

A: There are two really good reasons.

One has to do with meeting society's energy demands. There's a tremendous effort at Caltech and elsewhere to develop "solar fuels." Can we do better than green plants? If cyanobacteria did the best they could under tight constraints, maybe not. But if there are a variety of ways to do that chemistry, maybe we can clear the slate and do something entirely different.

The deeper reason is that atmospheric oxygen rewrote life's recipe book. Oxygen-based metabolism provides extra energy that can be invested in cellular specialization. A group of specialized cells can become a tissue, and eventually you have complex creatures with limbs. It's like agriculture—when you start growing crops, you have surplus food. Villages spring up. Craftsmen appear.

It gets to the Big Question—how rare are we? The earth is 4.5 billion years old, and the oldest evidence for life is about 3.5 billion years old. It took another billion years until photosynthesis, and two billion more for animals to develop. Is it possible to evolve advanced creatures under a different set of constraints leading to completely different metabolisms? If we're looking for life on worlds that play by different rules, will we recognize it?

 

Q: How did you get into this line of work?

A: As a small kid, I always loved science. That disappeared somewhere in middle school, so I went to Colorado College in Colorado Springs—a small, liberal-arts school with a really intense curriculum called the block plan. You take one class at a time for a month. You're completely immersed—lecture from nine to twelve, break for lunch, afternoon labs, evening homework. Lather, rinse, repeat. I took a geology class on a whim, because my grandfather had once taught paleontology there. The class vanished into the mountains for a month, and I was hooked.

In graduate school at Harvard, I worked with Andy Knoll, a Precambrian paleontologist who's trying to understand what the world looked like before animals. Andy's primary appointment is actually in the biology department, and I built on my sedimentary-geology background with a lot of biology classes—molecular biology, biochemistry, genomics, comparative biology, evolutionary biology. And then I came here as an Agouron Postdoctoral Scholar in Geobiology in 2007. I was fortunate that they invited me to stay.

 


Named for the late Caltech professor Earnest C. Watson, who founded the series in 1922, the Watson Lectures present Caltech and JPL researchers describing their work to the public. Many past Watson Lectures are available online at Caltech's iTunes U site.
Writer: 
Douglas Smith

Robotic Ocean Gliders Aid Study of Melting Polar Ice

The rapidly melting ice sheets on the coast of West Antarctica are a potential major contributor to rising ocean levels worldwide. Although warm water near the coast is thought to be the main factor causing the ice to melt, the process by which this water ends up near the cold continent is not well understood.

Using robotic ocean gliders, Caltech researchers have now found that swirling ocean eddies, similar to atmospheric storms, play an important role in transporting these warm waters to the Antarctic coast—a discovery that will help the scientific community determine how rapidly the ice is melting and, as a result, how quickly ocean levels will rise.

Their findings were published online on November 10 in the journal Nature Geoscience.

"When you have a melting slab of ice, it can either melt from above because the atmosphere is getting warmer or it can melt from below because the ocean is warm," explains lead author Andrew Thompson, assistant professor of environmental science and engineering. "All of our evidence points to ocean warming as the most important factor affecting these ice shelves, so we wanted to understand the physics of how the heat gets there."

Ordinarily when oceanographers like Thompson want to investigate such questions, they use ships to lower instruments through the water or they collect ocean temperature data from above with satellites. These techniques are problematic in the Southern Ocean. "Observationally, it's a very hard place to get to with ships. Also, the warm water is not at the surface, making satellite observations ineffective," he says.

Because the gliders are small—only about six feet long—and are very energy efficient, they can sample the ocean for much longer periods than large ships can. When the glider surfaces every few hours, it "calls" the researchers via a mobile phone–like device located on the tail. This communication allows the researchers to almost immediately access the information the glider has collected.

Like airborne gliders, the bullet-shaped ocean gliders have no propeller; instead they use batteries to power a pump that changes the glider's buoyancy. When the pump pushes fluid into a compartment inside the glider, the glider becomes denser than seawater and less buoyant, thus causing it to sink. If the fluid is pumped instead into a bladder on the outside of the glider, the glider becomes less dense than seawater—and therefore more buoyant—ultimately rising to the surface. As in airborne gliders, wings convert this vertical motion into forward motion.
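
Here is a minimal sketch of that buoyancy-engine idea with made-up numbers; the mass and volumes are hypothetical, not the specifications of the gliders used in the study.

```python
# A minimal sketch of the buoyancy-engine idea described above, with made-up
# numbers: moving fluid into an external bladder increases the volume the
# glider displaces, making it lighter than seawater; emptying the bladder
# makes it denser, so it sinks. Mass and volumes are hypothetical.

GLIDER_MASS = 52.0         # kg (assumed)
HULL_VOLUME = 0.0500       # m^3 displaced with the external bladder empty (assumed)
SEAWATER_DENSITY = 1027.0  # kg/m^3

def glider_density(bladder_volume_m3):
    return GLIDER_MASS / (HULL_VOLUME + bladder_volume_m3)

for bladder in (0.0, 0.0012):  # bladder empty vs. ~1.2 liters pumped out
    rho = glider_density(bladder)
    state = "sinks" if rho > SEAWATER_DENSITY else "rises"
    print(f"bladder = {bladder * 1000:.1f} L  glider density = {rho:.0f} kg/m^3  -> {state}")
```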

Thompson and his colleagues from the University of East Anglia dropped their gliders into the ocean off the coast of the Antarctic Peninsula in January 2012; the robotic vehicles then spent the next two months moving up and down through the water column—diving a kilometer below the surface of the water and back up again every few hours—exploring the Weddell Sea off the coast of Antarctica. As the gliders traveled, they collected temperature and salinity data at different locations and depths of the sea.

The glider's up and down capability is important for studying ocean stratification, or how water characteristics, such as density, change with depth, Thompson says. "If it was only temperature that determined density, you'd always have warm water at the top and cold water at the bottom. But in the ocean you also have to factor in salinity; the higher the salinity is in the water, the more dense that water is and the more likely it is to sink to the bottom," he says.

In Antarctica the combined effects of temperature and salinity create an interesting situation, in which the warmest water is not on top, but actually sandwiched in the middle layers of the water column. "That's an additional problem in understanding the heat transport in this region," he adds. You can't just take measurements at the surface, he says. "You actually need to be taking a look at that very warm temperature layer, which happens to sit in the middle of the water column. That's the layer that is actually moving toward the ice shelf."
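
The sketch below illustrates that point with a deliberately simplified, linear equation of state; the coefficients and water properties are rough, generic values, not measurements from the Weddell Sea campaign.

```python
# A deliberately simplified, linear equation of state for seawater, just to
# illustrate the point above: density depends on both temperature and salinity,
# so a warm but salty layer can sit beneath colder, fresher water. Coefficients
# and water properties are rough, generic values, not Weddell Sea measurements.

RHO0 = 1027.0    # kg/m^3, reference density
ALPHA = 2.0e-4   # 1/degC, thermal expansion coefficient
BETA = 8.0e-4    # 1/(g/kg), haline contraction coefficient
T0, S0 = 10.0, 35.0

def density(temp_c, salinity):
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity - S0))

surface_water = density(0.0, 34.0)   # cold, relatively fresh surface layer
warm_salty = density(1.5, 34.7)      # warmer but saltier mid-depth layer

print(f"surface layer:     {surface_water:.2f} kg/m^3")
print(f"warm, salty layer: {warm_salty:.2f} kg/m^3")
# The warmer layer is denser, so it sits below the colder surface water.
```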

The results from the gliders revealed that the heat was being carried toward the coast by a less predictable mechanism: eddies, swirling underwater storms caused by ocean currents.

"Eddies are instabilities that are caused by ocean currents, and we often compare their effect on the ocean to putting a spoon in your coffee," Thompson says. "If you pour milk in your coffee and then you stir it with a spoon, the spoon enhances your ability to mix the milk into the coffee and that is what these eddies do. They are very good at mixing heat and other properties."

Because the gliders could dive and surface every few hours and remain at sea for months, they were able to see these eddies in action—something that ships and satellites had previously been unable to capture.

"Ocean currents are variable, and so if you go just one time, what you measure might not be what the current looks like a day later. It's sort of like the weather—you know it's going to be warm in the summer and cold in the winter, but on a day-to-day basis it could be cold in the summer just because a storm came in," Thompson says. "Eddies do the same thing in the ocean, so unless you understand how the temperature of currents is changing from day to day—information we can actually collect with the gliders—then you can't understand what the long-term heat transport is."

In future work, Thompson plans to couple meteorological data with the data collected from his gliders. In December, the team will use ocean gliders to study a rough patch of ocean between the southern tip of South America and Antarctica, called the Drake Passage, as a surface robot, called a Waveglider, collects information from the surface of the water. "With the Waveglider, we can measure not just the ocean properties, but atmospheric properties as well, such as wind speed and wind direction. So we'll get to actually see what's happening at the air-sea interface."

In the Drake Passage, deep waters from the Southern Ocean are "ventilated"—or emerge at the surface—a phenomenon specific to this region of the ocean. That makes the location important for understanding the exchange of carbon dioxide between the atmosphere and the ocean. "The Southern Ocean is the window through which deep waters can actually come up to 'see' the atmosphere"—and it's also a window for oceanographers to more easily see the deep ocean, he says. "It's a very special place for many reasons."

The work with ocean gliders was published in a paper titled "Eddy transport as a key component of the Antarctic overturning circulation." Other authors on the paper include Karen J. Heywood of the University of East Anglia, Sunke Schmidtko of GEOMAR Helmholtz Centre for Ocean Research Kiel, Germany, and Andrew Stewart, a former postdoctoral scholar at Caltech who is now at UCLA. Thompson's glider work was supported by an award from the National Science Foundation and the UK's Natural Environment Research Council; Stewart was supported by the President's and Director's Fund program at Caltech.

Unexpected Findings Change the Picture of Sulfur on the Early Earth

Scientists believe that until about 2.4 billion years ago there was little oxygen in the atmosphere—an idea that has important implications for the evolution of life on Earth. Evidence in support of this hypothesis comes from studies of sulfur isotopes preserved in the rock record. But the sulfur isotope story has been incomplete, missing key information that a new analytical technique developed by a team of Caltech geologists and geochemists now provides. The story that new information reveals, however, is not what most scientists had expected.

"Our new technique is 1,000 times more sensitive for making sulfur isotope measurements," says Jess Adkins, professor of geochemistry and global environmental science at Caltech. "We used it to make measurements of sulfate groups dissolved in carbonate minerals deposited in the ocean more than 2.4 billion years ago, and those measurements show that we have been thinking about this part of the sulfur cycle and sulfur isotopes incorrectly."

The team describes their results in the November 7 issue of the journal Science. The lead author on the paper is Guillaume Paris, an assistant research scientist at Caltech.

Nearly 15 years ago, a team of geochemists led by researchers at UC San Diego, analyzing the abundances of stable sulfur isotopes in ancient rocks, discovered something peculiar about samples from the Archean eon, an interval that lasted from 3.8 billion to about 2.4 billion years ago.

When sulfur is involved in a reaction—such as microbial sulfate reduction, a way for microbes to eat organic compounds in the absence of oxygen—its isotopes are usually fractionated, or separated, from one another in proportion to their differences in mass. That is, 34S gets fractionated from 32S about twice as much as 33S gets fractionated from 32S. This process is called mass-dependent fractionation, and scientists have found that it has dominated virtually all sulfur processes operating on Earth's surface for the last 2.4 billion years.

However, in older rocks from the Archean eon (i.e., older than 2.4 billion years), the relative abundances of sulfur isotopes do not follow this mass-related pattern; instead, 33S is enriched or depleted relative to what mass-dependent fractionation of 34S would predict. Such signatures are said to be the product of mass-independent fractionation (MIF).
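
The arithmetic behind that distinction is compact enough to sketch. The snippet below uses the common linearized definition of the capital-delta anomaly; the delta values are invented for illustration, not measurements from the paper.

```python
# Minimal arithmetic behind the mass-independent fractionation (MIF) signal.
# In mass-dependent processes, the 33S/32S shift is roughly 0.515 times the
# 34S/32S shift; the capital-delta value measures the leftover deviation.
# The delta values below are in per mil and are invented for illustration.

def capital_delta_33S(delta_33S, delta_34S):
    # Common linearized form; the exact definition uses
    # 1000 * ((1 + delta_34S/1000)**0.515 - 1) in place of 0.515 * delta_34S.
    return delta_33S - 0.515 * delta_34S

# Mass-dependent sample: delta-33S is ~0.515 x delta-34S, so the anomaly is ~0.
print(capital_delta_33S(delta_33S=5.15, delta_34S=10.0))  # 0.0 per mil

# Archean-style sample: delta-33S sits well off the mass-dependent line.
print(capital_delta_33S(delta_33S=8.0, delta_34S=10.0))   # +2.85 per mil
```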

The widely accepted explanation for the occurrence of MIF is as follows. Billions of years ago, volcanism was extremely active on Earth, and all those volcanoes spewed sulfur dioxide high into the atmosphere. At that time, oxygen existed at very low levels in the atmosphere, and therefore ozone, which is produced when ultraviolet radiation strikes oxygen, was also lacking. Today, ozone prevents ultraviolet light from reaching sulfur dioxide with the energy needed to fractionate sulfur, but on the early Earth, that was not the case, and MIF is the result. Researchers have been able to reproduce this effect in the lab by shining lasers onto sulfur dioxide and producing MIF.

Geologists have also measured the sulfur isotopic composition of sedimentary rocks dating to the Archean era, and found that sulfides—sulfur-bearing compounds such as pyrite (FeS2)—include more 33S than would be expected based on normal mass-dependent processes. But if those minerals are enriched in 33S, other minerals must be correspondingly lacking in the isotope. According to the leading hypothesis, those 33S-deficient minerals should be sulfates—oxidized sulfur-bearing compounds—that were deposited in the Archean ocean.

"That idea was put forward on the basis of experiment. To test the hypothesis, you'd need to check the isotope ratios in sulfate salts (minerals such as gypsum), but those don't really exist in the Archean rock record since there was very little oxygen around," explains Woody Fischer, professor of geobiology at Caltech and a coauthor on the new paper. "But there are trace amounts of sulfate that got trapped in carbonate minerals in seawater."

However, because those sulfates are present in such small amounts, no one had been able to measure their isotopic composition accurately. Using a device known as a multicollector inductively coupled plasma mass spectrometer to precisely measure multiple sulfur isotopes, Adkins and his colleague Alex Sessions, a professor of geobiology, developed a method that is sensitive enough to measure the isotopic composition of about 10 nanomoles of sulfate in just a few tens of milligrams of carbonate material.

The authors used the method to measure the sulfate content of carbonates from an ancient carbonate platform preserved in present-day South Africa, an ancient version of the depositional environments found in the Bahamas today. Analyzing the samples, which spanned 70 million years and a variety of marine environments, the researchers found exactly the opposite of what had been predicted: the sulfates were actually enriched in 33S rather than lacking in it.

"Now, finally, we're looking at this sulfur cycle and the sulfur isotopes correctly," Adkins says.

What does this mean for the atmospheric conditions of the early Earth? "Our findings underscore that the oxygen concentrations in the early atmosphere could have been incredibly low," Fischer says.

Knowledge of sulfate isotopes changes how we understand the role of biology in the sulfur cycle, he adds. Indeed, the fact that the sulfates from this time period have the same isotopic composition as sulfide minerals suggests that the sulfides may be the product of microbial processes that reduced seawater sulfate to sulfide (which later precipitated in sediments in the form of pyrite). Previously, scientists thought that all of the isotope fractionation could be explained by inorganic processes alone.

In a second paper also in the November 7 issue of Science, Paris, Adkins, Sessions, and colleagues from a number of institutions around the world report on related work in which they measured the sulfates in Indonesia's Lake Matano, a low-sulfate analog of the Archean ocean.

At a depth of about 100 meters, the bacterial communities in Lake Matano begin consuming sulfate rather than oxygen (which most microbial communities use), yielding sulfide. The researchers measured the sulfur isotopes within the sulfates and sulfides in the lake water and sediments and found that despite the low concentrations of sulfate, a lot of mass-dependent fractionation was taking place. The researchers used the data to build a model of the lake's sulfur cycle that could produce the measured fractionation, and when they applied that model to constrain the range of sulfate concentrations in the Archean ocean, they found that the concentration was likely less than 2.5 micromolar, 10,000 times lower than in the modern ocean.

"At such low concentration, all the isotopic variability starts to fit," says Adkins. "With these two papers, we were able to come at the same problem in two ways—by measuring the rocks dating from the Archean and by looking at a model system today that doesn't have much sulfate—and they point toward the same answer: the sulfate concentration was very low in the Archean ocean."

Samuel M. Webb of the Stanford Synchrotron Radiation Lightsource is also an author on the paper, "Neoarchean carbonate-associated sulfate records positive Δ33S anomalies." The work was supported by funding from the National Science Foundation's Division of Earth Sciences, the Henry and Camille Dreyfus Foundation's Postdoctoral Program in Environmental Chemistry, and the David and Lucile Packard Foundation.

Paris is also a co-lead author on the second paper, "Sulfate was a trace constituent of Archean seawater." Additional authors on that paper are Sean Crowe and CarriAyne Jones of the University of British Columbia and the University of Southern Denmark; Sergei Katsev of the University of Minnesota Duluth; Sang-Tae Kim of McMaster University; Aubrey Zerkle of the University of St. Andrews; Sulung Nomosatryo of the Indonesian Institute of Sciences; David Fowle of the University of Kansas; James Farquhar of the University of Maryland, College Park; and Donald Canfield of the University of Southern Denmark. Funding was provided by an Agouron Institute Geobiology Fellowship and a Natural Sciences and Engineering Research Council of Canada Postdoctoral Fellowship, as well as by the Danish National Research Foundation and the European Research Council.

Oceanographer Andrew Thompson Wins Prestigious Fellowship

Caltech oceanographer Andrew Thompson, who uses autonomous underwater instruments and numerical models to study ocean currents and eddies and their impact on Earth's ecology and climate, has been awarded a Packard Fellowship for Science and Engineering. Packard Fellowships are awarded annually by the David and Lucile Packard Foundation to the nation's "most innovative early-career scientists and engineers" to provide them with "flexible funding and the freedom to take risks and explore new frontiers in their fields," according to the foundation.

Along with this year's 17 other fellows, Thompson, an assistant professor of environmental science and engineering, will receive a grant of $875,000 distributed over five years, to pursue his research.

As is the case for other Packard Fellows, Thompson was surprised by his selection. He recalls being called into a meeting with Ken Farley, W. M. Keck Foundation Professor of Geochemistry, on a recent Tuesday morning. Farley had chaired Thompson's division, Geological and Planetary Sciences, until September 1, when Fletcher Jones Professor of Geology John Grotzinger took over.

"Ken asked to meet with me in the division chair's office. This was already a little odd, because John had already taken over, but I did not think too much of it at the time," Thompson says. "Five minutes into our conversation, the phone rang, and when it was for me, I knew that something was up. Ken had nominated me in the spring so he was the one who delivered the news. I was thrilled. I had to go for a walk immediately after to calm down. It was really an honor to represent Caltech as a nominee."

"This was a great way to complete my term as chair—to have played a part in successfully nominating Andy for this prestigious and valuable award, and to get the chance to see his surprise and happiness when the foundation told him," Farley says.

Although he's only just heard the news, Thompson already has plans for the grant. "Part of the funds will be used to support our work with autonomous ocean instruments—gliders—that allow us to observe remote or dynamic parts of the ocean over long periods of time," he says. "These tools will be used to explore the coupling between ocean circulation, ecosystem dynamics, and biogeochemical cycling in the upper ocean, processes that are difficult to observe using traditional ship-based techniques."

Thompson joins 23 other current Caltech faculty who have been named Packard Fellows since the program's inception in 1988. To date, the Packard Foundation, a private family foundation created in 1964 by Hewlett-Packard Company cofounder David Packard and his wife, Lucile Packard, has awarded $346 million to support 523 scientists from 52 national universities. Each year, participating universities are invited to nominate two faculty members for consideration by the 12-member Fellowship Advisory Panel of internationally recognized scientists and engineers, which recommends nominees for approval by the Packard Foundation Board of Trustees.

Writer: 
Kathy Svitil

Getting To Know Super-Earths

"If you have a coin and flip it just once, what does that tell you about the odds of heads versus tails?" asks Heather Knutson, assistant professor of planetary science at Caltech. "It tells you almost nothing. It's the same with planetary systems," she says.

For as long as astronomers have been looking to the skies, we have had just one planetary system—our own—to study in depth. That means we have only gotten to know a handful of possible outcomes of the planet formation process, and we cannot say much about whether the features observed in our solar system are common or rare when compared to planetary systems orbiting other stars.

That is beginning to change. NASA's Kepler spacecraft, which launched on a planet-hunting mission in 2009, searched one small patch of the sky and identified more than 4,000 candidate exoplanets—worlds orbiting stars other than our own sun. It was the first survey to provide a definitive look at the relative frequency of planets as a function of size: that is, to ask how common gas giant planets like Jupiter are compared to planets that look a lot more like Earth.

Kepler's results suggest that small planets are much more common than big ones. Interestingly, the most common planets are those that are just a bit larger than Earth but smaller than Neptune—the so-called super-Earths.

However, despite their apparent abundance in our local corner of the galaxy, super-Earths have no counterpart in our own solar system. Our current observations tell us something about the sizes and orbits of these newly discovered worlds, but we have very little insight into their compositions.

"We are left with this situation where super-Earths appear to be the most common kind of exoplanet in the galaxy, but we don't know what they're made of," says Knutson.

There are a number of possibilities. A super-Earth could be just that: a bigger version of Earth—mostly rocky, with an atmosphere. Then again, it could be a mini-Neptune, with a large rock-ice core encapsulated in a thick envelope of hydrogen and helium. Or it could be a water world—a rocky core enveloped in a blanket of water and perhaps an atmosphere composed of steam (depending on the temperature of the planet).

"It's really interesting to think about these planets because they could have so many different compositions, and knowing their composition will tell us a lot about how planets form," Knutson says. For example, because planets in this size range acquire most of their mass by pulling in and incorporating solid material, water worlds initially must have formed far away from their parent stars, where temperatures were cold enough for water to freeze. Most of the super-Earths known today orbit very close to their host stars. If water-dominated super-Earths turn out to be common, it would indicate that most of these worlds did not form in their present locations but instead migrated in from more distant orbits.

In addition to thinking about what these planets might be made of, Knutson and her students use space-based observatories like the Hubble and Spitzer Space Telescopes to learn more about the distant worlds. For example, the researchers analyze the starlight that filters through a planet's atmosphere as it passes in front of its star to learn about the composition of the atmosphere. Molecular species present in the planet's atmosphere absorb light at particular wavelengths. Therefore, by using Hubble and Spitzer to view the planet and its atmosphere at a number of different wavelengths, the researchers can determine which chemical compounds are present.
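
To make the geometry concrete, here is a rough sketch of why an atmosphere shows up in transit measurements: the transit depth is (Rp/Rs)^2, and at wavelengths where a molecule absorbs, the planet blocks light out to a slightly larger effective radius. The star, planet, and atmosphere numbers are assumptions chosen for illustration, not the properties of any particular system.

```python
# A rough sketch of the transit-depth geometry: the fraction of starlight
# blocked is (Rp/Rs)^2, and at wavelengths where a molecule in the atmosphere
# absorbs, the planet blocks light out to a slightly larger effective radius.
# Star, planet, and atmosphere numbers are assumptions for illustration only.

R_SUN = 6.957e8    # m
R_EARTH = 6.371e6  # m

r_star = 0.9 * R_SUN          # hypothetical host star
r_planet = 2.3 * R_EARTH      # hypothetical super-Earth
extra_atmosphere = 5 * 80e3   # ~5 scale heights of 80 km each (assumed)

depth_continuum = (r_planet / r_star) ** 2
depth_in_band = ((r_planet + extra_atmosphere) / r_star) ** 2

print(f"depth outside an absorption band: {depth_continuum * 1e6:.0f} ppm")
print(f"depth inside a water band:        {depth_in_band * 1e6:.0f} ppm")
print(f"atmospheric signal:               {(depth_in_band - depth_continuum) * 1e6:.0f} ppm")
```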

To date, nearly two dozen planets have been characterized with this technique. These observations have shown that the enormous gas giant exoplanets known as hot Jupiters have water, carbon monoxide, hydrogen, helium—and potentially carbon dioxide and methane—in their atmospheres.

However, right now super-Earths are the hot topic. Unfortunately, although hundreds of super-Earths have been found, only a few are close enough and orbiting bright enough stars for astronomers to study in this way using currently available telescopes.

The first super-Earth that the astronomical community targeted for atmospheric studies was GJ 1214b, in the constellation Ophiuchus. Based on its average density (determined from its mass and radius), it was clear from the start that the planet was not entirely rocky. However, its density could be equally well matched by either a primarily water composition or a Neptune-like composition with a rocky core surrounded by a thick gas envelope. Information about the atmosphere could help astronomers determine which one it was: a mini-Neptune's atmosphere should contain lots of molecular hydrogen, while a water world's atmosphere should be water dominated.

GJ 1214b has been a popular target for the Hubble Space Telescope since its discovery in 2009. Disappointingly, after a first Hubble campaign led by researchers at the Harvard-Smithsonian Center for Astrophysics, the spectrum came back featureless—there were no chemical signatures in the atmosphere. After a second set of more sensitive observations led by researchers at the University of Chicago returned the same result, it became clear that a high cloud deck must be masking the signature of absorption from the planet's atmosphere.

"It's exciting to know that there are clouds on the planet, but the clouds are getting in the way of what we actually wanted to know, which is what is this super-Earth made of?" explains Knutson.

Now Knutson's team has studied a second super-Earth: HD 97658b, in the constellation Leo. They report their findings in the current issue of The Astrophysical Journal. The researchers used Hubble to measure the decrease in light when the planet passed in front of its parent star over a range of infrared wavelengths in order to detect small changes caused by water vapor in the planet's atmosphere.

However, again the data came back featureless. One explanation is that HD 97658b is also enveloped in clouds. However, Knutson says, it is also possible that the planet has an atmosphere that is lacking hydrogen. Because such an atmosphere could be very compact, it would make the telltale fingerprints of water vapor and other molecules very small and hard to detect. "Our data are not precise enough to tell whether it's clouds or the absence of hydrogen in the atmosphere that's causing the spectrum to be flat," she says. "This was just a quick first look to give us a rough idea of what the atmosphere looked like. Over the next year, we will use Hubble to observe this planet again in more detail. We hope those observations will provide a clear answer to the current mystery."
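
The compactness argument comes down to the atmospheric scale height, which shrinks as the mean molecular weight of the gas grows. A back-of-the-envelope comparison, with a generic assumed temperature and surface gravity rather than measured values for HD 97658b:

```python
# Back-of-the-envelope version of the compactness argument: the atmospheric
# scale height H = k*T / (mu * m_H * g) shrinks as the mean molecular weight
# mu grows, so a hydrogen-poor (e.g., steam) atmosphere is far more compact
# and produces much smaller spectral features. The temperature and gravity
# below are generic assumptions, not measured properties of HD 97658b.

K_B = 1.380649e-23  # J/K, Boltzmann constant
M_H = 1.6726e-27    # kg, mass of a hydrogen atom

def scale_height_km(temp_k, mu, gravity_ms2):
    return K_B * temp_k / (mu * M_H * gravity_ms2) / 1000.0

T, G = 750.0, 12.0  # assumed temperature (K) and surface gravity (m/s^2)

print(f"H2/He-rich atmosphere (mu ~ 2.3): {scale_height_km(T, 2.3, G):.0f} km")
print(f"steam atmosphere      (mu ~ 18):  {scale_height_km(T, 18.0, G):.0f} km")
```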

It appears that clouds are going to continue to pose a real challenge in studies of super-Earths, so Knutson and other researchers are working to understand the composition of the clouds around these planets and the conditions under which they form. The hope is that they will get to the point where they can predict which worlds will be shrouded in clouds. "If we can then target planets that we think should be cloud-free, that will help us make optimal use of Hubble's time," she says.

Looking to the future, Knutson says there is only one more known super-Earth that can be targeted for atmospheric studies with current telescopes. But new surveys, such as NASA's extended Kepler K2 mission and the Transiting Exoplanet Survey Satellite (TESS), slated for launch in 2017, should identify a large sample of new targets.

Of course, she says, astronomers would love to study exoplanets the size of Earth, but these worlds are just a bit too small and too difficult to observe with Hubble and Spitzer. NASA's James Webb Space Telescope, which is scheduled for launch in 2018, will provide the first opportunity to study more Earth-like worlds. "Super-Earths are at the edge of what we can study right now," Knutson says. "But super-Earths are a good consolation prize—they're interesting in their own right, and they give us a chance to explore new kinds of worlds with no analog in our own solar system."

Writer: 
Kimm Fesenmaier