Clouds discovered on Saturn's moon Titan

Teams of astronomers at the California Institute of Technology and at the University of California, Berkeley, have discovered methane clouds near the south pole of Titan, resolving a fierce debate about whether clouds exist amid the haze of the moon's atmosphere.

The new observations were made using the W. M. Keck II 10-meter and the Gemini North 8-meter telescopes atop Hawaii's Mauna Kea volcano in December 2001. Both telescopes are outfitted with adaptive optics that provide unprecedented detail of features not seen even by the Voyager spacecraft during its flyby of Saturn and Titan.

The results are being published by the Caltech team in the December 19 issue of Nature and by the UC Berkeley and NASA Ames team in the December 20 issue of the Astrophysical Journal.

Titan is Saturn's largest moon, larger than the planet Mercury, and is the only moon in our solar system with a thick atmosphere. Like Earth's atmosphere, the atmosphere on Titan is mostly nitrogen. Unlike Earth, Titan is inhospitable to life due to the lack of atmospheric oxygen and its extremely cold surface temperatures (-183 degrees Celsius, or -297 degrees Fahrenheit). Along with nitrogen, Titan's atmosphere contains a significant amount of methane.

Earlier spectroscopic observations hinted at the existence of clouds on Titan, but gave no clue as to their location. These early data were hotly debated, since Voyager spacecraft measurements of Titan appeared to show a calm and cloud-free atmosphere. Furthermore, previous images of Titan had failed to reveal clouds, finding only unchanging surface markings and very gradual seasonal changes in the haziness of the atmosphere.

Improvements in the resolution and sensitivity achievable with ground-based telescopes led to the present discovery. The observations used adaptive optics, in which a flexible mirror rapidly compensates for the distortions caused by turbulence in Earth's atmosphere. These distortions are what cause the well-known twinkling of the stars. With adaptive optics, details as small as 300 kilometers across can be distinguished at Titan's enormous distance of 1.3 billion kilometers, the equivalent of reading an automobile license plate from 100 kilometers away.
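
To express the quoted resolution as an angle on the sky, here is a minimal Python sketch; the 300-kilometer feature size and 1.3-billion-kilometer distance are the figures from the text, and the rest is ordinary small-angle arithmetic:

    # Small-angle estimate of the resolution quoted above.
    feature_km = 300.0      # smallest feature distinguished on Titan (from the text)
    distance_km = 1.3e9     # approximate Earth-Titan distance (from the text)

    angle_rad = feature_km / distance_km       # small-angle approximation
    angle_arcsec = angle_rad * 206265.0        # radians to arcseconds

    print(f"Angular resolution: {angle_arcsec:.3f} arcseconds")
    # Comes out near 0.05 arcseconds, typical of near-infrared adaptive
    # optics on 8- to 10-meter telescopes.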

The images presented by the two teams clearly show bright clouds near Titan's south pole.

"We see the intensity of the clouds varying over as little as a few hours," said post-doctoral fellow Henry Roe, lead author for the UC Berkeley group. "The clouds are constantly changing, although some persist for as long as a few days."

Titan experiences seasons much as Earth does, though its year is 30 times longer because of Saturn's great distance from the sun. Titan is currently in the midst of southern summer, and the south pole has been in continuous sunlight for over six Earth years. The researchers believe this may explain the location of the large clouds.

"These clouds appear to be similar to summer thunderstorms on Earth, but formed of methane rather than water. This is the first time we have found such a close analogy to the Earth's atmospheric water cycle in the solar system," says Antonin Bouchez, one of the Caltech researchers.

In addition to the clouds above Titan's south pole, the Keck images, like previous data, reveal the bright continent-sized feature that may be a large icy highland on Titan's surface, surrounded by linked dark regions that are possibly ethane seas or tar-covered lowlands.

"These are the most spectacular images of Titan's surface which we've seen to date," says Michael Brown, associate professor of planetary astronomy and lead author of the Caltech paper. "They are so detailed that we can almost begin to speculate about Titan's geology, if only we knew for certain what the bright and dark regions represented."

In 2004, Titan will be visited by NASA's Cassini spacecraft, which will look for clouds on Titan during its multiyear mission around Saturn. "Changes in the spatial distribution of these clouds over the next Titan season will help pin down their detailed formation process," says Imke de Pater, professor of astronomy at UC Berkeley. The Cassini mission includes a probe named Huygens that will descend by parachute into Titan's atmosphere and land on the surface near the edge of the bright continent.

The team conducting the Gemini observations consists of Roe and de Pater from UC Berkeley, Bruce A. Macintosh of Lawrence Livermore National Laboratory, and Christopher P. McKay of the NASA Ames Research Center. The team reporting results from the Keck telescope consists of Brown and Bouchez of Caltech and Caitlin A. Griffith of the University of Arizona.

The Gemini observatory is operated by the Association of Universities for Research in Astronomy under a cooperative agreement with the National Science Foundation, involving NOAO/AURA/NSF as the U.S. partner. The W.M. Keck Observatory is operated by the California Association for Research in Astronomy, a scientific partnership between the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. This research has been funded in part by grants from NSF and NASA.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

New Theory Accounts for Existence of Binaries in Kuiper Belt

PASADENA, Calif.--In the last few years, researchers have discovered more than 500 objects in the Kuiper belt, a gigantic outer ring in the outskirts of the solar system, beyond the orbit of Neptune. Of these, seven so far have turned out to be binaries--two objects that orbit each other. The surprise is that these binaries all seem to be pairs of widely separated objects of similar size. This is surprising because more familiar pairings, such as the Earth/moon system, tend to be unequal in size and/or rather close together.

To account for these oddities, scientists from the California Institute of Technology have devised a theory of Kuiper belt binary formation. Their work is published in the December 12 issue of the journal Nature.

According to Re'em Sari, a senior research fellow at Caltech, the theory will be tested in the near future as additional observations of Kuiper belt objects are obtained and additional binaries are discovered. The other authors of the paper are Peter Goldreich, DuBridge Professor of Astrophysics and Planetary Physics at Caltech; and Yoram Lithwick, now a postdoc at UC Berkeley.

"The binaries we are more familiar with, like the Earth/moon system, resulted from collisions that ejected material," says Sari. "That material coalesced to form the smaller body. Then the interaction between the spin of the larger body and the orbit of the smaller body caused them to move farther and farther apart."

"This doesn't work for the Kuiper belt binaries," Sari says. "They are too far away from each other to have ever had enough spin for this effect to take place." The members of the seven binaries are about 100 kilometers in radius, but 10,000 to 100,000 kilometers from each other. Thus their separations are 100 to 1,000 times their radii. By contrast, Earth is about 400,000 kilometers from the moon, and about 6,000 kilometers in radius. Even at a distance of 60 times the radius of Earth, the tidal mechanism works only because the moon is so much less massive than Earth.

Sari and his colleagues think the explanation is that the Kuiper belt bodies tend to get closer together as time goes on -- exactly the reverse of the situation with the planets and their satellites, where the separations tend to increase. "The Earth/moon system evolves 'inside-out', but the Kuiper belt binaries evolved 'outside-in,'" explains Sari.

Individual objects in the Kuiper belt are thought to have formed in the early solar system by accretion of smaller objects. The region where the gravitational influence of a body dominates over the tidal forces of the sun is known as its Hill sphere; for a 100-kilometer body in the Kuiper belt, this extends to about a million kilometers. Large bodies can accidentally pass through one another's Hill spheres. Such encounters last a couple of centuries and, if no additional process intervenes, the "transient binary" dissolves and the two objects continue on separate orbits around the sun. To become bound, the transient binary must lose energy. The researchers estimate that in about 1 in 300 encounters, a third large body absorbs some of the energy and leaves a bound binary behind. An additional mechanism for energy loss is gravitational interaction with the sea of small bodies from which the large bodies were accreting; this interaction slows the large bodies down. In about 1 in 30 encounters, they slow down enough to become bound.
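
As a rough check on the Hill-sphere figure quoted above, here is a minimal Python sketch. The 100-kilometer radius comes from the text; the assumed ice-rock density and the 43-AU orbital distance are illustrative values, not numbers from the paper:

    import math

    # Assumed illustrative values (not from the paper):
    density = 1500.0            # kg/m^3, generic ice-rock mixture
    radius_m = 100e3            # 100-kilometer body (from the text)
    a_m = 43 * 1.496e11         # ~43 AU, a typical Kuiper belt distance
    M_sun = 1.989e30            # kg, mass of the sun

    mass = (4.0 / 3.0) * math.pi * radius_m**3 * density
    r_hill = a_m * (mass / (3.0 * M_sun)) ** (1.0 / 3.0)   # Hill radius

    print(f"Hill radius: about {r_hill / 1e3:,.0f} km")
    # Several hundred thousand kilometers, i.e. on the order of the
    # million-kilometer figure quoted above.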

Starting with a widely separated binary, about a million kilometers apart, continued interaction with the sea of small objects would have led to additional loss of energy, tightening the binary. The time required for the formation of individual objects is long enough for a binary orbit to shrink all the way to contact, and the research predicts that most binaries coalesced in this manner or at least became very tight. But if a binary formed relatively late, close to the time that accretion in the Kuiper belt ceased, it would survive as a widely separated pair. These are the objects we observe today. The mechanism predicts that about 5 percent of objects retain separations large enough to be observed as binaries, in agreement with recent surveys conducted by Caltech associate professor of planetary astronomy Mike Brown. The majority of objects ended up as tighter binaries, whose images cannot be distinguished from those of isolated objects when observed from Earth using existing instruments.

These ideas will be more thoroughly tested as additional objects are discovered and further data is collected. Further theoretical work could predict how the inclination of a binary orbit, relative to the plane of the solar system, evolves as the orbit shrinks. If it increases, this would suggest that the Pluto/Charon system, although tight, was also formed by the 'outside-in' mechanism, since it is known to have large inclination.

Writer: 
Robert Tindol

Earthbound experiment confirms theory accounting for sun's scarcity of neutrinos

PASADENA, Calif.- In the subatomic particle family, the neutrino is a bit like a wayward red-haired stepson. Neutrinos were detected long ago, and predicted to exist even longer ago, but everything physicists know about nuclear processes says there should be a certain number of neutrinos streaming from the sun, and nowhere near that many have been seen.

This week, an international team has revealed that the sun's apparent shortage of neutrinos is a real phenomenon, probably explainable by conventional theories of quantum mechanics, and not merely an observational quirk or something unknown about the sun's interior. The team, which includes experimental particle physicist Robert McKeown of the California Institute of Technology, bases its observations on experiments involving nuclear power plants in Japan.

The project is referred to as KamLAND because the neutrino detector is located at the Kamioka mine in Japan. Properly shielded from radiation from background and cosmic sources, the detector is optimized for measuring the neutrinos from all 17 nuclear power plants in the country.

Neutrinos are produced in the nuclear fusion process, when two protons fuse together to form deuterium, a positron (the positively charged antimatter equivalent of an electron), and a neutrino. The deuterium nucleus remains behind, and the positron eventually annihilates with an electron. The neutrino, being very unlikely to interact with matter, streams away into space.

Therefore, physicists would normally expect neutrinos to flow from the sun in much the same way that photons flow from a light bulb. In the case of the light bulb, the photons (or bundles of light energy) are thrown out radially and evenly, as if the surface of a surrounding sphere were being illuminated. And because the surface area of a sphere increases by the square of the distance, an observer standing 20 feet away sees only one-fourth the photons of an observer standing at 10 feet.
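
The inverse-square falloff described here is easy to verify with a couple of lines of Python (a minimal sketch of the light-bulb analogy, not anything specific to neutrinos):

    # Flux falls off as 1/r^2: doubling the distance quarters the flux.
    def relative_flux(distance_ft, reference_ft=10.0):
        return (reference_ft / distance_ft) ** 2

    print(relative_flux(10.0))   # 1.0  -- observer at 10 feet
    print(relative_flux(20.0))   # 0.25 -- observer at 20 feet sees one-fourth the photons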

Thus, observers on Earth expect to see a given number of neutrinos coming from the sun (assuming they know how many nuclear reactions are going on in the sun), just as they expect to know the luminosity of a light bulb at a given distance if they know the bulb's wattage. But such has not been the case. Carefully constructed experiments for detecting the elusive neutrinos have shown that there are far fewer neutrinos than there should be.

A theoretical explanation for this neutrino deficit is that the neutrino "flavor" oscillates between the detectable "electron" neutrino type and the heavier "muon" and perhaps "tau" neutrino types, neither of which these experiments can detect. Using quantum mechanics, physicists calculate that the fraction of detectable electron neutrinos changes in a steady rhythm, from 100 percent down to a small percentage and back again.

Therefore, the theory says that the reason we see only about half as many neutrinos from the sun as we should be seeing is because, outside the sun, about half the electron neutrinos are at that moment one of the undetectable flavors.
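
A minimal Python sketch of the standard two-flavor survival probability illustrates this picture. The formula is the textbook expression for electron-neutrino survival; the mixing parameters, baseline, and energy below are representative modern values chosen for illustration, not numbers reported by the KamLAND collaboration:

    import math

    def survival_probability(L_km, E_MeV, delta_m2_eV2, sin2_2theta):
        """Two-flavor electron (anti)neutrino survival probability:
        P = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E),
        with dm^2 in eV^2, L in meters, and E in MeV."""
        phase = 1.27 * delta_m2_eV2 * (L_km * 1e3) / E_MeV
        return 1.0 - sin2_2theta * math.sin(phase) ** 2

    # Representative (assumed) values: ~180 km baseline, ~4 MeV antineutrino
    # energy, dm^2 ~ 7e-5 eV^2, sin^2(2*theta) ~ 0.85.
    print(survival_probability(180.0, 4.0, 7e-5, 0.85))   # roughly 0.5

In a real reactor measurement the probability is averaged over the spread of antineutrino energies and reactor distances, which smooths the oscillation into an overall deficit.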

The triumph of the KamLAND experiment is that physicists for the first time can observe neutrino oscillations without making assumptions about the properties of the source of neutrinos. Because the nuclear power plants have a very precisely known amount of material generating the particles, it is much easier to determine with certainty whether the oscillations are real or not.

Actually, the fission process of the nuclear plants is different from the process in the sun in that the nuclear material breaks apart to form two smaller atoms, plus an electron and an antineutrino (the antimatter equivalent of a neutrino). But matter and antimatter are thought to be mirror-images of each other, so the study of antineutrinos from the beta-decays of the nuclear power plants should be exactly the same as a study of neutrinos.

"This is really a clear demonstration of neutrino disappearance," says McKeown. "Granted, the laboratory is pretty big-it's Japan-but at least the experiment doesn't require the observer to puzzle over the composition of astrophysical sources.

"Willy Fowler [the late Nobel Prize-winning Caltech physicist] always said it's better to know the physics to explain the astrophysics, rather than vice versa," McKeown says. "This experiment allows us to study the neutrino in a controlled experiment."

The results announced this week are taken from 145 days of data. The researchers detected 54 events during that time (an event being a collision of an antineutrino with a proton to form a neutron and a positron, ultimately producing a flash of light that could be measured with photon detectors). Theory predicted that about 87 antineutrino events would have been seen during that time if no oscillations occurred, but only about 54 events, given the reactors' average distance of 175 kilometers, if the oscillation is a real phenomenon.

According to McKeown, the experiment will run about three to five years, with experimentalists ultimately collecting data for several hundred events. The additional information should provide very accurate measurements of the energy spectrum predicted by theory when the neutrinos oscillate.

The experiment may also catch neutrinos if any supernovae occur in our galaxy, as well as neutrinos from natural events in Earth's interior.

In addition to McKeown's team at Caltech's Kellogg Radiation Lab, other partners in the study include the Research Center for Neutrino Science at Tohoku University in Japan, the University of Alabama, the University of California at Berkeley and the Lawrence Berkeley National Laboratory, Drexel University, the University of Hawaii, the University of New Mexico, Louisiana State University, Stanford University, the University of Tennessee, Triangle Universities Nuclear Laboratory, and the Institute of High Energy Physics in Beijing.

The project is supported in part by the U.S. Department of Energy.


Writer: 
Jill Perry

Caltech Professor to Explore Abrupt Climate Changes

PASADENA, Calif.—By analyzing stalagmites from caves in Sarawak, which is the Malaysian section of Borneo and the location of one of the world's oldest rain forests, and by studying deep-sea corals from the North Atlantic Ocean, California Institute of Technology researcher Jess Adkins will explore the vital link between the deep ocean, the atmosphere, and abrupt changes in global climates.

The project, "Linking the Atmosphere and the Deep Ocean during Abrupt Climate Changes," is funded by the Comer Science and Educational Foundation.

Because the Sarawak stalagmites and the deep-sea corals are uranium rich and can be dated precisely, and because they both accumulate continuously, uninterrupted by "bioturbation," the biological process that mixes the upper several centimeters of ocean sediments, they provide unique archives of climate history. By utilizing these archives, Adkins and his research group will be able to chart and link major climate variables, and thereby provide critical insight into understanding rapid climate changes that could impact the earth.

Adkins, an assistant professor of geochemistry and global environmental science, joined Caltech in 2000. He received his PhD in 1998 from the Massachusetts Institute of Technology/Woods Hole Oceanographic Institution joint program.

The Comer Science and Education Foundation was established to promote education and discovery through scientific exploration.

Contact: Deborah Williams-Hedges (626) 395-3227 debwms@caltech.edu

Visit the Caltech Media Relations Web site at: http://pr.caltech.edu/media

###

Writer: 
DWH

New study describes workings of deep ocean during the Last Glacial Maximum

Scientists know quite a bit about surface conditions during the Last Glacial Maximum (LGM), a period that peaked about 18,000 years ago, when ice covered significant portions of Canada and northern Europe.

But to really understand the mechanisms involved in climate change, scientists need to have detailed knowledge of the interaction between the ocean and the atmosphere. And until now, a key component of that knowledge has been lacking for the LGM because of limited understanding of the glacial deep ocean.

In a paper published in the November 29 issue of the journal Science, researchers from the California Institute of Technology and Harvard University report the first measurements for the temperature-salinity distribution of the glacial deep ocean. The results show unexpectedly that the basic mechanism of the distribution was different during icy times.

"You can think of the global ocean as a big bathtub, with the densest water at bottom and the lightest at top," explains Jess Adkins, an assistant professor of geochemistry and global environmental science at Caltech and lead author of the paper. Because water that is cold or salty--or both--is dense, it tends to flow downward in a vertical circulation pattern, much like water falling down the sides of the bathtub, until it finds its correct density level. In the ocean today, this circulation mechanism tends to be dominated by the temperature of the water.

In studying chloride data from four Ocean Drilling Program sites, the researchers found that the glacial deep ocean's circulation was set by the salinity of the water. A person walking along the ocean bottom from north to south 18,000 years ago would have found the water getting saltier along the way (within the margin of error, the northern and southern waters were the same temperature), which means the water in the north was less dense. The reverse is true today: deep waters at high southern latitudes are very cold and relatively fresh, while those at high northern latitudes are warmer and saltier.

Adkins says there is a good explanation for the change. The seawater "equation of state" dictates that, near the freezing point, the density of water is about two to three times more sensitive to changes in salinity, relative to changes in temperature, than today's warmer deep waters are.

So the equation demands that the density layering of the ocean "bathtub" was set by the water's salt content at the Last Glacial Maximum. Temperature still matters, in that colder waters are more sensitive to salinity changes than warmer waters, but Adkins's results show that the deep-water circulation mechanism must have operated in a fundamentally different manner in the past.

"This observation of the deep ocean seems like a strange place to go to study Earth's climate, but this is where you find most of the mass and thermal inertia of the climate system," Adkins says.

The ocean's water temperature enters into the complex mechanism affecting climate, with water moving about as the ocean works to equalize its temperature. In addition, the water and air interact, further complicating the equation.

Thus, the results from the glacial deep ocean show that the climate in those days was operating in a very different way, Adkins says. "Basically, the purpose of this study is to understand the mechanisms of climate change."

In addition to Adkins, the other authors are Katherine McIntyre, a postdoctoral scholar in geochemistry at Caltech; and Daniel P. Schrag of the Department of Earth and Planetary Sciences at Harvard University.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Rupture of Denali fault responsible for 7.9-magnitude Alaskan earthquake of November 3

Geologists just back from a reconnaissance of the 7.9-magnitude Alaska earthquake of November 3 confirm that rupture of the Denali fault was the principal cause of the quake.

According to Caltech geology professor Kerry Sieh, Central Washington University geological sciences professor Charles Rubin, and Peter Haeussler of the U.S. Geological Survey, investigations over a week-long period revealed three large ruptures with a total length of about 320 kilometers. The principal rupture was a 210-kilometer-long section of the Denali fault, with horizontal shifts of up to nearly 9 meters (26 feet). This places the rupture in the same class as those that produced the San Andreas fault's two historical great earthquakes, in 1906 and 1857. Together, these three ruptures are the largest such events in the Western Hemisphere in at least the past 150 years.
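
As a rough consistency check, the rupture dimensions quoted above imply a moment magnitude near 7.9. The Python sketch below uses the standard seismic-moment relation; the rigidity, rupture depth, and average slip are assumed illustrative values, not measurements reported by the team:

    import math

    # Assumed illustrative values (not from the field report):
    rigidity = 3.0e10      # Pa, typical crustal shear modulus
    length_m = 320e3       # total rupture length (from the text)
    depth_m = 15e3         # assumed seismogenic depth
    avg_slip_m = 5.0       # assumed average slip (maximum observed was ~9 m)

    moment = rigidity * length_m * depth_m * avg_slip_m   # seismic moment, N*m
    mw = (2.0 / 3.0) * (math.log10(moment) - 9.1)         # moment magnitude

    print(f"Mw ~ {mw:.1f}")   # roughly 7.8, in line with the observed magnitude of 7.9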

Like California's San Andreas, the Denali is a strike-slip fault, which means that the blocks on either side of the fracture move sideways relative to one another. Over millions of years, the cumulative effect of tens of thousands of large shifts has been to move southern Alaska tens of kilometers westward relative to the rest of the state. These shifts have produced a set of large aligned valleys that arch through the middle of the snowy Alaska range, from the Canadian border on the east to the foot of Mount McKinley on the west. Along much of its length the great fracture traverses large glaciers. Surprisingly, the fault broke up through the glaciers, offsetting large crevasses and rocky ridges within the ice.

At the crossing of the Trans-Alaska pipeline, approximately in the center of the 320-kilometer rupture, the horizontal shift was about 4 meters. Fortunately, geological studies of the fault prior to construction led to a special design that could accommodate shifts even greater than this without failure of the pipeline.

The earthquake shook loose thousands of snow avalanches and rock falls in the rugged terrain adjacent to the fault. Although most of these measured only a few tens of meters in dimension, many were much larger. In some places enormous blocks of rock and ice fell onto glaciers and valley floors, skidding a kilometer or more out over ice, stream, and tundra.

The team of investigators included geologists from several organizations, including Caltech's Division of Geological and Planetary Sciences, the U.S. Geological Survey, Central Washington University, and the University of Alaska. The range is traversed by just two highways, so the scientists used helicopters to reach the fault ruptures in the remote, rugged terrain.

Before departing for the field, the geologists had learned from seismologists the basic character of the rupture. Within a day of the quake, Caltech seismologist Chen Ji had determined that the shift along the fault was principally horizontal, but that the initial 20 seconds of the eastward-propagating crack was along a fault with vertical motion. This fault was discovered midweek, near the western end of the principal horizontal shift. Along this 40-kilometer-long fault, a portion of the Alaska range has risen several meters.

Perhaps the most surprising discovery in the field was that the fault rupture propagated only eastward from the epicenter and left the western half of the great fault unbroken. Several members of the team wonder if, in fact, this great earthquake is the first in a series of large events that will eventually include breaks farther west toward Mount McKinley and Denali National Park.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Cellular choreography, not molecular prepattern, creates repeated segments of vertebrate embryo

In a study that combines state-of-the-art biological imaging with gene expression analysis, scientists at the California Institute of Technology have uncovered a fundamental insight into the way embryonic cells and tissue move about to form key structures along the vertebrate axis. The study, which could lead to a better understanding of human development, takes advantage of the accessibility of chick embryos to embryonic manipulation.

The study by Caltech biologists Scott Fraser and Paul Kulesa, appearing in the November 1 issue of the journal Science, centers on segments known as somites, which form along either side of the future spinal cord of an embryo. Somites give rise to mature structures such as ribs, individual vertebrae, and even skin. The key role of somite segmentation in the patterning of the nervous system and the vertebral column has long been known, but precisely how an individual somite buds off from a block of tissue, in a pattern repeated all along the animal's torso from head to tail, is poorly understood.

"Developmental biologists have had a difficult time getting a handle on how cell movements and gene expression patterns are coordinated to form complex structures, in this case the segmented units called somites," says Kulesa, a postdoctoral scholar in Fraser's lab and lead author of the paper. "The problems have been due to limitations in obtaining cellular resolution of tissue deep within living vertebrate embryos and difficulty in coordinating the cell movements and tissue shaping in living tissue with gene expression patterns typically obtained at one time point from fixed, non-living tissue."

The new insight of the paper is that the factors determining the embryo's ultimate form, as well as the eventual position of its cells, involve a complicated set of motions of the cells themselves. Previous models of embryonic patterning had suggested that a molecular prepattern subdivided the tissues, somewhat like a "paint-by-numbers" piece of art. The study instead shows a more complex coordination between physical forces within the tissue and gene expression patterns, which together determine where an embryonic cell will go and what type of structure it will help form.

Kulesa and Fraser's study was made possible by a new culture technique combined with confocal time-lapse microscopy, an advanced form of imaging that allows the tissue of a living, developing embryo to be studied in intricate detail at the cellular level. Time-lapse imaging involves first labeling the tissue so that it fluoresces when exposed to laser light, then scanning a laser through the tissue, then reconstructing the fluorescent patterns of individual cells to form a three-dimensional microscopic image. The laser scans over the tissue of the developing embryo every minute or so, allowing the researchers to assemble the hundreds of images taken during a several-hour run into a time-lapse video.
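
In outline, the image processing amounts to collecting a stack of optical sections at each time point, collapsing each stack into a single view, and stringing the views together as movie frames. A minimal, idealized Python sketch (with synthetic arrays standing in for the confocal data, and a maximum-intensity projection as one common way to collapse a stack):

    import numpy as np

    def frame_from_stack(z_stack):
        """Collapse a 3-D stack of optical sections (z, y, x) into one
        2-D view using a maximum-intensity projection."""
        return z_stack.max(axis=0)

    # Toy run: 100 time points, each a 16-slice stack of 64x64 images.
    rng = np.random.default_rng(0)
    movie = [frame_from_stack(rng.random((16, 64, 64))) for _ in range(100)]
    print(len(movie), movie[0].shape)   # 100 frames, each 64x64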

Using fertilized eggs, the researchers placed an embryo into a specially designed chamber that allows high-resolution time-lapse imaging, and afterwards performed gene expression analyses on the same embryo. Thus, they were both videotaping cell movements for 6 to 12 hours and analyzing the expression of several genes, including EphA4 and c-Meso1, both thought to play a role in determining future somite boundary sites.

The results showed that the straight-line patterns of gene expression, which were thought to correlate with a simple, periodic slicing of the tissue into blocks, did not predict the complex cell movements. Time-lapse imaging showed that a ball-and-socket separation of tissue takes place in a series of six repeatable steps.

"It turns out that a somite pulls apart from the block of tissue, and cells move in anterior and posterior directions near the forming somite boundary," Kulesa says. "This is contrary to many models of somite segmentation which assume that gene expression boundaries that correlate with presumptive somite boundaries allocate cells into a particular block with very little cell movement.

"This study tells us that we have to be careful about assuming that gene expression patterns strictly determine a cell's fate and position."

Kulesa says the next step is to do the work in mouse embryos, which pose considerably more difficult challenges for developmental imaging, but have a tremendous advantage over chick-embryo imaging in attempting to isolate the role of key genes through gene manipulation.


Writer: 
Robert Tindol

Caltech scientists find largest object in solar system since Pluto's discovery

Planetary scientists at the California Institute of Technology have discovered a spherical body in the outskirts of the solar system. The object circles the sun every 288 years, is half the size of Pluto, and is larger than all of the objects in the asteroid belt combined.

The object has been named "Quaoar" (pronounced KWAH-o-ar) after the creation force of the Tongva tribe, the original inhabitants of the Los Angeles basin, where the Caltech campus is located. Quaoar lies about 4 billion miles from Earth in a region beyond the orbit of Pluto known as the Kuiper belt. This is the region where comets originate and where planetary scientists have long expected to eventually find larger planet-shaped objects such as Quaoar. The object, whose discovery was announced today at the meeting of the Division of Planetary Sciences of the American Astronomical Society in Birmingham, Alabama, is by far the largest body found so far in that search.
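
The quoted distance and orbital period are consistent with Kepler's third law, as a minimal Python sketch shows. The 4-billion-mile figure is from the text; treating the Earth-centered and sun-centered distances as roughly the same at this range is the only simplification:

    # Kepler's third law for a body orbiting the sun: P[years] = a[AU] ** 1.5
    miles_per_au = 92.96e6
    distance_au = 4.0e9 / miles_per_au     # 4 billion miles (from the text), ~43 AU
    period_years = distance_au ** 1.5

    print(f"{distance_au:.0f} AU -> orbital period ~ {period_years:.0f} years")
    # Gives roughly 280-290 years, in line with the ~288-year period quoted above.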

Currently detectable a few degrees northwest of the constellation Scorpio, Quaoar demonstrates beyond a doubt that large bodies can indeed be found in the farthest reaches of the solar system. Further, the discovery provides hope that additional large bodies in the Kuiper belt will be discovered, some as large, or even larger than Pluto. Also, Quaoar and other bodies like it should provide new insights into the primordial materials that formed the solar system some 5 billion years ago.

The discovery further supports the ever-growing opinion that Pluto itself is a Kuiper belt object. According to recent interpretations, Pluto was the first Kuiper belt object to be discovered, long before the age of enhanced digital techniques and charge-coupled device (CCD) cameras, because it had been kicked into a Neptune-crossing elliptical orbit eons ago.

"Quaoar definitely hurts the case for Pluto being a planet," says Caltech planetary science associate professor Mike Brown. "If Pluto were discovered today, no one would even consider calling it a planet because it's clearly a Kuiper belt object."

Brown and Chad Trujillo, a postdoctoral researcher, first detected Quaoar on a digital sky image taken on June 4 with Palomar Observatory's 48-inch Oschin Telescope. The researchers looked through archived images taken by a variety of instruments and soon found images taken in the years 1983, 1996, 2000, and 2001. These images not only allowed Brown and Trujillo to establish the distance and orbital inclination of Quaoar, but also to determine that the body is revolving around the sun in a remarkably stable, circular orbit.

"It's probably been in this same orbit for 4 billion years," Brown says.

The discovery of Quaoar is not so much a triumph of advanced optics as of modern digital analysis and a deliberate search methodology. In fact, Quaoar apparently was first photographed in 1982 by then-Caltech astronomer Charlie Kowal in a search for the postulated "Planet X." Kowal unfortunately never found the object on the plate—much less Planet X—but left the image for posterity.

Because Quaoar appears on the old plates at precisely the predicted locations, its orbit is thought to be quite circular for a solar system body, and far more circular than Pluto's. Pluto, in fact, is relatively easy to spot, at least if one knows where to look. Because Pluto comes so close to the sun for several years of its 248-year eccentric orbit, the volatile substances in its atmosphere are periodically heated, increasing the body's reflectance, or albedo, to such a degree that it is bright enough to be seen even in small amateur telescopes.

Quaoar, on the other hand, never approaches the sun in its circular orbit, which means that the volatile gases never are excited enough to kick up a highly reflective atmosphere. As is the case for other bodies of similar rock-and-ice composition, Quaoar's surface has been bathed by faint ultraviolet radiation from the sun over the eons, and this radiation has slowly caused the organic materials on the body's surface to turn into a dark tar-like substance.

As a result, Quaoar's albedo is about 10 percent, just a bit higher than that of the moon. By contrast, Pluto's albedo is 60 percent.

As for spin rate, the researchers know that Quaoar is rotating because of slight variations in reflectance over the six weeks they have observed the body, but they are still collecting data to determine the precise rate. They will also probably be able to figure out whether the spin axis is tilted relative to the plane of the ecliptic.

The inclination is about 7.9 degrees, which means that the plane of Quaoar's orbit is tilted by 7.9 degrees from the relatively flat orbital plane in which all the planets except Pluto are found. Pluto's orbital inclination is about 17 degrees, which presumably resulted from whatever gravitational interference originally thrust it into an elliptical orbit.

Quaoar's orbital inclination of 7.9 degrees is not particularly surprising, Brown says, because the Kuiper belt is turning out to be wider than originally expected. The Kuiper belt can be thought of as a band extending around the sky, superimposed on the path of the sun. Brown and Trujillo's research, in effect, is to take repeated exposures of a several-degree swath of this band and then use digital equipment to check and see if any tiny point of light has moved relative to the stellar background.
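
The search strategy described here boils down to comparing registered exposures of the same field and flagging any point of light that shifts between them. A minimal, idealized Python sketch (using numpy, with a synthetic pair of frames standing in for the survey images):

    import numpy as np

    def moving_source_mask(frame1, frame2, threshold):
        """Difference two registered exposures of the same field.
        Fixed stars cancel; a source that moved leaves a residual at its
        old and new positions."""
        return np.abs(frame2 - frame1) > threshold

    # Toy example: one fixed "star" and one "object" that moves by a pixel.
    frame1 = np.zeros((5, 5))
    frame2 = np.zeros((5, 5))
    frame1[1, 1] = frame2[1, 1] = 100.0   # fixed star: cancels in the difference
    frame1[3, 2] = 50.0                   # moving object, first exposure
    frame2[3, 3] = 50.0                   # moving object, second exposure

    print(np.argwhere(moving_source_mask(frame1, frame2, 10.0)))  # flags (3, 2) and (3, 3)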

Brown and Trujillo are currently using about 10 to 20 percent of the available time on the 48-inch Oschin Telescope, which was used to obtain both the Palomar Sky Survey and the more recent Palomar Digital Sky Survey. The latter was completed just last year, thus freeing up the Oschin Telescope to be refitted by the Jet Propulsion Laboratory for a new mission to search for near-Earth asteroids. About 80 percent of the telescope time is now designated for the asteroid survey, leaving the remainder for scientific studies like Brown and Trujillo's.

Since the discovery, the researchers have also employed other telescopes to study and characterize Quaoar, including the Hubble Space Telescope and the Keck Observatory on Mauna Kea, Hawaii. Information derived from these studies will provide new insights into the precise composition of Quaoar and may answer questions about whether the body has a tenuous atmosphere.

But the good news for the serious amateur astronomer is that he or she doesn't necessarily need a space telescope or 10-meter reflector to get a faint image of Quaoar. Armed with precise coordinates and a 16-inch telescope fitted with a CCD camera—the kind advertised in magazines such as Sky and Telescope and Astronomy—an amateur should be able to obtain images on successive nights that will show a faint dot of light in slightly different positions.

As for Brown and Trujillo, the two are continuing their search for other large Kuiper-belt bodies. Some, in fact, may be even larger than Quaoar.

"Right now, I'd say they get as big as Pluto," says Brown.


Writer: 
EN

Caltech researchers devise new microdevice for fluid analysis

Researchers at the California Institute of Technology announced today a new paradigm for large-scale integration of microfluidic devices. Using new techniques, they built chips with as many as 6,000 microvalves and up to 1,000 tiny individual chambers.

The technology is being commercialized by Fluidigm in San Francisco, which is using multi-layer soft lithography (MSL) techniques to create microfluidic chips to run the smallest-volume polymerase chain reactions documented—20,000 parallel reactions at volumes of 100 picoliters.

In a paper to appear in the journal Science, Caltech associate professor of applied physics and physics Stephen Quake and his colleagues describe the research on picoliter-scale chambers. Quake's team describes the 1,000 individually addressable chambers and demonstrates, on a separate device with more than 2,000 microvalves, that two different reagents can be loaded separately to perform distinct assays in subnanoliter chambers and that the contents of a single chamber can then be recovered.

According to Quake, who cofounded Fluidigm, the devices should have many new scientific, commercial, and biomedical applications. "We now have the tools in hand to design complex microfluidic systems and, through switchable isolation, recover contents from a single chamber for further investigation."

"Together, these advancements speak to the power of MSL technology to achieve large-scale integration and the ability to make a commercial impact in microfluidics," said Gajus Worthington, President and CEO of Fluidigm. "PCR is the cornerstone of genomics applications. Fluidigm's microprocessor, coupled with the ability to recover results from the chip, offers the greatest level of miniaturization and integration of any platform," added Worthington.

Fluidigm hopes to leverage these advancements as it pursues genomics and proteomics applications. Fluidigm has already shipped a prototype product for protein crystallization that transforms decades-old methodologies to a chip-based format, vastly reducing sample input requirements and improving cost and labor by orders of magnitude.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Humans and chimps have 95 percent DNA compatibility, not 98.5 percent, research shows

Genetic studies for decades have estimated that humans and chimpanzees possess genomes that are about 98.5 percent similar. In other words, of the three billion base pairs along the DNA helix, nearly 99 of every 100 would be exactly identical.

However, new work by one of the co-developers of the method used to analyze genetic similarities between species says the figure should be revised downward to 95 percent.

Roy Britten, a biologist at the California Institute of Technology, reports in the current issue of the journal Proceedings of the National Academy of Sciences that the large amount of sequencing that has been done in recent years on both the human and chimp genomes—and improvements in the techniques themselves—allow for the issue to be revisited. In the article, he describes the method he used, which involved writing a special computer program to compare nearly 780,000 base pairs of the human genome with a similar number from the chimp genome.

To describe exactly what Britten did, it is helpful to explain the old method as it was originally used to determine genetic similarities between two species. Called hybridization, the method involved collecting tiny snips of the DNA helix from the chromosomes of the two species to be studied, then breaking the ladder-like helixes apart into strands. Strands from one species would be radioactively labeled, and strands from the two species then allowed to recombine into hybrid helixes.

The helix at this point would contain one strand from each species, and from there it was a fairly straightforward matter to "melt" the strands to infer the number of good base pairs. The lower the melting temperature, the less compatibility between the two species because of the lower energy required to break the bonds.

In the case of chimps and humans, numerous studies through the years have shown that there is an incidence of 1.2 to 1.76 percent base substitutions. This means that these are areas along the helix where the bases (adenine, thymine, guanine, and cytosine) do not correspond and hence do not form a bond at that point.

The problem with the old studies is that the methods did not recognize differences due to events of insertion and deletion that result in parts of the DNA being absent from the strands of one or the other species. These are different from the aforementioned substitutions. Such differences, called "indels," are readily recognized by comparing sequences, if one looks beyond the missing regions for the next regions that do match.

To accomplish the more complete survey, Britten wrote a Fortran program that did custom comparisons of strands of human and chimp DNA available from GenBank. With nearly 780,000 suitable base pairs available to him, Britten was able to better infer where the mismatches would actually be seen if an extremely long strand could be studied. Thus, the computer technique allowed Britten to look at several long strands of DNA with 780,000 potential base pairings.

As expected, he found a base substitution rate of about 1.4 percent—well in keeping with earlier reported results—but also an incidence of 3.9 percent divergence attributable to the presence of indels. Thus, he came up with the revised figure of 5 percent.
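
As a toy illustration of the kind of tally involved, here is a minimal Python sketch that counts substitution sites and indel sites in a short made-up alignment; it is not a reconstruction of Britten's Fortran program, just the bookkeeping idea:

    def divergence_counts(seq_a, seq_b):
        """Given two aligned sequences of equal length ('-' marks a gap),
        return the substitution fraction and the indel fraction."""
        assert len(seq_a) == len(seq_b)
        substitutions = indels = aligned = 0
        for a, b in zip(seq_a, seq_b):
            if a == '-' or b == '-':
                indels += 1
            else:
                aligned += 1
                if a != b:
                    substitutions += 1
        return substitutions / aligned, indels / len(seq_a)

    # Tiny made-up alignment: one substitution and two gap positions.
    sub_rate, indel_rate = divergence_counts("ACGTAC--GT", "ACGTGCTAGT")
    print(sub_rate, indel_rate)   # 0.125 and 0.2 for this toy example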

As for the implications, Britten says the new result should help biologists in future studies of precisely how species branch off from each other, and why. "The basic question you would like to answer is what makes the chimp different from humans—what were the basic changes in the genome that mattered.

"A large number of these 5 percent of variations are relatively unimportant. But what matters, according to everyone's idea, is regulation of the genes, which is controlled by the genes that are actually expressed. So to address this issue, you first have to know how different the genomes are, and second, where the differences are located.

The article is available from PNAS by contacting Jill Locantore, the public information officer, at jlocantore@nas.edu, or by calling 202-334-1310.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT
