Caltech Scientists Gain Fundamental Insight into How Cells Protect Genetic Blueprints

PASADENA, Calif.—Molecular biologists have known for some time that there is a so-called checkpoint control mechanism that keeps our cells from dividing until they have copied all the DNA in their genetic code. Similar mechanisms prevent cells from dividing with damaged DNA, such as the damage that forms in skin cells after a sunburn. Without such genetic fidelity mechanisms, cells would divide with missing or defective genes.

Now, a California Institute of Technology team has uncovered new details of how these checkpoints work at the molecular level.

Reporting in the March 10 issue of the journal Cell, Caltech senior research associate Akiko Kumagai and her colleagues show that a protein with the unusual name "TopBP1" is responsible for activating the cascade of reactions that prohibit cells from dividing with corrupted genetic blueprints. The researchers say that their result is a key molecular insight, and could someday lead to breakthroughs in cancer therapy.

"The function of the checkpoint control mechanisms is to preserve the integrity of the genome," says William Dunphy, the corresponding author of the paper and a professor of biology at Caltech. "When these genetic fidelity mechanisms do not function properly, it can lead to cancer and ultimately death."

The research began with a study of a protein called ATR that was known to be a key regulator of checkpoint responses. This protein is a vital component of every eukaryotic cell (in other words, the cells of nearly all organisms on Earth except bacteria and archaea). ATR is a "kinase," an enzyme that controls other proteins by modifying them with phosphate groups.

However, no one knew how the cell turns on this enzymatic activity of ATR when needed. How ATR becomes activated to protect against mutations has been one of the field's most urgent questions for the past decade.

Acting on a hunch, the researchers decided to look at the TopBP1 protein, whose molecular function was hitherto mysterious. Strikingly, the team found that purified TopBP1 could bind directly to ATR and activate it. The activation was so quick and robust that the researchers knew immediately that they had found the long-sought activator of ATR and deciphered how cells mobilize their efforts to prevent mutations. Interestingly, the researchers found that only a small part of TopBP1 is necessary for activating ATR.

The researchers suspect that the remaining parts of TopBP1 hold additional secrets about checkpoint control mechanisms. Dunphy says that this molecular insight shows how a cancer-repressive mechanism works in a healthy cell. "Knowing how the normal system works might also help lead to insight on how to fix the system when it gets broken," he adds.

In addition to Kumagai and Dunphy, the other authors of the Cell paper are Joon Lee and Hae Yong Yoo, both senior research fellows at Caltech.

Writer: 
Robert Tindol

Old-World Primates Evolved Color Vision to Better See Each Other Blush, Study Reveals

PASADENA, Calif.—Your emotions can easily be read by others when you blush—at least by others familiar with your skin color. What's more, the blood rushing out of your face when you're terrified is just as telling. And when it comes to our evolutionary cousins the chimpanzees, they can see color changes not only in each other's faces, but in each other's rumps as well.

Now, a team of California Institute of Technology researchers has published a paper suggesting that we primates evolved our particular brand of color vision so that we could subtly discriminate slight changes in skin tone due to blushing and blanching. The work may answer a long-standing question about why trichromatic vision (that is, color perception via three classes of cone receptors) evolved in primates in the first place.

"For a hundred years, we've thought that color vision was for finding the right fruit to eat when it was ripe," says Mark Changizi, a theoretical neurobiologist and postdoctoral researcher at Caltech. "But if you look at the variety of diets of all the primates having trichromat vision, the evidence is not overwhelming."

Reporting in the current issue of the journal Biology Letters, Changizi and his coauthors show that our color cones are optimized to be sensitive to subtle changes in skin tone due to varying amounts of oxygenated hemoglobin in the blood.

The spectral sensitivity of the color cones is somewhat odd, Changizi says. Bees, for example, have three color cones that are evenly spread across the visible spectrum, with the high-frequency end extending into the ultraviolet. Birds have four color cones that are also evenly distributed across the visible spectrum.

The old-world primates, by contrast, have an "S" cone at about 440 nanometers (the wavelength of visible light roughly corresponding to blue light), an "M" cone sensitive at slightly less than 550 nanometers, and an "L" cone sensitive at slightly above 550 nanometers.

"This seems like a bad idea to have two cones so close together," Changizi says. "But it turns out that the closeness of the M and L cone sensitivities allows for an additional dimension of sensitivity to spectral modulation. Also, their spacing maximizes sensitivity for discriminating variations in blood oxygen saturation." As a result, a very slight lowering or rising of the oxygen in the blood is easily discriminated by any primate with this type of cone arrangement.

In fact, trichromatic vision is sensitive not only to these subtle changes in color, but also to the presence or absence of blood in the skin. As a result, primates with trichromatic vision can tell not only whether a potential partner is flushed with emotion in anticipation of mating, but also whether the blood has drained from an enemy's face in fear.

"Also, ecologically, when you're more oxygenated, you're in better shape," Changizi adds, explaining that a naturally rosy complexion might be a positive thing for purposes of courtship.

Bolstering the hypothesis is the fact that the old-world trichromats tend to be bare-faced and bare-rumped as well. "There's no sense in being able to see the slight color variations in skin if you can't see the skin," Changizi says. "And what we find is that the trichromats have bare spots on their faces, while the dichromats have furry faces."

"This could connect up with why we're the 'naked ape,'" he concludes. The few human spots that are not capable of signaling, because they are in secluded regions, tend to be hairy: the top of the head, the armpits, and the crotch. And when the groin does exhibit bare skin, it tends to occur in circumstances in which a potential mate can see that region.

"Our speculation is that the newly bare spots are for color signaling."

The other authors of the paper are Shinsuke Shimojo, a professor of biology at Caltech who specializes in psychophysics; and Qiong Zhang, an undergraduate at Caltech.

Writer: 
Robert Tindol

Study of 2004 Tsunami Disaster Forces Rethinking of Theory of Giant Earthquakes

PASADENA, Calif.—The Sumatra-Andaman earthquake of December 26, 2004, was one of the worst natural disasters in recent memory, mostly on account of the devastating tsunami that followed it. A group of geologists and geophysicists, including scientists at the California Institute of Technology, has delineated the full dimensions of the fault rupture that caused the earthquake.

Their findings, reported in the March 2 issue of the journal Nature, suggest that previous ideas about where giant earthquakes are likely to occur need to be revised. Regions of the earth previously thought to be immune to such events may actually be at high risk of experiencing them.

Like all giant earthquakes, the 2004 event occurred on a subduction megathrust, in this case the Sunda megathrust, a giant earthquake fault along which the Indian and Australian tectonic plates are diving beneath the margin of Southeast Asia. The fault surface that ruptured cannot be seen directly because it lies several kilometers deep in the Earth's crust, largely beneath the sea.

Nevertheless, the rupture of the fault caused movements at the surface as long-accumulating elastic strain was suddenly released. The researchers measured these surface motions by three different techniques. In one, they measured the shift in position of GPS stations whose locations had been accurately determined prior to the earthquake.

In the second method, they studied giant coral heads on island reefs: the top surfaces of these corals normally lie right at the water surface, so the presence of corals with tops above or below the water level indicated that the Earth's crust rose or fell by that amount during the earthquake.

Finally, the researchers compared satellite images of island lagoons and reefs taken before and after the earthquake: changes in the color of the seawater or reefs indicated a change in the water's depth and hence a rise or fall of the crust at that location.

On the basis of these measurements the researchers found that the 2004 earthquake was caused by rupture of a 1,600-kilometer-long stretch of the megathrust, by far the longest of any recorded earthquake. The breadth of the contact surface that ruptured ranged up to 150 kilometers. Over this huge contact area, the surfaces of the two plates slid against each other by up to 18 meters.

On the basis of these data, the researchers calculated that the so-called moment-magnitude of the earthquake (a measure of the total energy released) was 9.15, making it the third largest earthquake of the past 100 years and the largest yet recorded in the few decades of modern instrumentation.
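The magnitude quoted above can be roughly reproduced from the rupture dimensions the team reports. Below is a minimal sketch: the rigidity value (30 GPa, a typical crustal figure) and the 9-meter average slip are assumptions for illustration, since the article gives only the 18-meter maximum slip.

```python
import math

# Seismic moment M0 = rigidity x rupture area x average slip
mu = 30e9            # rigidity in Pa (typical crustal value; assumed)
length = 1600e3      # rupture length in m (from the article)
width = 150e3        # maximum rupture breadth in m (from the article)
avg_slip = 9.0       # average slip in m (assumed; article gives an 18 m maximum)

M0 = mu * length * width * avg_slip        # seismic moment, N*m

# Moment magnitude: Mw = (2/3) * (log10(M0) - 9.1)
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)
print(round(Mw, 2))   # roughly 9.1, close to the reported 9.15
```

With these round numbers the sketch lands within a few hundredths of the published 9.15, which illustrates why the enormous rupture length and slip imply such a large magnitude.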

"This earthquake didn't just break all the records, it also broke some of the rules," says Kerry Sieh, who is the Sharp Professor of Geology at Caltech and one of the authors of the Nature paper.

According to previous understanding, subduction megathrusts can only produce giant earthquakes if the oceanic plate is young and buoyant, so that it locks tightly against the overriding continental plate and resists rupture until an enormous amount of strain has accumulated.

Another commonly accepted idea is that the rate of relative motion between the colliding plates must be high for a giant earthquake to occur. Both these conditions are true off the southern coast of Chile, where the largest earthquake of the past century occurred in 1960. They are also true off the Pacific Northwest of the United States, where a giant earthquake occurred in 1700 and where another may occur before long.

But at the site of the 2004 Sumatra-Andaman earthquake the oceanic crust is old and dense, and the relative motion between the plates is quite slow. Yet another factor that should have lessened the likelihood of a giant earthquake in the Indian Ocean is the fact that the oceanic crust is being stretched by formation of a so-called back-arc basin off the continental margin.

"For all these reasons, received wisdom said that the giant 2004 earthquake should not have occurred," says Jean-Philippe Avouac, a Caltech professor of geology, who is also a contributor to the paper. "But it did, so received wisdom must be wrong. It may be, for example, that a slow rate of motion between the plates simply causes the giant earthquakes to occur less often, so we didn't happen to have seen any in recent times, until 2004."

Many subduction zones that were not considered to be at risk of causing giant earthquakes may need to be reassessed as a result of the 2004 disaster. "For example, the Ryukyu Islands between Taiwan and Japan are in an area where a large rupture would probably cause a tsunami that would kill a lot of people along the Chinese coast," says Sieh.

"And in the Caribbean, it could well be an error to assume that the entire subduction zone from Trinidad to Barbados and Puerto Rico is aseismic. The message of the 2004 earthquake to the world is that you shouldn't assume that your subduction zone, even though it's quiet, is incapable of generating great earthquakes."

According to Sieh, it's not that all subduction zones should now be assigned a high risk of giant earthquakes, but that better monitoring systems (networks of continuously recording GPS stations, for example) should be put in place to assess their seismic potential.

"For most subduction zones, a $1 million GPS system would be adequate," says Sieh. "This is a small price to pay to assess the level of hazard and to monitor subduction zones with the potential to produce a calamity like the Sumatra-Andaman earthquake and tsunami. Caltech's Tectonics Observatory has, for example, begun to monitor the northern coast of Chile, where a giant earthquake last occurred in 1877."

In addition to Sieh and Avouac, the other authors of the Nature paper are Cecep Subarya of the National Coordinating Agency for Surveys and Mapping in Cibinong, Indonesia; Mohamed Chlieh and Aron Meltzner, both of Caltech's Tectonics Observatory; Linette Prawirodirdjo and Yehuda Bock, both of the Scripps Institution of Oceanography; Danny Natawidjaja of the Indonesian Institute of Sciences; and Robert McCaffrey of Rensselaer Polytechnic Institute.

Writer: 
Robert Tindol

Andromeda's Stellar Halo Shows Galaxy's Origin to Be Similar to That of Milky Way

PASADENA, Calif.—For the last decade, astronomers have thought that the Andromeda galaxy, our nearest large galactic neighbor, was rather different from the Milky Way. But a group of researchers has determined that the two galaxies are probably quite similar in the way they evolved, at least over their first several billion years.

In an upcoming issue of the Astrophysical Journal, Scott Chapman of the California Institute of Technology, Rodrigo Ibata of the Observatoire de Strasbourg, and their colleagues report that their detailed studies of the motions and metals of nearly 10,000 stars in Andromeda show that the galaxy's stellar halo is "metal-poor." In astronomical parlance, this means that the stars lying in the outer bounds of the galaxy are largely lacking in elements heavier than hydrogen and helium.

This is surprising, says Chapman, because one of the key differences thought to exist between Andromeda and the Milky Way was that the former's stellar halo was metal-rich and the latter's was metal-poor. If both galaxies are metal-poor, then they must have had very similar evolutions.

"Probably, both galaxies got started within a half billion years of the Big Bang, and over the next three to four billion years, both were building up in the same way by protogalactic fragments containing smaller groups of stars falling into the two dark-matter haloes," Chapman explains.

While no one yet knows what dark matter is made of, its existence is well established because of the mass that must exist in galaxies for their stars to orbit the galactic centers the way they do. Current theories of galactic evolution, in fact, assume that dark-matter wells acted as a sort of "seed" for today's galaxies, with the dark matter pulling in smaller groups of stars as they passed nearby. What's more, galaxies like Andromeda and the Milky Way have each probably gobbled up about 200 smaller galaxies and protogalactic fragments over the last 12 billion years.

Chapman and his colleagues arrived at the conclusion about the metal-poor Andromeda halo by obtaining careful measurements of the speed at which individual stars are coming directly toward or moving directly away from Earth. This measure is called the radial velocity, and can be determined very accurately with the spectrographs of major instruments such as the 10-meter Keck-II telescope, which was used in the study.
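Radial velocity follows from the Doppler shift of known spectral lines in a star's spectrum. A minimal sketch of the non-relativistic calculation follows; the line choice (Ca II K) and the shifted wavelength are illustrative values, not measurements from the study.

```python
C = 299_792.458  # speed of light in km/s

def radial_velocity(lambda_obs, lambda_rest):
    """Non-relativistic Doppler shift: v = c * (obs - rest) / rest.
    Positive result = receding (redshifted), negative = approaching."""
    return C * (lambda_obs - lambda_rest) / lambda_rest

# Illustrative example: the Ca II K line (rest wavelength 393.37 nm)
# observed at 393.0 nm, i.e. slightly blueshifted
v = radial_velocity(393.0, 393.37)
print(round(v, 1))  # about -282 km/s, i.e. moving toward us
```

A blueshift of this size is of the same order as Andromeda's well-known approach speed toward the Milky Way, which is why spectrographs on large telescopes such as Keck-II can resolve the motions of individual halo stars.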

Of the approximately 10,000 Andromeda stars for which the researchers have obtained radial velocities, about 1,000 turned out to be stars in the giant stellar halo that extends outward by more than 500,000 light-years. These stars, because of their lack of metals, are thought to have formed quite early, at a time when the massive dark-matter halo had captured its first protogalactic fragments.

The stars that dominate closer to the center of the galaxy, by contrast, are those that formed and merged later, and contain heavier elements due to stellar evolution processes.

In addition to being metal-poor, the stars of the halo follow random orbits and are not in rotation. By contrast, the stars of Andromeda's visible disk are rotating at speeds upwards of 200 kilometers per second.

According to Ibata, the study could lead to new insights on the nature of dark matter. "This is the first time we've been able to obtain a panoramic view of the motions of stars in the halo of a galaxy," says Ibata. "These stars allow us to weigh the dark matter, and determine how it decreases with distance."

In addition to Chapman and Ibata, the other authors are Geraint Lewis of the University of Sydney; Annette Ferguson of the University of Edinburgh; Mike Irwin of the Institute of Astronomy in Cambridge, England; Alan McConnachie of the University of Victoria; and Nial Tanvir of the University of Hertfordshire.

Writer: 
Robert Tindol

Dust Found in Earth Sediment Traced to Breakup of the Asteroid Veritas 8.2 Million Years Ago

PASADENA, Calif.—In a new study that provides a novel way of looking at our solar system's past, a group of planetary scientists and geochemists announce that they have found evidence on Earth of an asteroid breakup or collision that occurred 8.2 million years ago.

Reporting in the January 19 issue of the journal Nature, scientists from the California Institute of Technology, the Southwest Research Institute (SwRI), and Charles University in the Czech Republic show that core samples from oceanic sediment are consistent with computer simulations of the breakup of a 100-mile-wide body in the asteroid belt between Mars and Jupiter. The larger fragments of this body still orbit within the asteroid belt, and their inferred parent has been known for years as the asteroid "Veritas."

Ken Farley of Caltech discovered a spike in a rare isotope known as helium 3 that began 8.2 million years ago and gradually decreased over the next 1.5 million years, suggesting that Earth was dusted with material from an extraterrestrial source.

"The helium 3 spike found in these sediments is the smoking gun that something quite dramatic happened to the interplanetary dust population 8.2 million years ago," says Farley, the Keck Foundation Professor of Geochemistry at Caltech and chair of the Division of Geological and Planetary Sciences. "It's one of the biggest dust events of the last 80 million years."

Interplanetary dust is composed of bits of rock, from a few to several hundred microns in diameter, produced by asteroid collisions or ejected from comets. Interplanetary dust migrates toward the sun, and en route some of it is captured by Earth's gravitational field and deposited on the surface.

At present, more than 20,000 tons of this material accumulates on Earth each year, but the accretion rate should fluctuate with the level of asteroid collisions and changes in the number of active comets. By looking at ancient sediments that include both interplanetary dust and ordinary terrestrial sediment, the researchers for the first time have been able to detect major dust-producing solar system events of the past.

Because interplanetary dust particles are so small and rare in sediment (significantly less than a part per million), they are difficult to detect using direct measurements. However, these particles are extremely rich in helium 3 in comparison with terrestrial materials. Over the last decade, Farley has measured helium 3 concentrations in sediments formed over the last 80 million years to create a record of the interplanetary dust flux.

To assure that the peak was not a fluke present at only one site on the seafloor, Farley studied two different localities: one in the Indian Ocean and one in the Atlantic. The event is recorded clearly at both sites.

To find the source of these particles, William F. Bottke and David Nesvorny of the SwRI Space Studies Department in Boulder, Colorado, along with David Vokrouhlicky of Charles University, studied clusters of asteroid orbits that are likely the consequence of ancient asteroidal collisions.

"While asteroids are constantly crashing into one another in the main asteroid belt," says Bottke, "only once in a great while does an extremely large one shatter."

The scientists identified one cluster of asteroid fragments whose size, age, and remarkably similar orbits made it a likely candidate for the Earth-dusting event. Tracking the orbits of the cluster backwards in time using computer models, they found that, 8.2 million years ago, all of its fragments shared the same orbital orientation in space. This event defines when the 100-mile-wide asteroid called Veritas was blown apart by impact and coincides with the spike in the interplanetary seafloor sediments Farley had found.

"The Veritas disruption was extraordinary," says Nesvorny. "It was the largest asteroid collision to take place in the last 100 million years."

As a final check, the SwRI-Czech team used computer simulations to follow the evolution of dust particles produced by the 100-mile-wide Veritas breakup event. Their work shows that the Veritas event could produce the spike in extraterrestrial dust raining on the Earth 8.2 million years ago as well as a gradual decline in the dust flux.

"The match between our model results and the helium 3 deposits is very compelling," Vokrouhlicky says. "It makes us wonder whether other helium 3 peaks in oceanic cores can also be traced back to asteroid breakups."

This research was funded by NASA's Planetary Geology & Geophysics program and received additional financial support from the Czech Republic's grant agency and the National Science Foundation's COBASE program. The Nature paper is titled "A late Miocene dust shower from the breakup of an asteroid in the main belt."

Writer: 
Robert Tindol

Astrophysical Device Will Sniff Out Terrorism

PASADENA, Calif.—Astrophysicists spend most of their time looking for objects in the sky, but 9/11 changed Ryan McLean's orientation.

Right after the terrorist attacks, the Caltech staff scientist began applying his knowledge about detectors that study galaxies to the design of new sensors for detecting radioactive materials near possible terrorist targets. A few months ago, the U.S. Department of Homeland Security awarded McLean the first phase of a $2.2 million contract to develop a radiation-detection module.

"Before 9/11, I had a safe feeling that life was great," says McLean, who came to Caltech in 1999 to work for Professor of Physics Christopher Martin, developing projects in which rockets were launched with instruments that, during their five minutes above the atmosphere, observed the dust and hot gases in the Milky Way. "But I have two young kids, and now I realize that things may not be so stable."

The first part of McLean's project is to create a specialized chip that turns a semiconducting crystal into a detector that can find a radiation source up to 100 meters away and tell whether it's harmful radiation from a dirty bomb, or harmless radiation from, say, a truckload of fertilizer. In the second phase, which could begin by the middle of 2006, he'll build a workable device.

The problem with current detectors is that they are often set off by essentially benign materials. They also tend to be large pieces of equipment located only at the nation's entry points, such as ports.

McLean wants to make detectors that will ignore natural radiation sources like fertilizer and that will also be small and mobile, so that security officers can take them anywhere and target any ship, truck, or building.

McLean, who has also contributed to a project at the Lawrence Livermore National Laboratory (LLNL) to build a radiation detector the size of a cell phone, plans to use a sensor made of cadmium zinc telluride, which has been used in telescopes to detect gamma rays and X-rays. The advantage of these crystals is that they work at room temperature, unlike other sensors that work only at very low temperatures.

To accomplish this, McLean teamed with the X-ray/gamma-ray group at Caltech's Space Radiation Laboratory (SRL), which is led by Professor of Physics Fiona Harrison. The SRL has been developing cadmium zinc telluride gamma-ray sensors, as well as custom, low-noise, low-power electronic chips for X-ray and gamma-ray instruments, for more than 10 years. While SRL's efforts have largely focused on developing these sensors for space missions, after 9/11 SRL teamed with LLNL to develop a chip for a handheld radiation monitor for Homeland Security.

Surprisingly, looking for radiation on the ground is not much different from searching for it in space. "What we are doing with Ryan is taking the best of what we developed for the previous Homeland Security device, and combining it with the best of what we developed for our space instruments," says senior SRL engineer Rick Cook. Everything SRL has learned about the pros and cons of the cadmium zinc telluride itself will also be key to making this project a success.

McLean says that he does not expect the project to put Caltech into the antiterrorism radiation-detection business. If his device shows promise, the technology could be licensed to a company that would manufacture a range of detection products at relatively low cost, making widespread use feasible.

"The idea is that if you could have lots of small detectors, you might have a better chance of detecting harmful nuclear material than if you're stationed only at central locations, like bridges and ports," he says.

Given government officials' warnings that it is only a matter of time before the next terrorist attack in the United States, McLean says that there is a lot of pressure to complete the work quickly. "It helps push the project along."

Contact: Mike Rogers (626) 395-6083 mike_rogers@caltech.edu

Writer: 
RT

Quasar Study Provides Insights into Composition of the Stars That Ended the "Dark Ages"

WASHINGTON, D.C.—A team of astronomers has uncovered new evidence about the stars whose formation ended the cosmic "Dark Ages" a few hundred million years after the Big Bang.

In a presentation today at the annual winter meeting of the American Astronomical Society (AAS), California Institute of Technology graduate student George Becker is scheduled to discuss his team's investigation of several faraway quasars and the gas between the quasars and Earth. The paper on which his lecture is based will be published in the Astrophysical Journal in March.

One quasar in the study seems to reveal numerous patches of "neutral" gas, made up of atoms in which the nucleus and electrons cling together, floating in space when the universe was only about 10 percent of its present age. This gas is thought to have existed in significant quantities only within a certain time frame in the early universe. Before the Dark Ages, all material would have been too hot for atomic nuclei to combine with their electrons; afterward, the light from newly formed stars would have reached the atoms and stripped off the electrons.

"There should have been a period when most of the atoms in the universe were neutral," Becker explains. "This would have continued until stars and galaxies began forming."

In other words, the universe went from a very hot, very dense state following the Big Bang, in which all atomic nuclei and electrons were too energetic to combine; to a less dense, cooler phase (albeit a dark one), in which the nuclei and electrons could hold onto each other and form neutral atoms; to today's universe, in which the great majority of atoms are ionized by energetic particles of light.

Wallace Sargent, who coined the term Dark Ages in 1985 and who is Becker's supervising professor, adds that analyzing the quasars to learn about the early universe is akin to looking at a lighthouse in order to study the air between you and it. During the Dark Ages, neutral atoms filling the universe would have acted like a fog, blocking out the light from distant objects. To end the Dark Ages, enough stars and galaxies needed to form to burn this "cosmic fog" away.

"We may have detected the last wisps of the fog," explains Sargent, who is Bowen Professor of Astronomy at Caltech.

What is unique about the new study is the finding that the chemical elements of the cool, un-ionized gas seem to have come from relatively ordinary stars. The researchers think this is so because the elements they detect in the gas (oxygen, carbon, and silicon) are in proportions that suggest the material came from Type II supernovae.

These particular explosions occur when massive stars collapse and then rebound in a gigantic explosion. The stars needed to create them can be more than ten times the mass of the sun, yet they have been common over almost the entire history of the universe.

However, astronomers believe that the very first stars in the universe would have been much more massive, up to hundreds of times the mass of the sun, and would have left behind a very different chemical signature.

"If the first stars in the universe were indeed very massive stars," Becker explains, "then their chemical signature was overwhelmed by smaller, more typical stars very soon after."

Becker and his colleagues believe they are seeing material from stars that was blown into space by supernova explosions and mixed with the pristine gas produced by the Big Bang. Specifically, they are looking at the spectra of quasar light as it is absorbed during its journey through this mixed gas.

The quasars in this particular study are from the Sloan Digital Sky Survey, an ongoing mapping project that seeks, in part, to determine the distances of 100,000 quasars. The researchers focused on nine of the most distant quasars known, with redshifts greater than 5, meaning that the light we see from these objects would have been emitted when the universe was at most 1.2 billion years old.

Of the nine, three are far enough away that they may have been at the edge of the dark period. Those three have redshifts greater than 6, meaning that the universe was less than 1 billion years old when they emitted the light we observe. By comparison, the present age of the universe is believed to be about 13.7 billion years.
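The ages quoted for these redshifts can be checked against the standard flat matter-plus-dark-energy cosmology, which has a closed-form age-redshift relation. The sketch below assumes round parameter values (H0 = 70 km/s/Mpc, matter density 0.3, dark-energy density 0.7), which are not taken from the study itself.

```python
import math

H0 = 70.0                      # Hubble constant, km/s/Mpc (assumed round value)
OMEGA_M, OMEGA_L = 0.3, 0.7    # matter and dark-energy densities (assumed)

def age_gyr(z):
    """Age of a flat matter + dark-energy universe at redshift z, in Gyr."""
    hubble_time = 977.8 / H0   # 1/H0 in Gyr (977.8 converts from km/s/Mpc)
    x = math.sqrt(OMEGA_L / OMEGA_M) * (1 + z) ** -1.5
    return (2.0 / (3.0 * math.sqrt(OMEGA_L))) * hubble_time * math.asinh(x)

print(round(age_gyr(0), 1))  # ~13.5 Gyr today
print(round(age_gyr(5), 1))  # ~1.2 Gyr at redshift 5
print(round(age_gyr(6), 1))  # ~0.9 Gyr at redshift 6
```

These round-number results match the figures in the text: light from a redshift-5 quasar left when the universe was at most about 1.2 billion years old, and a redshift-6 quasar dates from before the 1-billion-year mark.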

Becker says that the study in part promises a new tool to investigate the nature of stars in the early universe. "Now that we've seen these systems, it's reasonable to ask if their composition reflects the output of those first very massive stars, or whether the mix of chemicals is what you would expect from more ordinary stars that ended in Type II supernovae.

"It turns out that the latter is the case," Becker says. "The chemical composition appears to be very ordinary."

Thus, the study provides a new window into possible transitions in the early universe, Sargent adds. "The relative abundance of these elements gives us in principle a way of finding out what the first stars were.

"This gives us insight into what kind of stars ended the Dark Ages."

Observations for this study were performed using the 10-meter (400-inch) Keck I Telescope on Mauna Kea, Hawaii. In addition to Becker and Sargent, the other authors are Michael Rauch of the Carnegie Observatories and Robert A. Simcoe of the MIT Center for Space Research.

This work was supported by the National Science Foundation.

Writer: 
Robert Tindol

Kuiper Belt Moons Are Starting to Seem Typical

WASHINGTON, D.C.—In the not-too-distant past, the planet Pluto was thought to be an odd bird in the outer reaches of the solar system because it has a moon, Charon, that was formed much like Earth's own moon was formed. But Pluto is getting a lot of company these days. Of the four largest objects in the Kuiper belt, three have one or more moons.

"We're now beginning to realize that Pluto is one of a small family of similar objects, nearly all of which have moons in orbit around them," says Antonin Bouchez, a California Institute of Technology astronomer.

Bouchez discussed his work on the Kuiper belt Tuesday, January 10, at the winter meeting of the American Astronomical Society (AAS).

Bouchez says that the puzzle for planetary scientists is that, of the hundreds of objects now known to inhabit the Kuiper belt beyond the orbit of Neptune, only about 11 percent possess their own satellites. Yet three of the four largest objects now known in the region have satellites, which suggests that different processes are at work for the large and small bodies.
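The tension Bouchez describes can be made concrete with a back-of-the-envelope binomial calculation. This is illustrative only, and assumes each large body independently had the same 11 percent chance of having a moon as the smaller objects:

```python
from math import comb

def binom_at_least(n, k, p):
    """Probability of k or more successes in n independent trials,
    each succeeding with probability p."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Chance that at least 3 of the 4 largest Kuiper belt objects would
# have moons if each had only the 11 percent chance seen among the
# smaller bodies.
p_three_of_four = binom_at_least(4, 3, 0.11)
```

The result is about 0.5 percent, which is why the moons of the largest bodies point to a different formation process rather than chance capture.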

Experts have been fairly confident for a decade or more that Pluto's moon Charon was formed as the result of an impact, but the planet seemed unique in this respect. According to computer models, Pluto was hit by an object roughly one-half its own size, vaporizing some of the planet's material. A large piece, however, was cleaved off nearly intact, forming Pluto's moon Charon.

Earth's moon is thought to have been formed in a similar way, though our moon most likely formed out of a hot disk of material left in orbit after such a violent impact.

Just in the last year, astronomers have discovered two additional moons for Pluto, but the consensus is still that the huge Charon was formed by a glancing blow with another body, and that all three known satellites, as well as anything else not yet spotted from Earth, were built up from the debris.

As for the other Kuiper belt objects, experts at first thought that the bodies acquired their moons only occasionally by snagging them through gravitational capture. For the smaller bodies, the 11 percent figure would be about right.

But the bigger bodies are another story. The biggest of all, still awaiting designation as the tenth planet, is currently nicknamed "Xena." Discovered by Caltech's Professor of Planetary Science Mike Brown and his associates, Chad Trujillo of the Gemini Observatory and David Rabinowitz of Yale University, Xena is 25 percent larger than Pluto and is known to have at least one moon.

The second-largest Kuiper belt object is Pluto, which has three moons and counting. The third-largest is nicknamed "Santa" because of the time of its discovery by the Mike Brown team, and is known to have two moons.

"Santa is an odd one," says Bouchez. "You normally would expect moons to form in the same plane because they would have accreted from a disk of material in orbit around the main body.

"But Santa's moons are 40 degrees apart. We can't explain it yet."

The fourth-largest Kuiper belt object is nicknamed "Easterbunny," again because of the time the Brown team discovered it, and is not yet known to have a moon. But in April, Bouchez and Brown will again be looking at Easterbunny with the adaptive-optics rig on one of the 10-meter Keck telescopes, and a moon might very well turn up.

The search for new planets and other bodies in the Kuiper belt is funded by NASA. For more information on the program, see the Samuel Oschin Telescope's website at http://www.astro.caltech.edu/palomarnew/sot.html

For more information on Mike Brown's research, see http://www.gps.caltech.edu/~mbrown

For more information on the Keck laser-guide-star adaptive optics system, see http://www2.keck.hawaii.edu/optics/lgsao/


Writer: 
Robert Tindol


Experimental Economists Find Brain Regions That Govern Fear of the Economic Unknown

PASADENA, Calif.—Do you have second thoughts when ordering a strange-sounding dish at an exotic restaurant? Afraid you'll get fricasseed eye of newt, or something even worse? If you do, it's because certain neurons in the brain are saying that the potential reward for the risk is unknown. These regions of the brain have now been pinpointed by experimental economists at the California Institute of Technology and the University of Iowa College of Medicine.

In the December 9 issue of the journal Science, Caltech's Axline Professor of Business Economics Colin Camerer and his colleagues report on a series of experiments involving Caltech student volunteers and patients with specific types of brain damage at the University of Iowa. The object of the experiments was to see how the brain responded to degrees of economic uncertainty by having the test subjects make wagers while being scanned by a functional magnetic resonance imager (fMRI).

The results show that there is a definite difference in the brain when the wagers add a degree of ambiguity to the risk. In cases where the game involves a simple wager in which the chance of getting a payoff is very clearly known, the dorsal striatum tends to light up. But in a nearly identical game in which the chances of winning are unknown, the more emotional parts of the brain known as the amygdala and orbitofrontal cortex (OFC) are involved.

According to Camerer, this is a clear advance in understanding the neural basis of economic decision making. Much is already known about how people deal with risk from the standpoint of the social sciences and behavioral ecology, but a better understanding of the brain structures involved provides new insight into how certain behaviors are connected.

"The amygdala has been hypothesized as a generalized vigilance module in the brain," he explains. "We know, for example, that anyone with damage to the amygdala cannot pick up certain facial cues that normally allow humans to know whether they should trust someone else."

Problems with the amygdala are also known to be associated with autism, a brain disorder that causes sufferers to have trouble recognizing emotions in other people's faces. One of the authors of the paper, Ralph Adolphs, the Bren Professor of Psychology and Neuroscience at Caltech, has done extensive work in this area.

As for the OFC, the structure is associated with the integration of emotional and cognitive input. Therefore the OFC and amygdala presumably work together when a person is confronted with a wager for which the odds are unknown: the amygdala sends a "caution" message, and the OFC processes the message.

The researchers set up the experiments so that the "risk" games and "ambiguity" games looked similar, to control for activity in the visual system so they could focus only on differences in decision making. In the "risk" games, each test subject could either take a sure amount, such as $3, or else draw a card that could be either red or blue. If the card was red, the test subject got $10; if it was blue, the subject got nothing for that particular card.

In the risk games, each test subject was informed that the chance of drawing a red card was 50 percent: that is, there would be 10 cards of each color out of a total of 20. Subjects made a series of 24 choices, with different sums of money at risk and different numbers of cards. In the ambiguity games, however, each test subject was told that the deck contained 20 cards but was told nothing about how many were red and how many were blue.

Based on past experiments in which this type of behavior had been observed, the researchers predicted that the Caltech subjects with no brain damage would be more likely to draw cards in the risk game than in the ambiguity game, because people dislike betting when they do not know the odds. In the ambiguity game, these subjects were more likely to take the sure amounts, which meant that their fear cost them money in expected-value terms.
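The expected-value point can be checked with the dollar amounts given above (a sure $3 versus a $10 payoff on a red card). The quick calculation below is illustrative, not from the paper, and treats all 21 possible deck compositions in the ambiguity game as equally likely:

```python
def expected_value(p_win, payoff):
    """Expected payoff of a gamble paying `payoff` with probability `p_win`."""
    return p_win * payoff

# Risk game: the deck is known to hold 10 red and 10 blue cards.
ev_risk = expected_value(0.5, 10.0)

# Ambiguity game: the red count is unknown. Averaging over all 21
# possible compositions (0 to 20 red cards) gives the same answer.
ev_ambiguous = sum(expected_value(r / 20, 10.0) for r in range(21)) / 21

sure_amount = 3.0
forgone = ev_risk - sure_amount   # value left on the table by playing safe
```

Under a symmetric prior, the ambiguous bet is worth exactly as much as the risky one ($5 versus a sure $3), so avoiding it, as most undamaged subjects did, sacrifices $2 of expected value per choice.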

The patients at the University of Iowa College of Medicine, on the other hand, who had lesions to the OFC, played the games entirely differently: on average, they were much more tolerant of both risk and ambiguity.

Camerer says that the result with the brain-damaged test subjects fits well with the observation that many such patients have suffered in their personal lives as a result of reckless financial decisions.

The research also addressed how the intensity of the response in the brain correlates with the degree of risk. The Caltech students showed more intense activity in the amygdala and OFC when the chance of winning was ambiguous; patients with damage to those areas showed no such difference.

In sum, the results provide an important neurological understanding of how we humans handle risk in the real world, Camerer says.

"If you think about it, how often do you know the probability of success? Probably, the situation we modeled with the risk game is more the exception than the rule," he says. "In most situations, I think you are confronted with a risky choice in which you have little idea of the chances of different payoffs."

Does the study have any applications for society? Camerer says that our knowing what is happening at the most microscopic level in the neurons of the brain could lead to better understanding of bigger social effects. For example, a fear of the economic unknown will also create a strong preference for the familiar. In every country in the world, investors hold too many stocks they are familiar with, from their own countries, and do not diversify their stock holdings enough by buying ambiguous foreign stocks. The opposite of fear of the economic unknown may be driving entrepreneurs, who often thrive under uncertainty.

"It could be that aversion to ambiguity is like a primitive freezing response that we've had for millions of years," Camerer says. "In this case, it would be an economic freezing response."

The study is titled "Neural Systems Responding to Degrees of Uncertainty in Human Decision Making."

In addition to Camerer and Adolphs, the other authors are Ming Hsu and Meghana Bhatt, both graduate students in economics at Caltech; and Daniel Tranel of the University of Iowa College of Medicine.

Writer: 
Robert Tindol

Physicists Achieve Quantum Entanglement Between Remote Ensembles of Atoms

PASADENA, Calif.—Physicists have managed to "entangle" the physical state of a group of atoms with that of another group of atoms across the room. This research represents an important advance relevant to the foundations of quantum mechanics and to quantum information science, including the possibility of scalable quantum networks (i.e., a quantum Internet) in the future.

Reporting in the December 8 issue of the journal Nature, California Institute of Technology physicist H. Jeff Kimble and his colleagues announce the first realization of entanglement for one "spin excitation" stored jointly between two samples of atoms. In the Caltech experiment, the atomic ensembles are located in a pair of apparatuses 2.8 meters apart, with each ensemble composed of about 100,000 individual atoms.

The entanglement generated by the Caltech researchers consisted of a quantum state for which, when one quantum spin (i.e., one quantum bit) flipped for the atoms at the site L of one ensemble, invariably none flipped at the site R of the other ensemble, and when one spin flipped at R, invariably none flipped at L. Yet, remarkably, because of the entanglement, both possibilities existed simultaneously.

According to Kimble, who is the Valentine Professor and professor of physics at Caltech, this research significantly extends laboratory capabilities for entanglement generation, with entangled "quantum bits" of matter now stored at a separation several thousand times greater than was heretofore possible.

Moreover, the experiment provides the first example of an entangled state stored in a quantum memory that can be transferred from the memory to another physical system (in this case, from matter to light). Since the work of Schrödinger and Einstein in the 1930s, entanglement has remained one of the most profound aspects and persistent mysteries of quantum theory. Entanglement leads to strong correlations between the various components of a physical system, even if those components are very far apart. Such correlations cannot be explained by classical physics and have been the subject of active experimental investigation for more than 40 years, including pioneering demonstrations that used entangled states of photons, carried out by John Clauser (son of Caltech's Millikan Professor of Engineering, Emeritus, Francis Clauser).

In more recent times, entangled quantum states have emerged as a critical resource for enabling tasks in information science that are otherwise impossible in the classical realm of conventional information processing and distribution. Some tasks in quantum information science (for instance, the implementation of scalable quantum networks) require that entangled states be stored in massive particles, which was first accomplished for trapped ions separated by a few hundred micrometers in experiments at the National Institute of Standards and Technology in Boulder, Colorado, in 1998.

In the Caltech experiment, the entanglement involves "collective atomic spin excitations." To generate such excitations, an ensemble of cold atoms initially all in level "a" of two possible ground levels is addressed with a suitable "writing" laser pulse. For weak excitation with the write laser, one atom in the sample is sometimes transferred to ground level "b," thereby emitting a photon.

Because of the impossibility of determining which particular atom emitted the photon, detection of this first write photon projects the ensemble of atoms into a state with a single collective spin excitation distributed over all the atoms. The presence (one atom in state b) or absence (all atoms in state a) of this symmetrized spin excitation behaves as a single quantum bit.
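In the standard notation of the field (not spelled out in the release), the collective state produced by detecting one write photon from an ensemble of N atoms is conventionally written as a symmetrized superposition:

```latex
|1\rangle_{\mathrm{ens}} \;=\; \frac{1}{\sqrt{N}} \sum_{j=1}^{N}
  e^{i\phi_j}\, |a\rangle_1 \cdots |b\rangle_j \cdots |a\rangle_N
```

Each term has exactly one atom, the j-th, transferred to level b while all others remain in level a; the phases depend on the geometry of the write beam and detector. Because every atom contributes, the excitation is shared by the whole ensemble rather than residing in any single atom.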

To generate entanglement between spatially separated ensembles at sites L and R, the write fields emitted at both locations are combined together in a fashion that erases any information about their origin. Under this condition, if a photon is detected, it is impossible in principle to determine from which ensemble, L or R, it came, so that both possibilities must be included in the subsequent description of the quantum state of the ensembles.

The resulting quantum state is an entangled state with "1" stored in the L ensemble and "0" in the R ensemble, and vice versa. That is, there exist simultaneously the complementary possibilities for one spin excitation to be present in level b at site L ("1") and all atoms in the ground level a at site R ("0"), as well as for no spin excitations to be present in level b at site L ("0") and one excitation to be present at site R ("1").
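Written out, again in conventional notation rather than the release's own, such a two-ensemble entangled state takes the form

```latex
|\Psi\rangle_{LR} \;=\; \frac{1}{\sqrt{2}}
  \left( |1\rangle_L |0\rangle_R \;+\; e^{i\eta}\, |0\rangle_L |1\rangle_R \right)
```

where |1⟩ denotes one collective spin excitation in an ensemble, |0⟩ denotes none, and the relative phase is set by the optical paths and the detection event. The two terms are the two simultaneous possibilities described above.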

This entangled state can be stored in the atoms for a programmable time, and then transferred into propagating light fields, which had not been possible before now. The Caltech researchers devised a method to determine unambiguously the presence of entanglement for the propagating light fields, and hence for the atomic ensembles.

The Caltech experiment confirms for the first time experimentally that entanglement between two independent, remote, massive quantum objects can be created by quantum interference in the detection of a photon emitted by one of the objects.

In addition to Kimble, the other authors are Chin-Wen Chou, a graduate student in physics; Hugues de Riedmatten, Daniel Felinto, and Sergey Polyakov, all postdoctoral scholars in Kimble's group; and Steven J. van Enk of Bell Labs, Lucent Technologies.

Writer: 
Robert Tindol