Researchers Determine How Plants Decide Where to Position Their Leaves and Flowers

PASADENA, Calif.—One of the quests of modern biologists is to understand how cells talk to each other in order to determine where to form major organs. An international team of biologists has solved a part of this puzzle by combining state-of-the-art imaging and mathematical modeling to reveal how plants go about positioning their leaves and flowers.

In the January 31 issue of the Proceedings of the National Academy of Sciences (PNAS), researchers from the California Institute of Technology, the University of California at Irvine, and Lund University in Sweden reported their success in determining how a plant hormone known as auxin affects plant organ positioning. Experts already knew that auxin played some role in the development of plant organs, but the new study employs imaging techniques and computer modeling to propose a new theory about how the mechanism works.

The research involves the growing tip of the shoot of the plant Arabidopsis thaliana, a relative of the mustard plant that has been studied intensely by modern biologists. With its simple and very well understood genome, Arabidopsis lends itself to a wide variety of experiments.

The researchers' achievement is their demonstration of how plant cells, using purely local information about the internal auxin concentrations of their nearest neighbors, can communicate to determine the positions of new flowers or leaves, which form in a regular pattern with many cells separating the newly formed primordia (the first traces of an organ or structure). The authors theorize that the template the plant uses to lay out these larger parts comes from two mechanisms: polarized transport of auxin that creates a feedback loop, and a dynamic geometry arising from the growth and division of cells.
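The flavor of the first mechanism can be conveyed with a toy simulation. The sketch below is a deliberately minimal illustration, not the published model, and every parameter value is an assumption chosen for demonstration: cells on a ring export auxin preferentially toward whichever neighbor already contains more of it, and from a nearly uniform start, concentration peaks separated by several cells emerge, echoing the regular spacing of primordia.

```python
# Toy "up-the-gradient" auxin-transport model on a ring of cells (an
# illustrative sketch only, NOT the published model; all parameters are
# made up for demonstration). Each cell exports auxin preferentially toward
# the neighbor that already holds more auxin; with production and decay,
# concentration maxima separated by several cells emerge.
import numpy as np

n_cells = 60
rng = np.random.default_rng(0)
auxin = 1.0 + 0.01 * rng.standard_normal(n_cells)   # near-uniform start

production, decay, transport, dt = 0.1, 0.1, 1.0, 0.05

for _ in range(4000):
    left = np.roll(auxin, 1)      # auxin level of each cell's left neighbor
    right = np.roll(auxin, -1)    # auxin level of each cell's right neighbor
    # Export is split between the two neighbors in proportion to how much
    # auxin each neighbor already has (the positive feedback).
    out_left = transport * auxin * left / (left + right)
    out_right = transport * auxin * right / (left + right)
    inflow = np.roll(out_right, 1) + np.roll(out_left, -1)
    auxin += dt * (production - decay * auxin
                   - (out_left + out_right) + inflow)

peaks = [i for i in range(n_cells)
         if auxin[i] > auxin[i - 1]
         and auxin[i] > auxin[(i + 1) % n_cells]
         and auxin[i] > auxin.mean()]
print("auxin maxima at cells:", peaks)   # regularly spaced peaks
```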

To capture the development, Elliot Meyerowitz, Beadle Professor of Biology and chair of the Division of Biology at Caltech, and his team used green fluorescent proteins to mark specific cell types in the plant's meristem, the tissue in which regulated cell division, pattern formation, and differentiation give rise to plant parts such as leaves and flowers.

The marked proteins allowed the group to image cell lineages through meristem development and differentiation, leading to the specific arrangement of leaves and to reproductive growth, and also to follow changes in the concentration and movement of auxin.

Although the study applies specifically to the Arabidopsis plant, Meyerowitz says the mechanism is probably similar for other plants and even other biological systems in which patterning occurs in the course of development.

In addition to Meyerowitz, the paper's authors are Henrik Jönsson of Lund University, Marcus G. Heisler of Caltech's Division of Biology, Bruce E. Shapiro of Caltech's Biological Network Modeling Center, and Eric Mjolsness of UC Irvine's Institute of Genomics and Bioinformatics and department of computer science.

 

Writer: Robert Tindol

Fault That Produced Largest Aftershock Ever Recorded Still Poses Threat to Sumatra

PASADENA, Calif.—A mere three months after the giant Sumatra-Andaman earthquake and tsunami of December 2004, tragedy struck again when another great earthquake shook the area just to the south, killing over 2,000 Indonesians. Although technically an aftershock of the 2004 event, the 8.7-magnitude Nias-Simeulue earthquake just over a year ago was itself one of the most powerful earthquakes ever recorded. Only six others have had greater magnitudes.

In the March 31 issue of the journal Science, a team of researchers led by Richard Briggs and Kerry Sieh of the California Institute of Technology reconstruct the fault rupture that caused the March 28, 2005, event from detailed measurements of ground displacements. Their analysis shows that the fault broke along a 400-kilometer length, and that the length of the break was limited by unstrained sections of the fault on either end.

The researchers continue to express concern that another section of the great fault, south of the 2005 rupture, is likely to cause a third great earthquake in the not-too-distant future. The surface deformation they observed in the 2005 rupture area may well be similar to what will occur when the section to the south ruptures.

Briggs, a postdoctoral scholar in Caltech's new Tectonics Observatory, and his colleagues determined the vertical displacements of the Sumatran islands that are directly over the deeply buried fault whose rupture generated the 2005 earthquake. The main technique they used was the examination of coral heads growing near the shore. The tops of these heads stay just at the waterline, so if they move higher or lower, it indicates that there has been uplift or subsidence.

The researchers also obtained data on ground displacements from GPS stations that they had rushed into place after the 2004 earthquake. "We were fortunate to have installed the geodetic instruments right above the part that broke," says Kerry Sieh, who leads the Sumatran project of Caltech's Tectonics Observatory. "This is the closest we've ever gotten to such a large earthquake with continuously recording GPS instruments."

From the coral and GPS measurements, the researchers found that the 2005 earthquake was associated with uplift of up to three meters over a 400-kilometer stretch of the Sunda megathrust, the giant fault where Southeast Asia is overriding the Indian and Australian plates. This stretch lies to the south of the 1600-kilometer section of the fault that ruptured in 2004.

Actual slippage on the megathrust surface (about 25 kilometers below the islands) was over 11 meters. The data permitted calculation of the earthquake's magnitude at 8.6, nearly the same as estimates based on seismological recordings.
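The step from rupture dimensions and slip to a magnitude uses the standard definition of seismic moment (rigidity times rupture area times average slip). A back-of-the-envelope version is sketched below; the rigidity, rupture width, and average slip are illustrative round numbers, not values taken from the Science paper.

```python
# Rough moment-magnitude estimate from rupture area and average slip.
# Rigidity, width, and average slip below are assumed round numbers for
# illustration; only the 400-km length comes from the reported rupture.
import math

rigidity = 3.0e10      # Pa, typical crustal rigidity (assumed)
length = 400e3         # m, reported rupture length of the 2005 event
width = 125e3          # m, assumed down-dip width of the rupture
avg_slip = 8.0         # m, assumed average slip (peak slip exceeded 11 m)

seismic_moment = rigidity * length * width * avg_slip        # N*m
magnitude = (2.0 / 3.0) * (math.log10(seismic_moment) - 9.1)
print(f"Mw ~ {magnitude:.1f}")   # comes out near 8.6-8.7
```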

Most of the deaths in the 2005 earthquake were the direct result of shaking and the collapse of buildings. The earthquake did not trigger a disastrous tsunami comparable to the one that followed the 2004 event. In part, this was because the 2005 rupture was smaller: about one-quarter the length and one-half the slip.

In addition, the largest uplift lay under offshore islands, where there was no water to be displaced. Finally, by rising during the earthquake, the islands gained some instant, extra protection for when the tsunami reached them tens of minutes later.

The scientists were surprised to find that the southern end of the 2004 rupture and the northern end of the 2005 rupture did not quite abut each other, but were separated by a short segment under the island of Simeulue on which the amount of slip was nearly zero. They infer that this segment had not accumulated enough strain to rupture during either event. Perhaps, they speculate, it slips frequently and therefore relieves strain without generating large earthquakes.

Thus, this segment might act as a barrier to rupture propagation. A similar 170-kilometer "creeping" section of the San Andreas fault, between San Francisco and Los Angeles, separates the long section that produced Northern California's great 1906 earthquake from the long section that ruptured during Southern California's great 1857 earthquake.

The southern end of the 2005 rupture was at another short "creeping" segment or weak patch. "Both ends of the 2005 rupture seem to have been at the edges of a weak patch," Sieh explains. The 2005 event therefore probably represents a "characteristic earthquake" that has recurred often over geological time. In fact, old historical records suggest that a very similar earthquake was caused by a rupture of this segment in 1861.

Sieh suggests that installing GPS instruments along the world's other subduction megathrusts could help define more clearly which sections creep stably and which are locked and thus more likely to break in infrequent, but potentially devastating, ruptures.

Previous work by the Caltech group and their Indonesian colleagues has shown that south of the southern creeping segment lies another locked segment, about 600 kilometers long, which has not broken since a magnitude 9.0 earthquake in 1833. Corals and coastlines along the southern segment record decades of continual, pronounced subsidence, similar to the behavior of the northern region prior to its abrupt uplift during the 2005 fault rupture.

"This southern part is very likely about ready to go again," Sieh says. "It could devastate the coastal communities of southwestern Sumatra, including the cities of Padang and Bengkulu, with a combined population of well over a million people. It could happen tomorrow, or it could happen 30 years from now, but I'd be surprised if it were delayed much beyond that."

Sieh and his colleagues are engaged in efforts to increase public awareness and preparedness for future great earthquakes and tsunamis in Sumatra.

The Science paper is titled "Deformation and slip along the Sunda megathrust in the great 2005 Nias-Simeulue earthquake." The other authors are Aron Meltzner, John Galetzka, Ya-ju Hsu, Mark Simons, and Jean-Philippe Avouac, all at Caltech's Tectonics Observatory; Danny Natawidjaja, Bambang Suwargadi, Nugroho Hananto, and Dudi Prayudi, all at the Indonesian Institute of Sciences; Imam Suprihanto of Jakarta; and Linette Prawirodirdjo and Yehuda Bock at the Scripps Institution of Oceanography.

The research was funded by the Gordon and Betty Moore Foundation, the National Science Foundation, and NASA.

Writer: Robert Tindol

Neuroscientists Discover the Neurons That Act As Novelty Detectors in the Human Brain

PASADENA, Calif.—By studying epileptic patients awaiting brain surgery, neuroscientists for the first time have located single neurons that are involved in recognizing whether a stimulus is new or old. The discovery demonstrates that the human brain not only has neurons for processing new information never seen before, but also neurons to recognize old information that has been seen just once.

In the March 16 issue of the journal Neuron, researchers from the California Institute of Technology, the Howard Hughes Medical Institute, and the Huntington Memorial Hospital report their success in distinguishing single-trial learning events from novel stimuli in six patients awaiting surgery for drug-resistant epileptic seizures. As part of the preparation for surgery, the patients have had electrodes implanted in their medial temporal lobes. Inserting small additional wires inside the clinical electrodes provides a way for researchers to observe the firing of individual human brain cells.

According to lead author Ueli Rutishauser, a graduate student in the computation and neural systems program at Caltech, the neurons are located in the hippocampus and the amygdala, two limbic structures lying deep in the brain. Both regions are known to be important for learning and memory, but until now neuroscientists had never been able to establish the role of individual brain cells in single-trial learning.

"This is an unprecedented look at single-trial learning," explains Rutishauser, who works in the lab of Erin Schuman, a Caltech professor of biology and senior author of the paper. "It shows that single-trial learning is observable at the single-cell level. We've suspected it for a long time, but it has proven difficult to conduct these experiments with laboratory animals because you can't ask the animal whether it has seen something only once—500 times, yes, but not once."

With the patients volunteering to do perceptual studies while their brain activity is being recorded, however, such experiments are entirely possible. For the study, the researchers showed the six volunteers 12 different visual images, each presented once and randomly in one of four quadrants on a computer screen. Each subject was instructed to remember both the identity and position of the image or images presented.

After a 30-minute or 24-hour delay, each subject was shown previously viewed images or new images presented at the center of the screen, and asked whether each image was new or old. For each image identified as familiar, the subject was also asked to identify the quadrant in which the stimulus was originally presented.

The six subjects correctly recognized nearly 90 percent of the images they had already seen, but were less able to correctly recall the quadrant location in which the images had originally appeared. The researchers identified individual neurons that increased their firing rate either for novel stimuli or for familiar stimuli, but not both. These neurons thus responded differently to the same stimulus, depending on whether it was seen the first or the second time.

The fact that certain individual neurons fire only upon recognition of something seen before demonstrates that there is a "familiarity detector" neuron, which explains why a person can have the feeling of having seen a face sometime in the past. Further, these neurons continue to fire and signal the familiarity of a stimulus even when the subject mistakenly reports that the stimulus is new.

This type of neuron can account for subconscious recollections. "Even if the patients think they haven't seen the stimulus, their neurons still indicate that they have," Rutishauser says.

The third author of the paper is Adam Mamelak, who is a neurosurgeon at the Huntington Memorial Hospital and the Maxine Dunitz Neurosurgical Institute at Cedars-Sinai Medical Center.

Schuman is professor of biology and executive officer for neurobiology at Caltech and an investigator with the Howard Hughes Medical Institute.

 

Writer: Robert Tindol

Astronomers Discover a River of Stars Streaming Across the Northern Sky

PASADENA, Calif.—Astronomers have discovered a narrow stream of stars extending at least 45 degrees across the northern sky. The stream is about 76,000 light-years distant from Earth and forms a giant arc over the disk of the Milky Way galaxy.

In the March issue of the Astrophysical Journal Letters, Carl Grillmair, an associate research scientist at the California Institute of Technology's Spitzer Science Center, and Roberta Johnson, a graduate student at California State University Long Beach, report on the discovery.

"We were blown away by just how long this thing is," says Grillmair. "As one end of the stream clears the horizon this evening, the other will already be halfway up the sky."

The stream begins just south of the bowl of the Big Dipper and continues in an almost straight line to a point about 12 degrees east of the bright star Arcturus in the constellation Bootes. The stream emanates from a cluster of about 50,000 stars known as NGC 5466.

The newly discovered stream extends both ahead and behind NGC 5466 in its orbit around the galaxy. This is due to a process called tidal stripping, which results when the force of the Milky Way's gravity is markedly different from one side of the cluster to the other. This tends to stretch the cluster, which is normally almost spherical, along a line pointing towards the galactic center.

At some point, particularly when its orbit takes it close to the galactic center, the cluster can no longer hang onto its most outlying stars, and these stars drift off into orbits of their own. The lost stars that find themselves between the cluster and the galactic center begin to move slowly ahead of the cluster in its orbit, while the stars that drift outwards, away from the galactic center, fall slowly behind.

Ocean tides are caused by exactly the same phenomenon, though in this case it's the difference in the moon's gravity from one side of Earth to the other that stretches the oceans. If the gravity at the surface of Earth were very much weaker, then the oceans would be pulled from the planet, just like the stars in NGC 5466's stream.
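For readers who want the quantitative version of the argument, the standard textbook estimates (not figures from the paper) are:

```latex
% Differential ("tidal") acceleration across a cluster of radius r orbiting
% at distance d from a galaxy of enclosed mass M_gal (point-mass estimate):
a_{\mathrm{tidal}} \approx \frac{2\,G\,M_{\mathrm{gal}}\,r}{d^{3}}

% Stars outside the tidal (Jacobi) radius of a cluster of mass m can no
% longer be held by the cluster's own gravity and drift into the stream:
r_{J} \approx d\left(\frac{m}{3\,M_{\mathrm{gal}}}\right)^{1/3}
```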

Despite its size, the stream has never previously been seen because it is so completely overwhelmed by the vast sea of foreground stars that make up the disk of the Milky Way. Grillmair and Johnson found the stream by examining the colors and brightnesses of more than nine million stars in the Sloan Digital Sky Survey public database.

"It turns out that, because they were all born at the same time and are situated at roughly the same distance, the stars in globular clusters have a fairly unique signature when you look at how their colors and brightnesses are distributed," says Grillmair.

Using a technique called matched filtering, Grillmair and Johnson assigned to each star a probability that it might once have belonged to NGC 5466. By looking at the distribution of these probabilities across the sky, "the stream just sort of reached out and smacked us.

"The new stream may be even longer than we know, as we are limited at the southern end by the extent of the currently available data," he adds. "Larger surveys in the future should be able to extend the known length of the stream substantially, possibly even right around the whole sky."

The stars that make up the stream are much too faint to be seen by the unaided human eye. Owing to the vast distances involved, they are about three million times fainter than even the faintest stars that we can see on a clear night.

Grillmair says that such discoveries are important for our understanding of what makes up the Milky Way galaxy. Like earthbound rivers, such tidal streams can tell us which way is "down," how steep the slope is, and where the mountains and valleys are located.

By measuring the positions and velocities of the stars in these streams, astronomers hope to determine how much "dark matter" the Milky Way contains, and whether the dark matter is distributed smoothly, or in enormous orbiting chunks.

Writer: Robert Tindol

Caltech Scientists Discover the Part of the Brain That Causes Some People to Be Lousy in Math

PASADENA, Calif.—Most everyone knows that the term "dyslexia" refers to people who can't keep words and letters straight. A rarer term is "dyscalculia," which describes someone who is virtually unable to deal with numbers, much less do complicated math.

Scientists now have discovered the area of the brain linked to dyscalculia, demonstrating that there is a specific part of the brain essential for counting properly. In a report published in the March 13 issue of the Proceedings of the National Academy of Sciences (PNAS), researchers explain that the area of the brain known as the intraparietal sulcus (IPS), located toward the top and back of the brain in both hemispheres, is crucial for the proper processing of numerical information.

According to Fulvia Castelli, a postdoctoral researcher at the California Institute of Technology and lead author of the paper, the IPS has been known for years as the brain area that allows humans to conceive of numbers. But she and her coauthors from University College London demonstrate that the IPS specifically determines how many things are perceived, as opposed to how much.

To explain how intimately the two different modes of thinking are connected, Castelli says to think about what happens when a person is approaching the checkout lines at the local Trader Joe's. Most of us are impatient sorts, so we typically head for the shortest line.

"Imagine how you really pick the shortest checkout line," says Castelli. "You could count the number of shoppers in each line, in which case you'd be thinking discretely in terms of numerosity.

"But if you're a hurried shopper, you probably take a quick glance at each line and pick the one that seems the shortest. In this case you're thinking in terms of continuous quantity."

The two modes of thinking are so similar, in fact, that scientists have had trouble isolating specific networks within the IPS because it is very difficult to distinguish between responses of how many and how much. To get at the difference between the two forms of quantity processing, Castelli and her colleagues devised a test in which subjects performed quick estimations of quantity while under functional MRI scans.

Specifically, the researchers showed subjects a series of blue and green flashes of light or a chessboard with blue and green rectangles. The subjects were asked to judge whether they saw more green or more blue, and their brain activity was monitored while they did so.

The results show that while subjects are exposed to the separate colors, the brain automatically counts how many objects are present. However, when subjects are presented with either a continuous blue and green light or a blurred chessboard on which the single squares are no longer visible, the brain does not count the objects, but instead estimates how much blue and green is visible.

"We think this identifies the brain activity specific to estimating the number of things," Castelli says. "This is probably also a brain network that underlies arithmetic, and when it's abnormal, may be responsible for dyscalculia."

In other words, dyscalculia arises because a person cannot develop adequate representations of how many things there are.

"Of course, dyscalculics can learn to count," Castelli explains. "But where most people can immediately tell that nine is bigger than seven, anyone with dcyscalculia may have to count the objects to be sure.

"Similarly, dyscalculics are much slower than people in general when they have to say how many objects there are in a set," she adds. "This affects everyday life, from the time when a child is struggling to keep up with arithmetic lessons in school to the time when an adult is trying to deal with money."

The good news is that the work of Castelli and her colleagues could lead to better tools for assessing whether a learning technique for people with dyscalculia is actually working. "Now that we have identified the brain system that carries out this function, we are in a position to see how dyscalculic brain activities differ from a normal brain," Castelli says.

"We should be in a position to measure whether an intervention is changing the brain function so that it becomes more like the normal pattern."

The article is titled "Discrete and analogue quantity processing in the parietal lobe: A functional MRI study." Castelli's coauthors are Daniel E. Glaser and Brian Butterworth, both researchers at the Institute of Cognitive Neuroscience at University College London.

Writer: Robert Tindol

Caltech Scientist Creates New Method for Folding Strands of DNA to Make Microscopic Structures

PASADENA, Calif.—In a new development in nanotechnology, a researcher at the California Institute of Technology has devised a way of weaving DNA strands into any desired two-dimensional shape or figure, which he calls "DNA origami."

According to Paul Rothemund, a senior research fellow in computer science and computation and neural systems, the new technique could be an important tool in the creation of new nanodevices, that is, devices whose dimensions are a few billionths of a meter.

"The construction of custom DNA origami is so simple that the method should make it much easier for scientists from diverse fields to create and study the complex nanostructures they might want," Rothemund explains.

"A physicist, for example, might attach nano-sized semiconductor 'quantum dots' in a pattern that creates a quantum computer. A biologist might use DNA origami to take proteins which normally occur separately in nature, and organize them into a multi-enzyme factory that hands a chemical product from one enzyme machine to the next in the manner of an assembly line."

Reporting in the March 16 issue of Nature, Rothemund describes how long single strands of DNA can be folded back and forth, tracing a mazelike path, to form a scaffold that fills up the outline of any desired shape. To hold the scaffold in place, 200 or more DNA strands are designed to bind the scaffold and staple it together.

Each of the short DNA strands can act something like a pixel in a computer image, resulting in a shape that can bear a complex pattern, such as words or images. The resulting shapes and patterns are each about 100 nanometers in diameter, or about a thousand times smaller than the diameter of a human hair; the dots themselves are six nanometers in diameter.

While the folding of DNA into shapes that have nothing to do with the molecule's genetic information is not a new idea, Rothemund's efforts provide a general way to quickly and easily create any shape. In the last year, Rothemund has created half a dozen shapes, including a square, a triangle, a five-pointed star, and a smiley face, each one several times more complex than any previously constructed DNA object. "At this point, high-school students could use the design program to create whatever shape they desired," he says.
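The back-and-forth routing itself is easy to picture in code. The toy function below is purely conceptual (it ignores base-pairing, staple design, and everything else that makes real DNA origami work); it simply threads a single scaffold through the filled pixels of a target shape, row by row, the way the real scaffold snakes through the desired outline.

```python
# Conceptual sketch of the raster ("boustrophedon") scaffold path: visit every
# filled pixel of a target shape, reversing direction on alternate rows. Real
# DNA origami additionally requires ~200 staple strands and sequence design,
# which this toy deliberately omits.
def scaffold_path(shape_mask):
    """shape_mask: rows of 0/1 marking the desired 2-D shape.
    Returns the ordered (row, col) cells the scaffold visits."""
    path = []
    for r, row in enumerate(shape_mask):
        cols = range(len(row)) if r % 2 == 0 else range(len(row) - 1, -1, -1)
        path.extend((r, c) for c in cols if row[c])
    return path

# A tiny 5 x 5 "plus sign" as the target shape:
plus = [[0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [1, 1, 1, 1, 1],
        [0, 0, 1, 0, 0],
        [0, 0, 1, 0, 0]]
print(scaffold_path(plus))
```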

Once a shape has been created, adding a pattern to it is particularly easy, taking just a couple of hours for any desired pattern. As a demonstration, Rothemund has spelled out the letters "DNA," and has drawn a rough picture of a double helix, as well as a map of the western hemisphere in which one nanometer represents 200 kilometers.

Although Rothemund has hitherto worked on two-dimensional shapes and structures, he says that 3-D assemblies should be no problem. In fact, researchers at other institutions are already using his method to attempt the building of 3-D cages. One biomedical application that Rothemund says could come of this particular effort is the construction of cages that would sequester enzymes until they were ready for use in turning other proteins on or off.

The original idea for using DNA to create shapes and structures came from Nadrian Seeman of New York University. Another pioneer in the field is Caltech's Assistant Professor of Computer Science and Computation and Neural Systems Erik Winfree, in whose group Rothemund works.

"In this research, Paul has scored a few unusual `firsts' for humanity," Winfree says. "In a typical reaction, he can make about 50 billion 'smiley-faces.' I think this is the most concentrated happiness ever created.

"But the applications of this technology are likely to be less whimsical," Winfree adds. "For example, it can be used as a 'nanobreadboard' for attaching almost arbitrary nanometer-scale components. There are few other ways to obtain such precise control over the arrangement of components at this scale."

The title of the Nature paper is "Folding DNA to create nanoscale shapes and patterns."

Writer: Robert Tindol

Researchers Create New "Matchmaking Service" Computer System to Study Gene Interactions

PASADENA, Calif.—Biologists in recent years have identified every individual gene in the genomes of several organisms. While this has been quite an accomplishment in itself, the further goal of figuring out how these genes interact is truly daunting.

The difficulty lies in the fact that two genes can pair up in a gigantic number of ways. If an organism has a genome of 20,000 genes, for example, the total number of pairwise combinations is a staggering 200 million possible interactions.
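The arithmetic behind that figure is simply the number of unordered pairs that can be drawn from 20,000 genes:

```latex
\binom{20{,}000}{2} \;=\; \frac{20{,}000 \times 19{,}999}{2} \;=\; 199{,}990{,}000 \;\approx\; 2 \times 10^{8}
```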

Researchers can indeed perform experiments to see what happens when the two genes interact, but 200 million is an enormous number of experiments, says Weiwei Zhong, a postdoctoral scholar at the California Institute of Technology. "The question is whether we can prioritize which experiments we should do in order to save a lot of time."

To get at this issue, Zhong and her supervising professor, Paul Sternberg, have derived a method of database-mining to make predictions about genetic interactions. In the current issue of the journal Science, they report on a procedure for computationally integrating several sources of data from several organisms to study the tiny nematode worm C. elegans, an animal commonly used in biological experiments.

This is possible because various organisms have a large number of genes in common. Humans and nematodes, for example, are similar in 40 percent of their genes. Therefore, a genetic-interaction network provides a faster and better way of determining how certain genes interact. Such a network also provides information about whether anyone has ever done an experiment to determine the interaction of two particular genes in one of several species.

"This process works like a matchmaking service for the genes," says Zhong. "It provides you with candidate matches that most likely will be interacting genes, based upon a number of specified features."

The benefit, she adds, is that biologists do not need to do a huge number of random experiments to verify whether two genes indeed interact. Instead of having to test a gene against all 20,000 others in the genome, an experimenter might get by with 10 to 50 experiments.

"The beneft is that you can be through in a month instead of years," says Sternberg. "Also, you can do experiments that are careful and detailed, which may take a day, and still be finished in a month."

To build the computational system, the researchers constructed a "training set" for pairs of nematode gene interactions. The "positives" for genetic interactions were taken from 4,775 known pairwise interactions from nematodes.

By "training" the system, Zhong and Sternberg arrived at a way to rapidly arrive at predictions of whether two genes would interact or not.

According to Sternberg, who is the Morgan Professor of Biology at Caltech, the results show that the data-mining procedure works. Also, the results demonstrate that the federal money spent on sequencing genomes, and the comparatively modest expenditures that have gone toward the improvement of biological data processing, have been dollars well spent.

"This is one of a suite of tools and methods people are coming up with to get more bang for the buck," he says.

In particular, Sternberg and Zhong cite the ongoing WormBase project, now in its sixth year as a database funded by the National Institutes of Health for understanding gene interactions of nematodes. WormBase received $12 million in new funding in 2003, and the project is already leading to new database tools ultimately aimed at promoting knowledge of how genes interrelate.

The new study by Zhong and Sternberg is not directly a product of WormBase, but nevertheless mines data from that and other sources. In fact, the study compiles data from several model organisms to reconstruct a gene-interaction network for the nematode.

Zhong says that the system is not perfect yet, because "false negatives" can still arise if the information is simply not in the database, or if the computer fails to recognize two genes as orthologs (i.e., essentially the same gene in two different species). "But it will get better," she adds.

"Choosing how to combine these data is the big deal, not the computational ability of the hardware," says Sternberg. "You can also see how the computer made the call of whether two genes should interact. So it's not a black box, but all transparent; and to biologists, that's really valuable. And finally, it's in the public domain."

Finally, the system provides a good window into the manner in which the biology of the future is emerging, Sternberg says. Zhong, for example, has a doctorate in biology and a master's in computer science: she spends about as much time working on computer databases as she does in the lab with the organisms themselves.

"This is the new generation of biologists," Sternberg says.

The study is titled "Genome-wide Prediction of C. elegans Genetic Interactions," and is published in the March 10 issue of Science.

Writer: Robert Tindol

Caltech Scientists Gain Fundamental Insight into How Cells Protect Genetic Blueprints

PASADENA, Calif.—Molecular biologists have known for some time that there is a so-called checkpoint control mechanism that keeps our cells from dividing until they have copied all the DNA in their genetic code. Similar mechanisms prevent cells from dividing with damaged DNA, which forms, for example, in one's skin after a sunburn. Without such genetic fidelity mechanisms, cells would divide with missing or defective genes.

Now, a California Institute of Technology team has uncovered new details of how these checkpoints work at the molecular level.

Reporting in the March 10 issue of the journal Cell, Caltech senior research associate Akiko Kumagai and her colleagues show that a protein with the unusual name "TopBP1" is responsible for activating the cascade of reactions that prohibit cells from dividing with corrupted genetic blueprints. The researchers say that their result is a key molecular insight, and could possibly lead to molecular breakthroughs in cancer therapy someday.

"The function of the checkpoint control mechanisms is to preserve the integrity of the genome," says William Dunphy, the corresponding author of the paper and a professor of biology at Caltech. "When these genetic fidelity mechanisms do not function properly, it can lead to cancer and ultimately death."

The research began with a study of a protein called ATR that was known to be a key regulator of checkpoint responses. This protein is a vital component of every eukaryotic cell (in other words, the cells of most organisms on Earth excluding bacteria). ATR is a "kinase," an enzyme that controls other proteins by modifying them with phosphate groups.

However, no one knew how the cell turns on this enzymatic activity of ATR when needed. Figuring out how ATR gets activated to protect against mutations has been one of the field's most urgent questions for the past decade.

Acting on a hunch, the researchers decided to look at the TopBP1 protein, whose molecular function was hitherto mysterious. Strikingly, the team found that purified TopBP1 could bind directly to ATR and activate it. The activation was so quick and robust that the researchers knew immediately that they had found the long-sought activator of ATR and deciphered how cells mobilize their efforts to prevent mutations. Interestingly, the researchers found that only a small part of TopBP1 is necessary for activating ATR.

The researchers suspect that the remaining parts of TopBP1 hold additional secrets about checkpoint control mechanisms. Dunphy says that this molecular insight shows how a cancer-repressive mechanism works in a healthy cell. "Knowing how the normal system works might also help lead to insight on how to fix the system when it gets broken," he adds.

In addition to Kumagai and Dunphy, the other authors of the Cell paper are Joon Lee and Hae Yong Yoo, both senior research fellows at Caltech.

Writer: Robert Tindol

Old-World Primates Evolved Color Vision to Better See Each Other Blush, Study Reveals

PASADENA, Calif.—Your emotions can easily be read by others when you blush—at least by others familiar with your skin color. What's more, the blood rushing out of your face when you're terrified is just as telling. And when it comes to our evolutionary cousins the chimpanzees, they not only can see color changes in each other's faces, but in each other's rumps as well.

Now, a team of California Institute of Technology researchers has published a paper suggesting that we primates evolved our particular brand of color vision so that we could subtly discriminate slight changes in skin tone due to blushing and blanching. The work may answer a long-standing question about why trichromat vision (that is, color via three cone receptors) evolved in the first place in primates.

"For a hundred years, we've thought that color vision was for finding the right fruit to eat when it was ripe," says Mark Changizi, a theoretical neurobiologist and postdoctoral researcher at Caltech. "But if you look at the variety of diets of all the primates having trichromat vision, the evidence is not overwhelming."

Reporting in the current issue of the journal Biology Letters, Changizi and his coauthors show that our color cones are optimized to be sensitive to subtle changes in skin tone due to varying amounts of oxygenated hemoglobin in the blood.

The spectral sensitivity of the color cones is somewhat odd, Changizi says. Bees, for example, have four color cones that are evenly spread across the visible spectrum, with the high-frequency end extending into the ultraviolet. Birds have three color cones that are also evenly distributed in the visible spectrum.

The old-world primates, by contrast, have an "S" cone at about 440 nanometers (the wavelength of visible light roughly corresponding to blue light), an "M" cone sensitive at slightly less than 550 nanometers, and an "L" cone sensitive at slightly above 550 nanometers.

"This seems like a bad idea to have two cones so close together," Changizi says. "But it turns out that the closeness of the M and L cone sensitivities allows for an additional dimension of sensitivity to spectral modulation. Also, their spacing maximizes sensitivity for discriminating variations in blood oxygen saturation." As a result, a very slight lowering or rising of the oxygen in the blood is easily discriminated by any primate with this type of cone arrangement.

In fact, trichromat vision is sensitive not only for the perception of these subtle changes in color, but also for the perception of the absence or presence of blood. As a result, primates with trichromat vision are not only able to tell if a potential partner is having a rush of emotion due to the anticipation of mating, but also if an enemy's blood has drained out of his face due to fear.

"Also, ecologically, when you're more oxygenated, you're in better shape," Changizi adds, explaining that a naturally rosy complexion might be a positive thing for purposes of courtship.

Bolstering the hypothesis is the fact that the old-world trichromats tend to be bare-faced and bare-butted as well. "There's no sense in being able to see the slight color variations in skin if you can't see the skin," Changizi says. "And what we find is that the trichromats have bare spots on their faces, while the dichromats have furry faces."

"This could connect up with why we're the 'naked ape,'" he concludes. The few human spots that are not capable of signaling, because they are in secluded regions, tend to be hairy-such as the top of the head, the armpits, and the crotch. And when the groin occasionally does tend to exhibit bare skin, it occurs in circumstances in which a potential mate may be able to see that region.

"Our speculation is that the newly bare spots are for color signaling."

The other authors of the paper are Shinsuke Shimojo, a professor of biology at Caltech who specializes in psychophysics; and Qiong Zhang, an undergraduate at Caltech.

 

 

Writer: Robert Tindol

Study of 2004 Tsunami Disaster Forces Rethinking of Theory of Giant Earthquakes

PASADENA, Calif.—The Sumatra-Andaman earthquake of December 26, 2004, was one of the worst natural disasters in recent memory, mostly on account of the devastating tsunami that followed it. A group of geologists and geophysicists, including scientists at the California Institute of Technology, has delineated the full dimensions of the fault rupture that caused the earthquake.

Their findings, reported in the March 2 issue of the journal Nature, suggest that previous ideas about where giant earthquakes are likely to occur need to be revised. Regions of the earth previously thought to be immune to such events may actually be at high risk of experiencing them.

Like all giant earthquakes, the 2004 event occurred on a subduction megathrust, in this case the Sunda megathrust, a giant earthquake fault along which the Indian and Australian tectonic plates are diving beneath the margin of Southeast Asia. The fault surface that ruptured cannot be seen directly because it lies several kilometers deep in the Earth's crust, largely beneath the sea.

Nevertheless, the rupture of the fault caused movements at the surface as long-accumulating elastic strain was suddenly released. The researchers measured these surface motions by three different techniques. In one, they measured the shift in position of GPS stations whose locations had been accurately determined prior to the earthquake.

In the second method, they studied giant coral heads on island reefs: the top surfaces of these corals normally lie right at the water surface, so the presence of corals with tops above or below the water level indicated that the Earth's crust rose or fell by that amount during the earthquake.

Finally, the researchers compared satellite images of island lagoons and reefs taken before and after the earthquake: changes in the color of the seawater or reefs indicated a change in the water's depth and hence a rise or fall of the crust at that location.

On the basis of these measurements the researchers found that the 2004 earthquake was caused by rupture of a 1,600-kilometer-long stretch of the megathrust, by far the longest of any recorded earthquake. The breadth of the contact surface that ruptured ranged up to 150 kilometers. Over this huge contact area, the surfaces of the two plates slid against each other by up to 18 meters.

On the basis of these data, the researchers calculated that the so-called moment-magnitude of the earthquake (a measure of the total energy released) was 9.15, making it the third largest earthquake of the past 100 years and the largest yet recorded in the few decades of modern instrumentation.

"This earthquake didn't just break all the records, it also broke some of the rules," says Kerry Sieh, who is the Sharp Professor of Geology at Caltech and one of the authors of the Nature paper.

According to previous understanding, subduction megathrusts can only produce giant earthquakes if the oceanic plate is young and buoyant, so that it locks tightly against the overriding continental plate and resists rupture until an enormous amount of strain has accumulated.

Another commonly accepted idea is that the rate of relative motion between the colliding plates must be high for a giant earthquake to occur. Both these conditions are true off the southern coast of Chile, where the largest earthquake of the past century occurred in 1960. They are also true off the Pacific Northwest of the United States, where a giant earthquake occurred in 1700 and where another may occur before long.

But at the site of the 2004 Sumatra-Andaman earthquake the oceanic crust is old and dense, and the relative motion between the plates is quite slow. Yet another factor that should have lessened the likelihood of a giant earthquake in the Indian Ocean is the fact that the oceanic crust is being stretched by formation of a so-called back-arc basin off the continental margin.

"For all these reasons, received wisdom said that the giant 2004 earthquake should not have occurred," says Jean-Philippe Avouac, a Caltech professor of geology, who is also a contributor to the paper. "But it did, so received wisdom must be wrong. It may be, for example, that a slow rate of motion between the plates simply causes the giant earthquakes to occur less often, so we didn't happen to have seen any in recent times-until 2004."

Many subduction zones that were not considered to be at risk of causing giant earthquakes may need to be reassessed as a result of the 2004 disaster. "For example, the Ryukyu Islands between Taiwan and Japan are in an area where a large rupture would probably cause a tsunami that would kill a lot of people along the Chinese coast," says Sieh.

"And in the Caribbean, it could well be an error to assume that the entire subduction zone from Trinidad to Barbados and Puerto Rico is aseismic. The message of the 2004 earthquake to the world is that you shouldn't assume that your subduction zone, even though it's quiet, is incapable of generating great earthquakes."

According to Sieh, it's not that all subduction zones should now be assigned a high risk of giant earthquakes, but that better monitoring systems, such as networks of continuously recording GPS stations, should be put in place to assess their seismic potential.

"For most subduction zones, a $1 million GPS system would be adequate," says Sieh. "This is a small price to pay to assess the level of hazard and to monitor subduction zones with the potential to produce a calamity like the Sumatra-Andaman earthquake and tsunami. Caltech's Tectonics Observatory has, for example, begun to monitor the northern coast of Chile, where a giant earthquake last occurred in 1877."

In addition to Sieh and Avouac, the other authors of the Nature paper are Cecep Subarya of the National Coordinating Agency for Surveys and Mapping in Cibinong, Indonesia; Mohamed Chlieh and Aron Meltzner, both of Caltech's Tectonics Observatory; Linette Prawirodirdjo and Yehuda Bock, both of the Scripps Institution of Oceanography; Danny Natawidjaja of the Indonesian Institute of Sciences; and Robert McCaffrey of Rensselaer Polytechnic Institute.

 

Writer: Robert Tindol
