Caltech Researchers Create New Proteins by Recombining the Pieces of Existing Proteins

PASADENA, Calif.—An ongoing challenge in biochemistry is getting a handle on protein folding, that is, the way that DNA sequences determine the unique structures and functions of proteins, which act as "biology's workhorses." Gaining mastery over the construction of proteins could someday lead to breakthroughs in medicine and pharmaceuticals.

One method for studying the determinants of a protein's structure and function is to analyze numerous proteins with similar structure and function (a protein family) as a group. By studying families of natural proteins, researchers can tease out many of the fundamental interactions responsible for a given property.

A team of chemical engineers, chemists, and biochemists at the California Institute of Technology has now managed to create a large number of proteins that are very different in sequence yet retain similar structures. The scientists use computational tools to analyze protein structures and pinpoint locations at which the proteins can be broken apart and then reassembled, like Lego pieces. Each new construction is a protein with new functions and new potential enzyme activities.

Reporting in the April 10 issue of the journal Public Library of Science (PLoS) Biology, Caltech graduate student Christopher Otey and his colleagues show that they have taken three proteins from nature, broken each into eight pieces, and reconstructed the pieces to form many new proteins. According to Otey, with each parent protein divided into eight segments, the potential number of new proteins is three raised to the eighth power, or 6,561. "The result is an artificial protein family," Otey explains. "In this single experiment, we've been able to make about 3,000 new proteins."
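The arithmetic behind that figure is easy to reproduce. The following is a minimal, hypothetical sketch, not the team's design software; it simply counts and enumerates the possible block combinations:

```python
# Hypothetical sketch (not the team's actual software): counting and listing
# chimeric proteins built by choosing, at each of 8 segment positions, a block
# from one of 3 parent proteins.
from itertools import product

PARENTS = 3    # parent P450 proteins
SEGMENTS = 8   # blocks each parent is divided into

# Every chimera is one choice of parent per segment position.
total = PARENTS ** SEGMENTS
print(total)   # 6561, i.e. three raised to the eighth power

# List the first few chimeras as tuples of parent indices, one per segment.
for chimera in list(product(range(PARENTS), repeat=SEGMENTS))[:3]:
    print(chimera)   # e.g. (0, 0, 0, 0, 0, 0, 0, 0)
```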

About half of the 6,561 proteins are viable, having an average of about 72 sequence changes. "The benefit is that you can use the new proteins and new sequence information to learn new things about the original proteins," Otey adds. "For example, if a certain protein function depends on one amino acid that never changes, then the protein apparently must have that particular amino acid."

The proteins the team has been using are called cytochromes P450, which play critical roles in drug metabolism, hormone synthesis, and the biodegradation of many chemicals. Using computational techniques, the researchers predict how to break up this roughly 460-amino-acid protein into individual blocks of about 60 to 70 amino acids.

Otey says that this is an important result when compared with the old-fashioned way of obtaining protein sequences. Whereas researchers have fully determined 4,500 natural P450 sequences over the past 40 years, the Caltech team required only a few months to create 3,000 additional new sequences.

"Our goal in the lab is to be able to create a bunch of proteins very quickly," Otey says, "but the overall benefit is an understanding of what makes a protein do what it does and potentially the production of new pharmaceuticals, new antibiotics, and such.

"During evolution, nature conserves protein structure, which we do with the computational tools, while changing protein sequence which can lead to proteins with new functions," he says. "And new functions can ultimately result in new treatments."

In addition to Otey, the other authors of the paper are Frances Arnold (the corresponding author and Otey's supervising professor), who is Dickinson Professor of Chemical Engineering and Biochemistry at Caltech; Marco Landwehr, a postdoctoral scholar in biochemistry; Jeffrey B. Endelman, a recent Caltech graduate in bioengineering; Jesse Bloom, a graduate student in chemistry; and Kaori Hiraga, a Caltech postdoctoral scholar who is now at the New York State Department of Health.

The title of the article is "Structure-Guided Recombination Creates an Artificial Family of Cytochromes P450."

 

Writer: 
Robert Tindol

Fluid Mechanics Experts Come Up with New Test for Heart Disease

PASADENA, Calif.—Building on years of research on the way that blood flows through the heart valves, researchers from the California Institute of Technology and Oregon Health & Science University have devised a new index of cardiac health based on a simple ultrasound test. The index is now ready for use and provides cardiologists with a new diagnostic tool for detecting the very early signs of certain heart diseases.

In the April 18 issue of the journal Proceedings of the National Academy of Sciences (PNAS), the researchers show how ultrasound imaging can be used to create an extremely detailed picture of the jet of blood as it squirts through the cardiac left ventricle. Previous work by the Caltech team members has shown that there is an ideal length-to-diameter ratio for jets of fluid passing through valves, which means that any variation from this ratio indicates a heart that is pumping abnormally.

According to Mory Gharib, Liepmann Professor of Aeronautics and Bioengineering at Caltech, the ideal stroke ratio for cardiac function is four: a jet of fluid is most efficient when its length is four times the diameter of the valve it is traveling through. Since pioneering the study of vortices in biological fluid transport, Gharib has worked to apply that research to biomedical problems. The PNAS article presents the latest breakthrough.
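For readers who want the idea in back-of-the-envelope form, the sketch below computes a length-to-diameter stroke ratio from assumed, invented measurements; it illustrates the concept rather than the clinical index described in the paper.

```python
# Illustrative sketch: the stroke ratio is the length of the ejected column of
# blood divided by the valve diameter, with the length estimated as mean jet
# velocity times ejection time. All numbers below are invented.

def stroke_ratio(mean_velocity_cm_s, ejection_time_s, valve_diameter_cm):
    """Dimensionless length-to-diameter ratio of the jet (optimal is about 4)."""
    stroke_length_cm = mean_velocity_cm_s * ejection_time_s
    return stroke_length_cm / valve_diameter_cm

# A 100 cm/s jet lasting 0.12 s through a 3 cm valve gives a ratio of 4.0;
# values well above or below would flag inefficient vortex formation.
print(stroke_ratio(100.0, 0.12, 3.0))
```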

"Vortex formation defines the optimal output of the heart," says Gharib. "The size and shape of the vortex is a diagnostic tool because the information can reveal whether a patient's heart is healthy or if there are problems that will lead to enlargement."

In vivo and in vitro images taken by the Caltech team and their Oregon collaborators show that a healthy heart tends to form vortex rings in the blood as it passes through the left ventricle. If the valve is too large in diameter, the blood tends not to form strong vortices; if it is too narrow, the heart is much less energy efficient and must work harder to match the output of a healthy heart. In either case, non-optimal vortex formation is a sign of a malfunctioning heart.

The index that the researchers have created is a guide for cardiologists, who will be able to use a noninvasive ultrasound machine to image the heart, just as obstetricians use ultrasound devices to image developing fetuses. Thus, the technique can be used when the patient is at rest, unlike treadmill tests that can themselves pose a certain danger because they require patients to exert themselves.

"We're not saying that this technique replaces traditional diagnostic tools, but that it is another way of confirming if something is wrong," Gharib adds.

"We want to give people an earlier warning of disease with a new method that is non-invasive and relatively inexpensive," says John Dabiri, an assistant professor of aeronautics and bioengineering at Caltech and coauthor of the paper.

Continuing in vitro studies led by Arash Kheradvar, a medical doctor and graduate student in bioengineering at Caltech, are focused on correlating the new diagnostic index with specific symptoms of heart failure.

In addition to Gharib, the lead author, and Dabiri and Kheradvar, the authors are Edmond Rambod, a former postdoctoral researcher at Caltech, and David J. Sahn, a cardiologist at Oregon Health & Science University.

The title of the PNAS paper is "Optimal vortex formation as an index of cardiac health."

 

Writer: 
Robert Tindol

Caltech Physicists and International MINOS Team Discover New Details of Why Neutrinos Disappear

PASADENA, Calif.—Physicists from the California Institute of Technology and an international collaboration of scientists at the Department of Energy's Fermi National Accelerator Laboratory have observed the disappearance of muon neutrinos traveling from the lab's site in Illinois to a particle detector in Minnesota. The observation is consistent with an effect known as neutrino oscillation, in which neutrinos change from one kind to another.

The Main Injector Neutrino Oscillation Search (MINOS) experiment at Fermilab's site in Batavia, Illinois, revealed a value of delta m^2 = 0.0031 eV^2, a quantity that plays a crucial role in neutrino oscillations and hence in the part neutrinos play in the evolution of the universe.

The MINOS detector concept and design were originated by Caltech physicist Doug Michael. Caltech physicists also built half of the massive set of scintillator planes for the five-kiloton far detector. Michael also led the formulation of, and pushed forward, the program to increase the intensity of the proton beams that serve as the source of the neutrinos used by MINOS, leading to the present results.

"Only a year ago we launched the MINOS experiment," said Fermilab director Pier Oddone. "It is great to see that the experiment is already producing important results, shedding new light on the mysteries of the neutrino."

Nature provides three types of neutrinos, yet scientists know very little about these "ghost particles," which can traverse the entire Earth without interacting with matter. But the abundance of neutrinos in the universe, produced by stars and nuclear processes, may explain how galaxies formed and why antimatter has disappeared. Ultimately, these elusive particles may explain the origin of the neutrons, protons, and electrons that make up all the matter in the world around us.

"Using a man-made beam of neutrinos, MINOS is a great tool to study the properties of neutrinos in a laboratory-controlled environment," said Stanford University professor Stan Wojcicki, spokesperson of the experiment. "Our first result corroborates earlier observations of muon neutrino disappearance, made by the Japanese Super-Kamiokande and K2K experiments. Over the next few years, we will collect about 15 times more data, yielding more results with higher precision, paving the way to better understanding this phenomenon. Our current results already rival the Super-Kamiokande and K2K results in precision."

Neutrinos are hard to detect, and most of the neutrinos traveling the 450 miles from Fermilab to Soudan, Minnesota (straight through the earth, no tunnel needed) leave no signal in the MINOS detector. If neutrinos had no mass, the particles would not change as they traverse the earth, and the MINOS detector in Soudan would have recorded 177 +/- 11 muon neutrinos. Instead, the MINOS collaboration found only 92 muon neutrino events, a clear observation of muon neutrino disappearance and hence neutrino mass.
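The size of such a deficit follows from the standard two-flavor oscillation formula. The sketch below is illustrative only; the baseline, beam energy, and mixing angle are assumed values, not the MINOS fit.

```python
# Illustrative two-flavor neutrino oscillation sketch (assumed parameters, not
# the MINOS analysis). Survival probability for a muon neutrino:
#   P = 1 - sin^2(2*theta) * sin^2(1.267 * dm2 * L / E)
# with dm2 in eV^2, L in km, and E in GeV.
import math

def muon_neutrino_survival(dm2_ev2, length_km, energy_gev, sin2_2theta=1.0):
    phase = 1.267 * dm2_ev2 * length_km / energy_gev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Assumed values: a ~735 km baseline (the roughly 450-mile Fermilab-to-Soudan
# distance), maximal mixing, a 2 GeV neutrino, and the delta m^2 quoted above.
print(muon_neutrino_survival(dm2_ev2=0.0031, length_km=735.0, energy_gev=2.0))
```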

The deficit as a function of energy is consistent with the hypothesis of neutrino oscillations and yields a value of delta m^2, the square of the mass difference between two different types of neutrinos, equal to 0.0031 eV^2 +/- 0.0006 eV^2 (statistical uncertainty) +/- 0.0001 eV^2 (systematic uncertainty). In this scenario, muon neutrinos can transform into electron neutrinos or tau neutrinos, but alternative models, such as neutrino decay and extra dimensions, are not yet excluded. It will take much more data from the MINOS collaboration to pin down the exact nature of the disappearance process.

Details of the current MINOS results were presented by David Petyt of the University of Minnesota at a special seminar at Fermilab on March 30. On Friday, March 31, the MINOS collaboration commemorated Michael, who was the MINOS co-spokesperson, at a memorial service at Fermilab. Michael died on December 25, 2005, at the age of 45 after a yearlong battle with cancer.

The MINOS experiment includes about 150 scientists, engineers, technical specialists, and students from 32 institutions in six countries, including Brazil, France, Greece, Russia, the United Kingdom, and the United States. The institutions include universities as well as national laboratories. The U.S. Department of Energy provides the major share of the funding, with additional funding from the U.S. National Science Foundation and from the United Kingdom's Particle Physics and Astronomy Research Council (PPARC).

"The MINOS experiment is a hugely important step in our quest to understand neutrinos-we have created neutrinos in the controlled environment of an accelerator and watched how they behave over very long distances," said Professor Keith Mason, CEO of PPARC. "This has told us that they are not totally massless as was once thought, and opens the way for a detailed study of their properties. U.K. scientists have taken key roles in developing the experiment and in exploiting the data from it, the results of which will shape the future of this branch of physics." The Fermilab side of the MINOS experiment consists of a beam line in a 4,000-foot-long tunnel pointing from Fermilab to Soudan. The tunnel holds the carbon target and beam focusing elements that generate the neutrinos from protons accelerated by Fermilab's main injector accelerator. A neutrino detector, the MINOS "near detector" located 350 feet below the surface of the Fermilab site, measures the composition and intensity of the neutrino beam as it leaves the lab. The Soudan side of the experiment features a huge 6,000-ton particle detector that measures the properties of the neutrinos after their 450-mile trip to northern Minnesota. The cavern housing the detector is located half a mile underground in a former iron mine.

The MINOS neutrino experiment follows a long tradition of studying neutrino properties originated at Caltech by physics professor (and former LIGO laboratory director) Barry Barish. Earlier measurements by the Monopole Astrophysics and Cosmic Ray Observatory (MACRO) experiment at the Gran Sasso laboratory in Italy, led by Barish, also showed evidence for the oscillation of neutrinos produced by the interactions of cosmic rays in the atmosphere.

The MINOS result also complements results from the K2K long-baseline neutrino experiment in Japan. In 1999-2001 and 2003-2004, the K2K experiment sent neutrinos from an accelerator at the KEK laboratory in Tsukuba to a particle detector in Kamioka, a distance of about 150 miles. Compared to K2K, the MINOS experiment uses a distance three times longer, and the intensity and energy of the MINOS neutrino beam are higher than those of the K2K beam. These advantages have enabled the MINOS experiment to observe, in less than one year, about three times more neutrinos than the K2K experiment did in about four years.

"It is a great gift for me to hear this news," said Yoji Totsuka, former spokesperson of the Super-Kamiokande experiment and now director general of KEK. "Neutrino oscillation was first established in 1998, with cosmic-ray data taken by Super-Kamiokande. The phenomenon was then corroborated by the K2K experiment with a neutrino beam from KEK. Now MINOS gives firm results in a totally independent experiment. I really congratulate their great effort to obtain the first result in such a short timescale."

According to Harvey Newman, a professor of physics at Caltech who now leads the MINOS group, the campus group has also had a key role in the research and development of the MINOS scintillators and optical fibers.

"Our Caltech group, then led by Michael, also had a key role in the research and development of the scintillators and optical fibers that led to MINOS having enough light to measure the muons that signal neutrino events.

"We are also working on the analysis of electron-neutrino events that could lead to a determination of the subdominant mixing between the first and third neutrino flavors, which is one of the next major steps in understanding the mysterious nature of neutrinos and their flavor-mixings. We are also leading the analysis of antineutrinos in the data, and the prospects for MINOS to determine the mixing of antineutrinos, where comparison of neutrinos and antineutrinos will test one of the most fundamental symmetries of nature (known as CPT).

"We are leading much of the R&D for the next generation, 25-kiloton detector called NOvA. Building on our experience in MINOS, we have designed the basic 50-foot-long liquid scintillator cell which contains a single 0.8 mm optical fiber to collect the light (there will be approximately 600,000 cells). We will measure and optimize the design this year in Lauritsen [a physics building on the Caltech campus], in time for the start of NOvA construction that will be completed by approximately 2010. We've also started prototype work on a future generation of megaton-scale detectors for neutrino and ultrahigh-energy cosmic rays. This has generated a lot of interest among Caltech undergraduates, who are now starting to contribute to these developments in the lab."

###

 

More information on the MINOS experiment: http://www-numi.fnal.gov/

List of institutions collaborating on MINOS: http://www-numi.fnal.gov/collab/institut.html

The members of the Caltech MINOS group: Caius Howcroft, Harvey Newman, Juan "Pedro" Ochoa, Charles Peck, Jason Trevor, and Hai Zheng.

Writer: 
Robert Tindol

Researchers Determine How Plants Decide Where to Position Their Leaves and Flowers

PASADENA, Calif.—One of the quests of modern biologists is to understand how cells talk to each other in order to determine where to form major organs. An international team of biologists has solved a part of this puzzle by combining state-of-the-art imaging and mathematical modeling to reveal how plants go about positioning their leaves and flowers.

In the January 31 issue of the Proceedings of the National Academy of Sciences (PNAS), researchers from the California Institute of Technology, the University of California at Irvine, and Lund University in Sweden reported their success in determining how a plant hormone known as auxin affects plant organ positioning. Experts already knew that auxin played some role in the development of plant organs, but the new study employs imaging techniques and computer modeling to propose a new theory about how the mechanism works.

The research involves the growing tip of the shoot of the plant Arabidopsis thaliana, a relative of the mustard plant that has been studied intensely by modern biologists. With its simple and very well understood genome, Arabidopsis lends itself to a wide variety of experiments.

The researchers' achievement is their demonstration of how plant cells, using purely local information about the auxin concentrations of their nearest neighbors, can communicate to determine the positions of new flowers or leaves, which form in a regular pattern with many cells separating the newly formed primordia (the first traces of an organ or structure). The authors theorize that the template the plant uses to lay out these larger parts arises from two mechanisms: a feedback loop between auxin and its own polarized transport, and the dynamic geometry created as cells grow and divide.
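A cartoon of the first mechanism can be written in a few lines. The sketch below is a hypothetical illustration, not the published model: each cell splits its auxin export between its two neighbors in proportion to how much auxin those neighbors already hold, the kind of positive feedback that lets small differences grow into well-spaced peaks.

```python
# Hypothetical illustration of polarized, "up-the-gradient" auxin transport on
# a ring of cells (not the published model). A cell divides its export between
# its two neighbors in proportion to the auxin those neighbors already hold.

def export_fluxes(auxin, i, rate=1.0):
    """Return (flux to left neighbor, flux to right neighbor) for cell i."""
    left = auxin[(i - 1) % len(auxin)]
    right = auxin[(i + 1) % len(auxin)]
    total = left + right
    return rate * auxin[i] * left / total, rate * auxin[i] * right / total

# Three cells in which the right-hand neighbor holds slightly more auxin:
cells = [1.00, 1.00, 1.02]
to_left, to_right = export_fluxes(cells, 1)
print(to_left, to_right)   # the middle cell already exports more to the right,
                           # reinforcing the fluctuation
```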

To capture the development, Elliot Meyerowitz, Beadle Professor of Biology and chair of the Division of Biology at Caltech, and his team used green fluorescent proteins to mark specific cell types in the plant's meristem, the plant tissue in which regulated cell division, pattern formation, and differentiation give rise to plant parts like leaves and flowers.

The fluorescent markers allowed the group to image cell lineages through meristem development and differentiation, leading to the specific arrangement of leaves and reproductive growth, and also to follow changes in the concentration and movement of auxin.

Although the study applies specifically to the Arabidopsis plant, Meyerowitz says the mechanism is probably similar for other plants and even other biological systems in which patterning occurs in the course of development.

In addition to Meyerowitz, the paper's authors are Henrik Jönsson of Lund University, Marcus G. Heisler of Caltech's Division of Biology, Bruce E. Shapiro of Caltech's Biological Network Modeling Center, and Eric Mjolsness of UC Irvine's Institute of Genomics and Bioinformatics and department of computer science.

 

Writer: 
Robert Tindol

Fault That Produced Largest Aftershock Ever Recorded Still Poses Threat to Sumatra

PASADENA, Calif.—A mere three months after the giant Sumatra-Andaman earthquake and tsunami of December 2004, tragedy struck again when another great earthquake shook the area just to the south, killing over 2,000 Indonesians. Although technically an aftershock of the 2004 event, the 8.7-magnitude Nias-Simeulue earthquake just over a year ago was itself one of the most powerful earthquakes ever recorded. Only six others have had greater magnitudes.

In the March 31 issue of the journal Science, a team of researchers led by Richard Briggs and Kerry Sieh of the California Institute of Technology reconstruct the fault rupture that caused the March 28, 2005, event from detailed measurements of ground displacements. Their analysis shows that the fault broke along a 400-kilometer length, and that the length of the break was limited by unstrained sections of the fault on either end.

The researchers continue to express concern that another section of the great fault, south of the 2005 rupture, is likely to cause a third great earthquake in the not-too-distant future. The surface deformation they observed in the 2005 rupture area may well be similar to what will occur when the section to the south ruptures.

Briggs, a postdoctoral scholar in Caltech's new Tectonics Observatory, and his colleagues determined the vertical displacements of the Sumatran islands that are directly over the deeply buried fault whose rupture generated the 2005 earthquake. The main technique they used was the examination of coral heads growing near the shore. The tops of these heads stay just at the waterline, so if they move higher or lower, it indicates that there has been uplift or subsidence.

The researchers also obtained data on ground displacements from GPS stations that they had rushed into place after the 2004 earthquake. "We were fortunate to have installed the geodetic instruments right above the part that broke," says Kerry Sieh, who leads the Sumatran project of Caltech's Tectonics Observatory. "This is the closest we've ever gotten to such a large earthquake with continuously recording GPS instruments."

From the coral and GPS measurements, the researchers found that the 2005 earthquake was associated with uplift of up to three meters over a 400-kilometer stretch of the Sunda megathrust, the giant fault where Southeast Asia is overriding the Indian and Australian plates. This stretch lies to the south of the 1600-kilometer section of the fault that ruptured in 2004.

Actual slippage on the megathrust surface (about 25 kilometers below the islands) was over 11 meters. The data permitted calculation of the earthquake's magnitude at 8.6, nearly the same as estimates based on seismological recordings.
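To see how a magnitude follows from such numbers, the sketch below applies the standard seismic-moment formula; the 400-kilometer length comes from the article, while the rupture width, average slip, and rock rigidity are assumed round values, not the study's own.

```python
# Back-of-the-envelope sketch: moment magnitude from fault geometry and slip.
# Seismic moment M0 = rigidity * rupture area * average slip (in N*m), and
# Mw = (2/3) * (log10(M0) - 9.1). Width, average slip, and rigidity below are
# assumed round numbers for illustration.
import math

def moment_magnitude(length_m, width_m, avg_slip_m, rigidity_pa=3.0e10):
    seismic_moment = rigidity_pa * length_m * width_m * avg_slip_m  # N*m
    return (2.0 / 3.0) * (math.log10(seismic_moment) - 9.1)

# A 400-km-long rupture, assumed ~150 km wide with ~6 m of average slip:
print(round(moment_magnitude(400e3, 150e3, 6.0), 1))   # about 8.6
```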

Most of the deaths in the 2005 earthquake were the direct result of shaking and the collapse of buildings. The earthquake did not trigger a disastrous tsunami comparable to the one that followed the 2004 event. In part, this was because the 2005 rupture was smaller, about one-quarter the length and one-half the slip.

In addition, the largest uplift lay under offshore islands, where there was no water to be displaced. Finally, by rising during the earthquake, the islands gained some instant, extra protection for when the tsunami reached them tens of minutes later.

The scientists were surprised to find that the southern end of the 2004 rupture and the northern end of the 2005 rupture did not quite abut each other, but were separated by a short segment under the island of Simeulue on which the amount of slip was nearly zero. They infer that this segment had not accumulated enough strain to rupture during either event; perhaps, they speculate, because it slips frequently and therefore relieves strain without generating large earthquakes.

Thus, this segment might act as a barrier to rupture propagation. A similar 170-kilometer "creeping" section of the San Andreas fault, between San Francisco and Los Angeles, separates the long section that produced Northern California's great 1906 earthquake from the long section that ruptured during Southern California's great 1857 earthquake.

The southern end of the 2005 rupture was at another short "creeping" segment or weak patch. "Both ends of the 2005 rupture seem to have been at the edges of a weak patch," Sieh explains. The 2005 event therefore probably represents a "characteristic earthquake" that has recurred often over geological time. In fact, old historical records suggest that a very similar earthquake was caused by a rupture of this segment in 1861.

Sieh suggests that installing GPS instruments along the world's other subduction megathrusts could help define more clearly which sections creep stably and which are locked and thus more likely to break in infrequent, but potentially devastating, ruptures.

Previous work by the Caltech group and their Indonesian colleagues has shown that south of the southern creeping segment lies another locked segment, about 600 kilometers long, which has not broken since a magnitude 9.0 earthquake in 1833. Corals and coastlines along the southern segment record decades of continual, pronounced subsidence, similar to the behavior of the northern region prior to its abrupt uplift during the 2005 fault rupture.

"This southern part is very likely about ready to go again," Sieh says. "It could devastate the coastal communities of southwestern Sumatra, including the cities of Padang and Bengkulu, with a combined population of well over a million people. It could happen tomorrow, or it could happen 30 years from now, but I'd be surprised if it were delayed much beyond that."

Sieh and his colleagues are engaged in efforts to increase public awareness and preparedness for future great earthquakes and tsunamis in Sumatra.

The Science paper is titled "Deformation and slip along the Sunda megathrust in the great 2005 Nias-Simeulue earthquake." The other authors are Aron Meltzner, John Galetzka, Ya-ju Hsu, Mark Simons, and Jean-Philippe Avouac, all at Caltech's Tectonics Observatory; Danny Natawidjaja, Bambang Suwargadi, Nugroho Hananto, and Dudi Prayudi, all at the Indonesian Institute of Sciences; Imam Suprihanto of Jakarta; and Linette Prawirodirdjo and Yehuda Bock at the Scripps Institution of Oceanography.

The research was funded by the Gordon and Betty Moore Foundation, the National Science Foundation, and NASA.

Writer: 
Robert Tindol

Neuroscientists Discover the Neurons That Act As Novelty Detectors in the Human Brain

PASADENA, Calif.—By studying epileptic patients awaiting brain surgery, neuroscientists for the first time have located single neurons that are involved in recognizing whether a stimulus is new or old. The discovery demonstrates that the human brain not only has neurons for processing new information never seen before, but also neurons to recognize old information that has been seen just once.

In the March 16 issue of the journal Neuron, researchers from the California Institute of Technology, the Howard Hughes Medical Institute, and the Huntington Memorial Hospital report their success in distinguishing single-trial learning events from novel stimuli in six patients awaiting surgery for drug-resistant epileptic seizures. As part of the preparation for surgery, the patients have had electrodes implanted in their medial temporal lobes. Inserting small additional wires inside the clinical electrodes provides a way for researchers to observe the firing of individual human brain cells.

According to lead author Ueli Rutishauser, a graduate student in the computation and neural systems program at Caltech, the neurons are located in the hippocampus and amygdala, two limbic structures situated deep within the brain. Both regions are known to be important for learning and memory, but neuroscientists had never been able to establish the role of individual brain cells during single-trial learning until now.

"This is an unprecedented look at single-trial learning," explains Rutishauser, who works in the lab of Erin Schuman, a Caltech professor of biology and senior author of the paper. "It shows that single-trial learning is observable at the single-cell level. We've suspected it for a long time, but it has proven difficult to conduct these experiments with laboratory animals because you can't ask the animal whether it has seen something only once—500 times, yes, but not once."

With the patients volunteering to do perceptual studies while their brain activity is being recorded, however, such experiments are entirely possible. For the study, the researchers showed the six volunteers 12 different visual images, each presented once and randomly in one of four quadrants on a computer screen. Each subject was instructed to remember both the identity and position of the image or images presented.

After a 30-minute or 24-hour delay, each subject was shown previously viewed images or new images presented at the center of the screen, and asked whether each image was new or old. For each image identified as familiar, the subject was also asked to identify the quadrant in which the stimulus was originally presented.

The six subjects correctly recognized nearly 90 percent of the images they had already seen, but were less able to correctly recall the quadrant location in which the images had originally appeared. The researchers identified individual neurons that increased their firing rate either for novel stimuli or for familiar stimuli, but not both. These neurons thus responded differently to the same stimulus, depending on whether it was seen the first or the second time.

The finding that certain individual neurons fire only upon recognition of something seen before demonstrates that the brain contains "familiarity detector" neurons, which may explain why a person can have the feeling of having seen a face sometime in the past. Further, these neurons continue to fire and signal the familiarity of a stimulus even when the subject mistakenly reports that the stimulus is new.

This type of neuron can account for subconscious recollections. "Even if the patients think they haven't seen the stimulus, their neurons still indicate that they have," Rutishauser says.

The third author of the paper is Adam Mamelak, who is a neurosurgeon at the Huntington Memorial Hospital and the Maxine Dunitz Neurosurgical Institute at Cedars-Sinai Medical Center.

Schuman is professor of biology and executive officer for neurobiology at Caltech and an investigator with the Howard Hughes Medical Institute.

 

Writer: 
Robert Tindol

Astronomers Discover a River of Stars Streaming Across the Northern Sky

PASADENA, Calif.—Astronomers have discovered a narrow stream of stars extending at least 45 degrees across the northern sky. The stream is about 76,000 light-years distant from Earth and forms a giant arc over the disk of the Milky Way galaxy.

In the March issue of the Astrophysical Journal Letters, Carl Grillmair, an associate research scientist at the California Institute of Technology's Spitzer Science Center, and Roberta Johnson, a graduate student at California State University Long Beach, report on the discovery.

"We were blown away by just how long this thing is," says Grillmair. "As one end of the stream clears the horizon this evening, the other will already be halfway up the sky."

The stream begins just south of the bowl of the Big Dipper and continues in an almost straight line to a point about 12 degrees east of the bright star Arcturus in the constellation Bootes. The stream emanates from a cluster of about 50,000 stars known as NGC 5466.

The newly discovered stream extends both ahead and behind NGC 5466 in its orbit around the galaxy. This is due to a process called tidal stripping, which results when the force of the Milky Way's gravity is markedly different from one side of the cluster to the other. This tends to stretch the cluster, which is normally almost spherical, along a line pointing towards the galactic center.

At some point, particularly when its orbit takes it close to the galactic center, the cluster can no longer hang onto its most outlying stars, and these stars drift off into orbits of their own. The lost stars that find themselves between the cluster and the galactic center begin to move slowly ahead of the cluster in its orbit, while the stars that drift outwards, away from the galactic center, fall slowly behind.

Ocean tides are caused by exactly the same phenomenon, though in this case it's the difference in the moon's gravity from one side of Earth to the other that stretches the oceans. If the gravity at the surface of Earth were very much weaker, then the oceans would be pulled from the planet, just like the stars in NGC 5466's stream.
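For a rough sense of scale in that analogy, the short calculation below uses textbook values for the Moon and Earth; the point is simply how small the tidal difference is compared with surface gravity.

```python
# Quick worked number for the tidal analogy (textbook values). The difference
# in the Moon's pull between Earth's near and far sides is roughly
# 2 * G * M_moon * R_earth / d^3, to be compared with Earth's surface gravity.
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.35e22           # mass of the Moon, kg
EARTH_MOON_DIST = 3.84e8   # Earth-Moon distance, m
R_EARTH = 6.371e6          # radius of Earth, m

tidal_accel = 2 * G * M_MOON * R_EARTH / EARTH_MOON_DIST ** 3
print(tidal_accel)          # about 1e-6 m/s^2
print(tidal_accel / 9.81)   # roughly one ten-millionth of surface gravity,
                            # which is why the oceans bulge but stay put
```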

Despite its size, the stream has never previously been seen because it is so completely overwhelmed by the vast sea of foreground stars that make up the disk of the Milky Way. Grillmair and Johnson found the stream by examining the colors and brightnesses of more than nine million stars in the Sloan Digital Sky Survey public database.

"It turns out that, because they were all born at the same time and are situated at roughly the same distance, the stars in globular clusters have a fairly unique signature when you look at how their colors and brightnesses are distributed," says Grillmair.

Using a technique called matched filtering, Grillmair and Johnson assigned to each star a probability that it might once have belonged to NGC 5466. By looking at the distribution of these probabilities across the sky, "the stream just sort of reached out and smacked us.

"The new stream may be even longer than we know, as we are limited at the southern end by the extent of the currently available data," he adds. "Larger surveys in the future should be able to extend the known length of the stream substantially, possibly even right around the whole sky."

The stars that make up the stream are much too faint to be seen by the unaided human eye. Owing to the vast distances involved, they are about three million times fainter than even the faintest stars that we can see on a clear night.

Grillmair says that such discoveries are important for our understanding of what makes up the Milky Way galaxy. Like earthbound rivers, such tidal streams can tell us which way is "down," how steep the slope is, and where the mountains and valleys are located.

By measuring the positions and velocities of the stars in these streams, astronomers hope to determine how much "dark matter" the Milky Way contains, and whether the dark matter is distributed smoothly, or in enormous orbiting chunks.

Writer: 
Robert Tindol

Caltech Scientists Discover the Part of the Brain That Causes Some People to Be Lousy in Math

PASADENA, Calif.—Almost everyone knows that the term "dyslexia" refers to people who can't keep words and letters straight. A rarer term is "dyscalculia," which describes someone who is virtually unable to deal with numbers, much less do complicated math.

Scientists now have discovered the area of the brain linked to dyscalculia, demonstrating that there is a specific part of the brain essential for counting properly. In a report published in the March 13 issue of the Proceedings of the National Academy of Sciences (PNAS), researchers explain that the area of the brain known as the intraparietal sulcus (IPS), located toward the top and back of the brain and across both lobes, is crucial for the proper processing of numerical information.

According to Fulvia Castelli, a postdoctoral researcher at the California Institute of Technology and lead author of the paper, the IPS has been known for years as the brain area that allows humans to conceive of numbers. But she and her coauthors from University College London demonstrate that the IPS specifically determines how many things are perceived, as opposed to how much.

To explain how intimately the two different modes of thinking are connected, Castelli says to think about what happens when a person is approaching the checkout lines at the local Trader Joe's. Most of us are impatient sorts, so we typically head for the shortest line.

"Imagine how you really pick the shortest checkout line," says Castelli. "You could count the number of shoppers in each line, in which case you'd be thinking discretely in terms of numerosity.

"But if you're a hurried shopper, you probably take a quick glance at each line and pick the one that seems the shortest. In this case you're thinking in terms of continuous quantity."

The two modes of thinking are so similar, in fact, that scientists have had trouble isolating specific networks within the IPS because it is very difficult to distinguish between responses of how many and how much. To get at the difference between the two forms of quantity processing, Castelli and her colleagues devised a test in which subjects performed quick estimations of quantity while under functional MRI scans.

Specifically, the researchers showed subjects a series of blue and green flashes of light or a chessboard with blue and green rectangles. The subjects were asked to judge whether they saw more green or more blue, and their brain activity was monitored while they did so.

The results show that while subjects are exposed to the separate colors, the brain automatically counts how many objects are present. However, when subjects are presented with either a continuous blue and green light or a blurred chessboard on which the single squares are no longer visible, the brain does not count the objects, but instead estimates how much blue and green is visible.

"We think this identifies the brain activity specific to estimating the number of things," Castelli says. "This is probably also a brain network that underlies arithmetic, and when it's abnormal, may be responsible for dyscalculia."

In other words, dyscalculia arises because a person cannot develop adequate representations of how many things there are.

"Of course, dyscalculics can learn to count," Castelli explains. "But where most people can immediately tell that nine is bigger than seven, anyone with dcyscalculia may have to count the objects to be sure.

"Similarly, dyscalculics are much slower than people in general when they have to say how many objects there are in a set," she adds. "This affects everyday life, from the time when a child is struggling to keep up with arithmetic lessons in school to the time when an adult is trying to deal with money."

The good news is that the work of Castelli and her colleagues could lead to better tools for assessing whether a learning technique for people with dyscalculia is actually working. "Now that we have identified the brain system that carries out this function, we are in a position to see how dyscalculic brain activities differ from a normal brain," Castelli says.

"We should be in a position to measure whether an intervention is changing the brain function so that it becomes more like the normal pattern."

The article is titled "Discrete and analogue quantity processing in the parietal lobe: A functional MRI study." Castelli's coauthors are Daniel E. Glaser and Brian Butterworth, both researchers at the Institute of Cognitive Neuroscience at University College London.

Writer: 
Robert Tindol

Caltech Scientist Creates New Method for Folding Strands of DNA to Make Microscopic Structures

PASADENA, Calif.—In a new development in nanotechnology, a researcher at the California Institute of Technology has devised a way of weaving DNA strands into any desired two-dimensional shape or figure, which he calls "DNA origami."

According to Paul Rothemund, a senior research fellow in computer science and computation and neural systems, the new technique could be an important tool in the creation of new nanodevices, that is, devices whose measurements are a few billionths of a meter in size.

"The construction of custom DNA origami is so simple that the method should make it much easier for scientists from diverse fields to create and study the complex nanostructures they might want," Rothemund explains.

"A physicist, for example, might attach nano-sized semiconductor 'quantum dots' in a pattern that creates a quantum computer. A biologist might use DNA origami to take proteins which normally occur separately in nature, and organize them into a multi-enzyme factory that hands a chemical product from one enzyme machine to the next in the manner of an assembly line."

Reporting in the March 16 issue of the journal Nature, Rothemund describes how long single strands of DNA can be folded back and forth, tracing a mazelike path, to form a scaffold that fills up the outline of any desired shape. To hold the scaffold in place, 200 or more short DNA strands are designed to bind the scaffold and staple it together.
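The design principle can be illustrated with a toy calculation; this is not Rothemund's design program, and the sequences are invented. A staple is simply the reverse complement of the scaffold regions it is meant to pin together.

```python
# Toy illustration of staple design (invented scaffold, not the real software):
# a staple strand is the reverse complement of the scaffold regions it pins
# together, so the design problem is choosing which scaffold segments end up
# adjacent after folding.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def staple_for(scaffold, region_a, region_b):
    """Staple binding two scaffold regions brought side by side by the fold."""
    return reverse_complement(scaffold[region_a] + scaffold[region_b])

scaffold = "ATGCGTACCTTAGGCATCGA"   # invented stand-in for the long scaffold
# A staple crossing from scaffold bases 0-5 to bases 12-17:
print(staple_for(scaffold, slice(0, 6), slice(12, 18)))
```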

Each of the short DNA strands can act something like a pixel in a computer image, resulting in a shape that can bear a complex pattern, such as words or images. The resulting shapes and patterns are each about 100 nanometers in diameter, or about a thousand times smaller than the diameter of a human hair. The dots themselves are six nanometers in diameter.

While the folding of DNA into shapes that have nothing to do with the molecule's genetic information is not a new idea, Rothemund's efforts provide a general way to quickly and easily create any shape. In the last year, Rothemund has created half a dozen shapes, including a square, a triangle, a five-pointed star, and a smiley face, each one several times more complex than any previously constructed DNA objects. "At this point, high-school students could use the design program to create whatever shape they desired," he says.

Once a shape has been created, adding a pattern to it is particularly easy, taking just a couple of hours for any desired pattern. As a demonstration, Rothemund has spelled out the letters "DNA," and has drawn a rough picture of a double helix, as well as a map of the western hemisphere in which one nanometer represents 200 kilometers.

Although Rothemund has hitherto worked on two-dimensional shapes and structures, he says that 3-D assemblies should be no problem. In fact, researchers at other institutions are already using his method to attempt the building of 3-D cages. One biomedical application that Rothemund says could come of this particular effort is the construction of cages that would sequester enzymes until they were ready for use in turning other proteins on or off.

The original idea for using DNA to create shapes and structures came from Nadrian Seeman of New York University. Another pioneer in the field is Caltech's Assistant Professor of Computer Science and Computation and Neural Systems Erik Winfree, in whose group Rothemund works.

"In this research, Paul has scored a few unusual `firsts' for humanity," Winfree says. "In a typical reaction, he can make about 50 billion 'smiley-faces.' I think this is the most concentrated happiness ever created.

"But the applications of this technology are likely to be less whimsical," Winfree adds. "For example, it can be used as a 'nanobreadboard' for attaching almost arbitrary nanometer-scale components. There are few other ways to obtain such precise control over the arrangement of components at this scale."

The title of the Nature paper is "Folding DNA to create nanoscale shapes and patterns."

Writer: 
Robert Tindol

Researchers Create New "Matchmaking Service" Computer System to Study Gene Interactions

PASADENA, Calif.—Biologists in recent years have identified every individual gene in the genomes of several organisms. While this has been quite an accomplishment in itself, the further goal of figuring out how these genes interact is truly daunting.

The difficulty lies in the fact that genes can pair up in a gigantic number of ways. If an organism has a genome of 20,000 genes, for example, the total number of pairwise combinations is a staggering 200 million possible interactions.
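That figure is just the count of unordered gene pairs; a one-line check:

```python
# The arithmetic behind the "200 million" figure: unordered pairs of genes.
from math import comb

print(comb(20_000, 2))   # 199,990,000, i.e. roughly 200 million possible pairs
```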

Researchers can indeed perform experiments to see what happens when two genes interact, but 200 million is an enormous number of experiments, says Weiwei Zhong, a postdoctoral scholar at the California Institute of Technology. "The question is whether we can prioritize which experiments we should do in order to save a lot of time."

To get at this issue, Zhong and her supervising professor, Paul Sternberg, have devised a database-mining method for making predictions about genetic interactions. In the current issue of the journal Science, they report on a procedure that computationally integrates several sources of data from several organisms to study the tiny worm Caenorhabditis elegans, a nematode commonly used in biological experiments.

This is possible because various organisms have a large number of genes in common. Humans and nematodes, for example, share about 40 percent of their genes. A genetic-interaction network therefore provides a faster and better way of determining how certain genes interact. Such a network also shows whether anyone has ever done an experiment to determine the interaction of two particular genes in one of several species.

"This process works like a matchmaking service for the genes," says Zhong. "It provides you with candidate matches that most likely will be interacting genes, based upon a number of specified features."

The benefit, she adds, is that biologists do not need to do a huge number of random experiments to verify whether two genes indeed interact. Instead of running on the order of 20,000 experiments to test one gene against every other gene in a 20,000-gene organism, an experimenter might get by with 10 to 50 targeted experiments.

"The beneft is that you can be through in a month instead of years," says Sternberg. "Also, you can do experiments that are careful and detailed, which may take a day, and still be finished in a month."

To build the computational system, the researchers constructed a "training set" of nematode gene pairs. The "positives" for genetic interactions were taken from 4,775 known pairwise interactions in nematodes.

By "training" the system, Zhong and Sternberg arrived at a way to rapidly arrive at predictions of whether two genes would interact or not.

According to Sternberg, who is the Morgan Professor of Biology at Caltech, the results show that the data-mining procedure works. They also demonstrate that the federal money spent on sequencing genomes, along with the comparatively modest expenditures that have gone toward improving biological data processing, has been money well spent.

"This is one of a suite of tools and methods people are coming up with to get more bang for the buck," he says.

In particular, Sternberg and Zhong cite the ongoing WormBase project, now in its sixth year as a database funded by the National Institutes of Health for understanding gene interactions of nematodes. WormBase received $12 million in new funding in 2003, and the project is already leading to new database tools ultimately aimed at promoting knowledge of how genes interrelate.

The new study by Zhong and Sternberg is not directly a product of WormBase, but nevertheless mines data from that and other sources. In fact, the study compiles data from several model organisms to reconstruct a gene-interaction network for the nematode.

Zhong says that the system is not perfect yet, because "false negatives" can still arise if the information is simply not in the database, or if the computer fails to recognize two genes as orthologs (i.e., essentially the same gene). "But it will get better," she adds.

"Choosing how to combine these data is the big deal, not the computational ability of the hardware," says Sternberg. "You can also see how the computer made the call of whether two genes should interact. So it's not a black box, but all transparent; and to biologists, that's really valuable. And finally, it's in the public domain."

Finally, the system provides a good window into the manner in which the biology of the future is emerging, Sternberg says. Zhong, for example, has a doctorate in biology and a master's in computer science: she spends about as much time working on computer databases as she does in the lab with the organisms themselves.

"This is the new generation of biologists," Sternberg says.

The study is titled "Genome-wide Prediction of C. elegans Genetic Interactions," and is published in the March 10 issue of Science.

Writer: 
Robert Tindol
