Survival of the Fittest . . . Or the Flattest?

Darwinian dogma holds that in the marathon race of evolution, the genotype that replicates fastest wins. But now scientists at the California Institute of Technology say that's only part of the story: factor in another basic process of evolution, mutation, and it's often the tortoise that defeats the hare.

It turns out that mutations, the random changes that can take place in a gene, are the wild cards in the great race. The researchers found that at high mutation rates, genotypes with a slower replication rate can displace faster replicators if the slower replicators have a higher "robustness" against mutations; that is, if a mutation is, on average, less harmful to the slower replicator than to the faster one. The research, to appear in the July 19 issue of the journal Nature, was conducted by several investigators: Claus Wilke, a postdoctoral scholar; Chris Adami, who holds joint appointments at Caltech and the Jet Propulsion Laboratory; Jia Lan Wang, an undergraduate student; Charles Ofria, a former Caltech graduate student now at Michigan State University; and Richard Lenski, a professor at Michigan State.

In a takeoff on the familiar Darwinian phrase, they have dubbed their finding "survival of the flattest" rather than survival of the fittest. The idea is this: if a group of similar genotypes with a faster replication rate occupies a "high and narrow peak" in the landscape of evolutionary fitness, while a different group of genotypes that replicates more slowly occupies a lower and flatter, or broader, peak, then, when mutation rates are high, the broadness of the lower peak can offset the height of the higher one. That means the slower replicator wins. "In a way, organisms can trade replication speed for robustness against mutations and vice versa," says Wilke. "Ultimately, the organisms with the most advantageous combination of both will win."
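One simplified way to see when flatness beats height (a back-of-the-envelope quasispecies argument, not the exact calculation in the Nature paper): suppose genotype A replicates at rate w_A on a narrow peak where only a fraction nu_A of mutations are harmless, while genotype B replicates more slowly, at rate w_B, on a flat peak with nu_B > nu_A. If off-peak mutants are assumed to contribute nothing, each type's long-run growth rate at genomic mutation rate u is roughly

    \bar{w}_X(u) \approx w_X\,\bigl[(1-u) + u\,\nu_X\bigr], \qquad X \in \{A, B\},

so the flat peak takes over once the mutation rate exceeds the crossover value

    u^{*} = \frac{w_A - w_B}{\,w_A(1-\nu_A) - w_B(1-\nu_B)\,}.

The larger the robustness gap nu_B - nu_A, the smaller u^{*}, and the sooner flatness wins.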

Discerning such evolutionary nuances, though, is no easy task. Testing an evolutionary theory requires waiting for generation after generation of an organism to pass. To make matters worse, the simplest living systems, the precursors of all living systems on Earth, have been replaced by much more complicated systems over the last four billion years.

Wilke and his collaborators found the solution in the growing power of computers, constructing with software an artificial living system that behaves in remarkably lifelike ways. Such digital creatures evolve in the same way biological life forms do; they live in, and adapt to, a virtual world created for them inside a computer. This approach offers an opportunity to test generalizations about living systems that may extend beyond the organic life that biologists usually study. Though this research did not involve actual living organisms, one of the authors, Richard Lenski, is a leading expert on the evolution of Escherichia coli bacteria. Lenski believes that digital organisms are sufficiently realistic to yield biological insights, and he continues his research on both E. coli and digital organisms.

In their digital world, the organisms are self-replicating computer programs that compete with one another for CPU (central processing unit) cycles, which are their limiting resource. Digital organisms have genomes in the form of a series of instructions, and phenotypes that are obtained by executing their genomic programs. The creatures physically inhabit a reserved space in the computer's memory—an "artificial Petri dish"—and they must copy their own genomes. Moreover, their evolution does not proceed toward a target specified in advance, but rather proceeds in an open-ended manner to produce phenotypes that are more successful in a particular environment.

Digital creatures lend themselves to evolutionary experiments because their environment can be readily manipulated to examine the importance of various selective pressures. In this study, though, the only environmental factor varied was the mutation rate. Whereas in nature mutations are random changes that can take place in DNA, a digital organism's mutations are random changes to its computer program: a command may be switched, for example, or a sequence of instructions copied twice.

For this study, the scientists evolved 40 pairs of digital organisms from 40 different ancestors in identical selective environments. The only difference was that one member of each pair was subjected to a fourfold higher mutation rate. In 12 of the 40 cases, the dominant genotype that evolved at the lower mutation rate replicated 1.5-fold faster than its counterpart at the higher mutation rate.

Next, the scientists allowed each of these 12 disparate pairs to compete across a range of mutation rates. In each case, as the mutation rate was increased, the outcome of the competition switched to favor the genotype with the lower replication rate. The researchers believe that these slower genotypes, although they occupied lower fitness peaks, sat in flatter regions of the fitness surface and were, as a result, more robust with respect to mutations.
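As a concrete illustration of this switch, here is a deliberately minimal Python sketch (it is not the Avida-style system used in the study; the genotype parameters, and the simplification that off-peak offspring are non-viable, are invented for illustration) in which a fast replicator on a narrow peak competes against a slower replicator on a flat peak across a range of mutation rates:

    # Toy "survival of the flattest" competition; illustrative only.
    def viable_offspring(w, nu, u):
        """Expected viable offspring per parent per generation.
        w  : replication rate (height of the fitness peak)
        nu : fraction of mutations that leave offspring on the peak (flatness)
        u  : probability that an offspring carries a mutation
        Off-peak offspring are treated as non-viable, a deliberate simplification.
        """
        return w * ((1.0 - u) + u * nu)

    # Hypothetical genotypes: A is fast but fragile, B is slow but robust.
    A = {"w": 1.5, "nu": 0.2}
    B = {"w": 1.0, "nu": 0.8}

    for u in (0.1, 0.3, 0.5, 0.7, 0.9):
        freq_A = 0.5                              # start the competition at 50/50
        for _ in range(200):                      # iterate generations
            grow_A = viable_offspring(A["w"], A["nu"], u) * freq_A
            grow_B = viable_offspring(B["w"], B["nu"], u) * (1.0 - freq_A)
            freq_A = grow_A / (grow_A + grow_B)   # renormalize to constant population size
        winner = "fast/narrow A" if freq_A > 0.5 else "slow/flat B"
        print(f"mutation rate u = {u:.1f}: freq(A) = {freq_A:.3f} -> {winner}")

With these made-up numbers the advantage flips from A to B at u = 0.5, matching the crossover given by the simple formula sketched earlier.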

The digital organisms have the advantage that many generations can be studied in a brief period of time. But the researchers believe a colony of asexual bacteria, subjected to the same stresses as the digital organisms, would probably face similar consequences.

The concept of "survival of the flattest" seems to imply, the authors say, that, at least for populations subject to a high mutation rate, selection acts upon a group of mutants rather than the individual. Thus, under such circumstances, genotypes that unselfishly produce mutant genotypes of high fitness are selected for, and supported in turn, by other mutants in that group. The study therefore reveals that "selfish genes," while being the successful strategy at low mutation rates, may be outcompeted by unselfish ones when the mutation rate is high.

Up to 6 million votes lost in 2000 presidential election, Voting Technology Project reveals

Though over 100 million Americans went to the polls on election day 2000, as many as 6 million might just as well have spent the day fishing. Researchers at Caltech and MIT call these "lost votes" and think the number of uncounted votes could easily be cut by more than half in the 2004 election with just three simple reforms.

"This study shows that the voting problem is much worse than we expected," said Caltech president David Baltimore, who initiated the nonpartisan study after the November election debacle.

"It is remarkable that we in America put up with a system where as many as six out of every hundred voters are unable to get their vote counted. Twenty-first-century technology should be able to do much better than this," Baltimore said.

According to the comprehensive Caltech-MIT study, faulty and outdated voting technology together with registration problems were largely to blame for many of the 4-to-6 million votes lost during the 2000 election.

With respect to the votes that simply weren't counted, the researchers found that punch-card methods and some direct recording electronic (DRE) voting machines were especially prone to error. Lever machines, optically scanned ballots, and hand-counted paper ballots were somewhat less likely to result in spoiled or "residual" votes. Optical scanning, moreover, performed better than lever machines.

As for voter registration problems, lost votes resulted primarily from inadequate registration data available at the polling places, and the widespread absence of provisional ballot methods to allow people to vote when ambiguities could not be resolved at the voting precinct.

 

The three most immediate ways to reduce the number of residual votes would be to:

· replace punch cards, lever machines, and some underperforming electronic machines with optical scanning systems;

· make countywide or even statewide voter registration data available at polling places;

· make provisional ballots available.

The first method, it is estimated, would save up to 1.5 million votes in a presidential election, while the second and third would combine to rescue as many as 2 million votes.

"We could bring about these reforms by spending around $3 per registered voter, at a total cost of about $400 million," says Tom Palfrey, a professor of economics and political science who headed the Caltech effort. "We think the price of these reforms is a small price to pay for insurance against a reprise of November 2000."

Approximately half the cost would go toward equipment upgrades, while the remainder would be used to implement improvements at the precinct level, in order to resolve registration problems on the spot. The $400 million would be a 40 percent increase over the money currently spent annually on election administration in the United States.
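A quick back-of-the-envelope check ties the quoted figures together (the implied totals below are inferences from the numbers in this article, not figures taken from the report):

    $400 million ÷ $3 per registered voter ≈ 130 million registered voters (implied)
    $400 million ÷ 0.40 ≈ $1 billion in current annual election-administration spending (implied)
    1.5 million + 2 million ≈ 3.5 million votes rescued, more than half of the 4 to 6 million lost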

In addition to these quick fixes, the report identifies five long-run recommendations.

· First, institute a program of federal matching grants for equipment and registration system upgrades, and for polling-place improvement.

· Second, create an information clearinghouse and databank for election equipment and system performance, precinct-level election reporting, recounts, and election finance and administration.

· Third, develop a research grant program to field-test new equipment, develop better ballot designs, and analyze data on election system performance.

· Fourth, set more stringent and more uniform standards on performance and testing.

· Fifth, create an election administration agency, independent of the Federal Election Commission. The agency would be an expanded version of the current Office of Election Administration, and would oversee the grants program, serve as an information clearinghouse and databank, set standards for certification and recertification of equipment, and administer research grants.

The report also proposes a new modular voting architecture that could serve as a model for future voting technology. The Caltech-MIT team concludes that this modular architecture offers greater opportunity for innovation in ballot design and security.

Despite strong pressure to develop Internet voting, the team recommends a go-slow approach in that direction. The prospect of fraud and coercion, as well as hacking and service disruption, argues for caution; in addition, many Americans are still unfamiliar with the technology.

"The Voting Technology Project is part of a larger effort currently underway—involving many dedicated election officials, researchers, and policy makers—to restore confidence in our election system," commented Steve Ansolabehere, a professor of political science who headed up the MIT team. "We are hopeful that the report will become a valuable resource, and that it will help to bring about real change in the near future."

Baltimore and MIT president Charles Vest announced the study on December 15, two days after the outcome of the presidential election was finally resolved. Funded by a $250,000 grant from the Carnegie Corporation, the study was intended to "minimize the possibility of confusion about how to vote, and offer clear verification of what vote is to be recorded," and "decrease to near zero the probability of miscounting votes."

The report is publicly available on the Caltech-MIT Voting Technology Project Website:

http://vote.caltech.edu

Writer: Robert Tindol

Factors causing high mutations could have led to origin of sexual reproduction, study shows

Biologists have long known the advantages of sexual reproduction to the evolution and survival of species. With a little sex, a fledgling creature is more likely to pass on the good mutations it may have, and more able to deal with the sort of environmental adversity that would send its asexual neighbors floundering into the shallow end of the gene pool.

The only problem is that it's hard to figure out how sex got started in the first place. Not only do many primitive single-celled organisms do just fine with asexual reproduction, but mathematical models show that a sexual mutant in an asexual population is most likely not favored to compete successfully and pass on its genes.

Now, researchers from the California Institute of Technology and the Jet Propulsion Laboratory, using "digital organisms" and RNA, have concluded that established asexual bacteria could be nudged toward evolving sexual reproduction by certain forms of environmental stress, such as radiation or catastrophic meteor or comet impacts, that give rise to a high rate of mutations.

In an article that has significant implications for understanding the origin of sexual reproduction in the early world, Claus Wilke of Caltech and Chris Adami, who holds joint appointments at Caltech and JPL, report that a change in conditions that raises the mutation rate can push an asexual population toward an adaptation that may be sufficient to give mutant individuals an advantage if those mutants reproduce sexually.

The paper, published in the July 22 issue of the Royal Society journal Proceedings: Biological Sciences B, builds on earlier work by Adami and his collaborators, showing that digital organisms—that is, self-replicating computer programs designed to closely resemble the life cycles of living bacteria—can actually adapt to become more robust.

"What we showed in the other paper," says Adami, "is that if you transfer a fragile organism that evolved with a small mutation rate into a high-mutation-rate environment, it will adapt to this environment by becoming more robust."

One reason the origin of sexual reproduction has been a mystery is an effect known as "mutation accumulation." Organisms tend to adapt so as to decrease the effects of mutations and thereby become less vulnerable to them.

But this kind of robustness is poisonous, because with sexual recombination, deleterious mutations would simply accumulate in the organism and thus lead to a gradual loss of genes. This handicap of sexual creatures would be enough to guarantee their extinction when competing against asexual ones.

This can be avoided if the effects of mutations are compounding—that is, if two or more simultaneous deleterious mutations together do more harm than their individual effects would suggest. In this manner, an organism may be robust to a few mutations but incapable of surviving a large number of them, so that mutations cannot accumulate.

The new revelation by Wilke and Adami is that there is a conservation law at work in the relationship between the compounding of mutations and the fitness decay due to single mutations. This law says that robustness to a few mutations implies vulnerability to a large number, while robustness to many mutations must go hand in hand with vulnerability to single mutations.
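One common way to put numbers on this trade-off (a schematic parametrization often used in this literature; the paper's exact formulation may differ) is to write the fitness of a genotype carrying k deleterious mutations as

    w(k) = \exp\!\left(-\alpha\, k^{\beta}\right),

where alpha measures how damaging a single mutation is on average and beta > 1 means mutations compound, each additional hit hurting more than the last. In this picture, the conservation law described above amounts to saying that alpha and beta cannot both be made favorable at once: pushing alpha down (robustness to single mutations) comes at the cost of a larger beta, so fitness collapses quickly once several mutations pile up, and vice versa.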

Thus, increasing robustness to single mutations automatically makes multiple mutations intolerable, which removes organisms with multiple deleterious mutations from the population and allows sexual recombination to reap the rewards from sharing beneficial mutations.

Because stressful environments with high mutation rates push organisms to become robust to single mutations, the conservation law guarantees that this evolutionary pressure also pushes asexual organisms on to the road toward sexual recombination.

The researchers studied the evolution of digital organisms and RNA secondary structure, because accurate data on the decay of fitness and the effect of multiple mutations (whether they are compounding or mitigating) for living organisms is quite rare. For the RNA study, the researchers used known sequences with well-understood folds and then tried various mutations to see which mutations mattered and which didn't, in a system that computationally predicts RNA secondary structure. The results supported the conservation law.
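For readers who want a feel for how such a computational mutation scan works, here is a minimal Python sketch in the spirit of the RNA part of the study (it assumes the ViennaRNA package and its Python bindings are installed; the sequence is a made-up example, not one analyzed in the paper):

    # Single-mutation scan on a predicted RNA secondary structure (illustrative).
    import RNA  # ViennaRNA Python bindings

    sequence = "GGGAAACGCUUCGGCGUUUCCC"               # hypothetical hairpin-forming sequence
    wild_structure, wild_energy = RNA.fold(sequence)  # predicted fold and free energy
    print("wild type:", wild_structure, wild_energy)

    neutral, disruptive = 0, 0
    for i, base in enumerate(sequence):
        for alt in "ACGU":
            if alt == base:
                continue
            mutant = sequence[:i] + alt + sequence[i + 1:]
            mutant_structure, _ = RNA.fold(mutant)
            # Count the mutation as neutral if the predicted fold is unchanged;
            # base-pair distance measures how badly the structure is disrupted.
            if RNA.bp_distance(wild_structure, mutant_structure) == 0:
                neutral += 1
            else:
                disruptive += 1

    print(f"neutral mutations: {neutral}, structure-altering mutations: {disruptive}")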

Though the study did not involve actual living organisms, Adami has collaborated in the past with experts on bacteria to demonstrate that the digital organisms are indeed realistic. In a 1999 study, for example, Adami's collaborator was a leading expert on the evolution of E. coli bacteria.

The digital organisms have the advantage that many generations can be studied in a brief period of time, but Adami thinks a colony of asexual bacteria subjected to the stress imposed on the digital organisms in the experiment would probably face similar consequences.

"If you took a population of E. coli and subjected it to high mutation rates for many years—for example by irradiation or introducing mutagenic factors—at some point you might observe that exchange of genetic material, a precursor to sexual recombination, would become favorable to the organisms and thus selected for, if at the same time the environment changes fast enough that enough mutations are beneficial," he says.

"But that's a very difficult experiment with living organisms because of the time involved, and because it is difficult to construct constantly changing environments in a petri dish. This is easier with digital organisms, and will probably be first observed there.

"The reason the origin of sexual reproduction has been such a big mystery is that we look at the world as it is now," Adami says. "But the early world was a much more stressful place, sometimes changing very rapidly.

"We can't say how or when sexual reproduction came to take a hold in nature, but we can now say that high mutation rates can, under the right conditions, force an asexual organism to become sexual."

Adami earned his doctorate in theoretical physics at SUNY Stony Brook. He is a faculty associate in the computation and neural systems department at Caltech, and a research scientist at JPL. He is the author of the 1998 book Introduction to Artificial Life. Wilke, also a physicist, is a postdoctoral fellow in Adami's Digital Life Laboratory.

The article appears in Proceedings: Biological Sciences B, volume 268, number 1475, page 1469. The cover date is 22 July 2001, but the article is available on-line at http://www.pubs.royalsoc.ac.uk/proc_bio/proc_bio.html

Writer: RT

Caltech researchers successfully raise obelisk with kite to test theory about ancient pyramids

When people think about the building of the Egyptian pyramids, they probably have a mental image of thousands of slaves laboriously rolling massive stone blocks with logs and levers. But as one Caltech aeronautics professor is demonstrating, the task may have been accomplished by just four or five guys who flew the stones into place with a kite.

On Saturday, June 23, Mory Gharib and his team raised a 6,900-pound, 15-foot obelisk into vertical position in the desert near Palmdale by using nothing more than a kite, a pulley system, and a support frame. Though the blustery winds were gusting upwards of 22 miles per hour, the team set the obelisk upright on the second try.

"It actually lifted up the kite flyer, Eric May, so we had to kill the kite quickly," said Gharib. "But we finished it off the second time."

Emilio Castano Graff, a Caltech undergraduate who tackled the problem under the sponsorship of the Summer Undergraduate Research Fellowship (SURF) program, was also pleased with the results.

"The wind wasn't that great, but basically we're happy with it," he said.

Despite the lack of a steady breeze, the team raised the obelisk in about 25 seconds—so quickly, in fact, that the concrete-and-rebar object was lifted off the ground and swung free for a few seconds. Once the motion had stabilized, the team lowered the obelisk into an upright position.

The next step is to build an even bigger obelisk to demonstrate that even the mammoth 300-ton monuments of ancient Egypt—not to mention the far less massive building blocks of Egypt's 90-odd pyramids—could have been raised with a fraction of the effort that modern researchers have assumed.

Gharib has been working on the project since local business consultant Maureen Clemmons contacted him and his Caltech aeronautics colleagues two years ago. Clemmons had seen a picture in Smithsonian magazine in 1997 of an obelisk being raised, and came up with the idea that the ancient Egyptian builders could have used kites to accomplish the task more easily. All she needed was an aeronautics expert with the proper credentials to field-test her theory.

Clemmons' kite theory was a drastic departure from conventional thinking, which holds that thousands of slaves used little more than brute force and log-rolling to put the stone blocks and obelisks in place. No one has ever come up with a substantially better system for accomplishing the task, and even today the moving of heavy stones would be quite labor-intensive without power equipment.

To demonstrate how little progress was made in the centuries after the age of the pyramids had passed, Gharib points out that, in 1586, the Vatican moved a 330-ton Egyptian obelisk to St. Peter's Square. It is known that lifting the stone into vertical position required 74 horses and 900 men using ropes and pulleys.

It is a credit to Clemmons' determination that the idea is so far along in the testing stage. With no scientific or archaeological training, she has managed to marshal the efforts of family, friends, and other enthusiasts to work on a theory that could well revolutionize the knowledge of ancient engineering practices—and perhaps lead to a reinterpretation of certain ancient symbols as well.

In the course of researching the tools available to the Egyptian pyramid builders, she has discovered, for example, that a brass ankh—long assumed to be merely a religious symbol—makes a very good carabiner for controlling a kite line. And a type of insect commonly found in Egypt could have supplied a kind of shellac to make linen sails hold wind. As for objections to the use of pulleys, the team's intention was always to progress later—actually, "regress" might be a more appropriate word—to the windlasses apparently used to hoist sails on Egyptian ships.

"The whole approach has been to downgrade the technology," Gharib says. "We first wanted to show that a kite could raise a huge weight at all. Now that we're raising larger and larger stones, we're also preparing to replace the steel scaffolding with wooden poles and the steel pulleys with wooden pulleys like the ones they may have used on Egyptian ships.

For Gharib, the idea of accomplishing heavy tasks with limited manpower is appealing from an engineer's standpoint because it makes more logistical sense.

"You can imagine how hard it is to coordinate the activities of hundreds if not thousands of laborers to accomplish an intricate task," says Gharib. "It's one thing to send thousands of soldiers to attack another army on a battlefield. But an engineering project requires everything to be put precisely into place.

"I prefer to think of the technology as simple, with relatively few people involved."

The concept Gharib has developed with Graff is to build a simple structure around the obelisk, with a pulley system mounted somewhat forward of the stone. That way, the base of the obelisk drags along the ground for a few feet as the kite lifts the stone, and the stone is then quite stable once it has been pulled up to a vertical position. If the obelisk were raised with the base as a pivot, the stone would tend to swing past the vertical position and fall the other way.

The top of the obelisk is tied with ropes threaded through the pulleys and attached to the kite. A couple of workers guide the operation with ropes attached to the pulleys.
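For a sense of the loads involved, a rough statics estimate (an order-of-magnitude sketch that ignores the rig's exact geometry, friction, and gusts, none of which are spelled out here): if the obelisk were simply tipped up about its base, as in the scenario the team avoids, the vertical pull required at its tip would be about half its weight,

    F \cdot L\cos\theta = W \cdot \tfrac{L}{2}\cos\theta
    \quad\Longrightarrow\quad
    F = \tfrac{W}{2} \approx \tfrac{6{,}900\ \text{lb}}{2} \approx 3{,}450\ \text{lbf} \;(\text{roughly }15\ \text{kN}),

independent of the angle theta for a vertical pull on a uniform stone. A pulley system with mechanical advantage n cuts the pull the kite must supply to roughly F/n, at the price of hauling in n times as much line.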

Of course, no one has any idea if the ancient Egyptians actually moved stones or anything else with kites and pulleys, but Clemmons has found some tantalizing hints that the project is on the right track. On a building frieze now displayed in a Cairo museum, there is a wing pattern in bas relief that does not resemble any living bird. Directly below are several men standing near vertical objects that could be ropes.

Gharib's interest is not necessarily to make archeological contributions, but to demonstrate that the technique is viable.

"We're not Egyptologists," he says. "We're mainly interested in determining whether there is a possibility that the Egyptians were aware of wind power, and whether they used it to make their lives better."

Now that Gharib and his team have successfully raised the four-ton concrete obelisk with everyone watching, they will proceed to a 10-ton stone, then perhaps to 20 tons. Eventually they hope to receive permission to raise one of the obelisks that still lies in an Egyptian quarry.

"In fact, we may not even need a kite. It could be we can get along with just a drag chute."

Finally, one might ask whether there was, and is, sufficient wind in Egypt for a kite or a drag chute to fly. The answer is that steady winds of up to 30 miles per hour are not unusual in the areas where the pyramids and obelisks are found.

Contact: Robert Tindol (626) 395-3631

Writer: RT

Hensen's node in chicken embryos governs movement of neural cells, study shows

For us living creatures with backbones, existence begins as a single fertilized cell that then subdivides and grows into a fetus with many, many cells. But the details of how those cells end up as discrete organs, rather than undifferentiated heaps of cells, are only now being understood in microscopic detail.

Why, for example, should some of the cells migrate to the region that will become the brain, while others travel netherward to make a spinal cord? Although some details are known about which cells contribute to particular regions of the nervous system and which signals help to establish the organization of the brain, much less is known about factors that guide the development of the spinal cord.

In a new study, researchers from the California Institute of Technology have gained unprecedented information about the molecular signals and cell movements that coordinate to form the spinal cord. The study takes advantage of recently developed bioimaging and cell labeling techniques to follow individual cell movements in a developing chick embryo through a clear "window" cut into a fertilized egg. The results, reported in the June issue of the journal Nature Cell Biology, suggest that a proliferative stem zone at the tail end of the growing embryo contributes descendants to the growing neuraxis.

"The basic idea is that descendants of cells from Hensen's node, the structure that lays down the trunk, are sequentially distributed along the elongating spinal cord" says Luc Mathis, a former researcher in the lab of Caltech biology professor Scott Fraser, and lead author of the paper. "In the past, we did not have the ability to follow individual cells in living vertebrate embryos and could not determine how neural precursor cells could remain within Hensen's node, while some descendants leave it to form the spinal cord. "

In the paper, the researchers explain that neural precursor cells are displaced into the neural axis by proliferation in Hensen's node. The researchers labeled cells near Hensen's node in 40-hour-old chick embryos by using an external electric field to deliver an expression vector encoding green fluorescent protein (GFP) into cells, a process called electroporation. Using state-of-the-art imaging techniques developed by postdoctoral researcher Paul Kulesa, the group recorded the motion of fluorescent cells in ovo with a confocal microscope set up for time-lapse imaging and surrounded by a heated chamber to maintain embryo development.

"As the cells proliferate, some progenitors are displaced from the stem zone to become part of the neural plate and spinal cord," Mathis says. "Our analyses show that the Hensen's node produces daughter cells that are eventually displaced out of the node zone on the basis of their position in relation to other proliferating cells, and not on the basis of asymmetric cell divisions."

The paper also addresses the molecular signaling involved in the spreading of the cells. Previous work has shown that fibroblast growth factor (FGF) is somehow involved in formation of the posterior nervous system. To test the possibility that FGF could act by maintaining the stem zone of cell proliferation, the researchers disrupted FGF signaling within Hensen's node. Indeed, the result was a seriously shortened spinal cord and premature exit of cells from the node, indicating that FGF is required for the proliferation of neural precursor cells in the stem zone that generates the spinal cord.

A structure similar to Hensen's node—called simply a "node"—is found in mammals, and analogous zones are found in other vertebrates as well. The cell behavior and genetic control discovered in the chick might also be responsible for the development of the spinal cord in mammals, including humans.

"This new understanding of the formation of the spinal cord is the result of a fusion between hypotheses that arose during previous studies that I had conducted in France, the great embryological background and imaging facilities provided by Scott Fraser, and the original experimental systems of cell tracking developed by Paul Kulesa" concludes Mathis."

Scott Fraser is the Anna L. Rosen Professor of Biology and the director of the Biological Imaging Center of Caltech's Beckman Institute. Luc Mathis is a former researcher at the Biological Imaging Center who is currently at the Pasteur Institute in Paris. Paul Kulesa is a senior research fellow supported by the computational molecular biology program and associated with the Biological Imaging Center.

Contact: Robert Tindol (626) 395-3631

Writer: RT

Caltech Uses Fluorescent Protein to Visualize the Work of Living Neurons

Neuroscientists have long suspected that dendrites—the fine fibers that extend from neurons—can synthesize proteins. Now, using a molecule they constructed that "lights up" when synthesis occurs, a biologist and her colleagues from the California Institute of Technology have proven just that.

Erin M. Schuman, an associate professor of biology at Caltech and an assistant investigator with the Howard Hughes Medical Institute, along with colleagues Girish Aakalu, Bryan Smith, Nhien Nguyen, and Changan Jiang, published their findings last month in the journal Neuron. Proving that protein synthesis does indeed occur in intact dendrites suggests the dendrites may also have the capacity to adjust the strength of connections between neurons. That in turn implies they may influence vital neural activities such as learning and memory.

Schuman and colleagues constructed a so-called "reporter" molecule that, when introduced into neurons, emits a telltale glow if protein synthesis is occurring. "There was early evidence that protein-synthesis machinery was present in dendrites," says Schuman. "Those findings were intriguing because they implied that dendrites had the capacity to make their own proteins."

The idea that dendrites should be able to synthesize proteins made sense to Schuman and others because it was more economical and efficient. "It's like the difference between centralized and distributed freight shipping," she says. "With central shipping, you need a huge number of trucks that drive all over town, moving freight from a central factory. But with distributed shipping, you have multiple distribution centers that serve local populations, with far less transport involved."

Previous studies had indicated that, in test tubes, tiny fragments of dendrites still had the capacity to synthesize proteins. Schuman and her colleagues believed that visualizing local protein synthesis in living neurons would provide a more compelling picture than was currently available.

The scientists began their efforts to create a reporter molecule by flanking a gene for a green fluorescent protein with two segments of another gene for a particular enzyme. Doing this ensured that the researchers would target the messenger RNA (mRNA) for their reporter molecule to dendrites.

Next, in a series of experiments, the group inserted the reporter molecule into rat neurons in culture, and then triggered protein synthesis using a growth factor called BDNF. By imaging the neurons over time, the investigators showed that the green fluorescent protein was expressed in the dendrites following BDNF treatment—proof that protein synthesis was taking place. Going a step further, the researchers showed they could cause the fluorescence to disappear by treating the neurons with a drug that blocked protein synthesis.

Schuman and her colleagues also addressed whether proteins synthesized in the main cell body, called the soma, could have diffused to the dendrites, rather than the dendrites themselves performing the protein synthesis. The researchers proved the proteins weren't coming from the soma by simply snipping the dendrites from the neurons, while maintaining their connection to their synaptic partners. Sure enough, the isolated dendrites still exhibited protein synthesis.

Intriguingly, says Schuman, hot spots of protein synthesis were observed within the dendrites. By tracking the location of the fluorescent signal over time, the researchers could see that these hotspots waxed and waned consistently in the same place. "The main attraction of local protein synthesis is that it could endow synapses with the capacity to make synapse-specific changes, which is a key property of information-storing systems," says Schuman. "The observation of such hot spots suggests there are localized areas of protein synthesis near synapses that may provide new proteins to synapses nearby."

Schuman and her colleagues are now applying their reporter molecule system to more complex brain slices and whole mice. "In the whole animals, we're exploring the role of dendritic protein synthesis in information processing and animal learning and behavior," says Schuman.

Writer: MW

Brightest Quasars Inhabit Galaxies with Star-Forming Gas Clouds, Scientists Discover

A team of scientists at the California Institute of Technology and the State University of New York at Stony Brook has found strong evidence that high-luminosity quasar activity in galaxy nuclei is linked to the presence of abundant interstellar gas and high rates of star formation.

In a presentation at the summer meeting of the American Astronomical Society, Caltech astronomy professor Nick Scoville and his colleagues reported today that the most luminous nearby optical quasar galaxies have massive reservoirs of interstellar gas, much like the so-called ultraluminous infrared galaxies (or ULIRGs). The quasar nucleus is powered by accretion onto a massive black hole with a mass typically about 100 million times that of the sun, while the infrared galaxies are powered by extremely rapid star formation. The ULIRG "starbursts" are believed to result from the high concentration of interstellar gas and dust in the galactic centers.

"Until now, it has been unclear how the starburst and quasar activities are related," Scoville says, "since many optically bright quasars show only low levels of infrared emission which is generally assumed to measure star formation activity.

"The discovery that quasars inhabit gas-rich galaxies goes a long way toward explaining a longstanding problem," Scoville says. "The number of quasars has been observed to increase very strongly from the present back to Redshift 2, at which time the number of quasars was at a maximum.

"The higher number of quasars seen when the universe was younger can now be explained, since a larger fraction of the galaxies at that time had abundant interstellar gas reservoirs. At later times, much of this gas has been used up in forming stars.

"In addition, the rate of merging galaxies was probably much higher, since the universe was smaller and galaxies were closer together."

The new study shows that even optically bright quasar-type galaxies (QSOs) have massive reservoirs of interstellar gas, even without strong infrared emission from the dust clouds associated with star formation activity. Thus, the fueling of the central black hole in the quasars is strongly associated with the presence of an abundant interstellar gas supply.

The Scoville team used the millimeter-wave radio telescope array at Caltech's Owens Valley Radio Observatory near Bishop, California, for an extremely sensitive search for the emission of carbon monoxide (CO) molecules in a complete sample of the 12 nearest and brightest optical quasars previously catalogued at the Palomar 200-inch telescope in the 1970s. In particular, the researchers avoided selecting samples with bright infrared emissions, since that would bias the sample toward those with abundant interstellar dust clouds.

In this optically selected sample, eight out of the 12 quasars exhibited detectable CO emission, implying masses of interstellar molecular clouds in the range of two to 10 billion solar masses. (For reference, the Milky Way galaxy contains approximately two billion solar masses of molecular clouds.) Such large gas masses are found only in gas-rich spiral or colliding galaxies. The present study clearly shows that most quasars are also in gas-rich spiral or interacting galaxies, not gas-poor elliptical galaxies as previously thought.
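For context, CO detections like these are converted into molecular gas masses by applying a conversion factor to the CO line luminosity; a textbook form of the relation is

    M_{\mathrm{H_2}} \approx \alpha_{\mathrm{CO}}\, L'_{\mathrm{CO}},
    \qquad
    \alpha_{\mathrm{CO}} \sim 4\ M_{\odot}\,(\mathrm{K\ km\ s^{-1}\ pc^{2}})^{-1}\ \text{for Milky-Way-like clouds,}

with smaller values often adopted for the dense, turbulent gas in mergers. The release does not state which factor the team used, so the relation above is background, not a description of their analysis.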

The new study supports the hypothesis that there exists an evolutionary link between the two most luminous classes of galaxies: merging ultraluminous IR galaxies and ultraviolet/optically bright QSOs. Both the ULIRGs and QSOs show evidence of a recent galactic collision.

The infrared luminous galaxies are most often powered by prodigious starbursts in their galactic centers, forming young stars at 100 to 1,000 times the current rate in the entire Milky Way. The quasars are powered by the accretion of matter into a massive black hole at their nuclei at a rate of one to 10 solar masses per year.

The detection of abundant interstellar gas in the optically selected QSOs suggests a link between these two very different forms of galactic nuclear activity. The same abundant interstellar gases needed to form stars at a high rate might also feed the central black holes.

In normal spiral galaxies like the Milky Way, most of the interstellar molecular gas is in the galactic disk at distances of typically 20,000 light-years from the center, well out of reach of a central black hole.

However, during galactic collisions, the interstellar gas can sink and accumulate within the central few hundred light-years, and massive concentrations of interstellar gas and dust are, in fact, seen in the nuclear regions of the ULIRGs. Once in the nucleus, this interstellar matter can both fuel the starburst and feed the central black hole at prodigious rates.

The discovery of molecular gas in the optically selected QSOs that do not have strong infrared emissions suggests that the QSO host galaxies might be similar systems observed at a later time after the starburst activity has subsided, yet with the black hole still being fed by interstellar gas.

For the remaining four quasars where CO was not detected, improved future instrumentation may well yield detections of molecular gas, Scoville says. Even in the detected galaxies the CO emission was extraordinarily faint due to their great distances, typically over a billion light-years. The remaining four galaxies could well have molecular gas masses only a factor of two below those that were detected.

Future instrumentation such as the CARMA and ALMA millimeter arrays will have vastly greater sensitivity, permitting similar studies out to much greater distances.

Other members of the team are David Frayer and Eva Schinnerer, both research scientists at Caltech; Caltech graduate students Micol Christopher and Naveen Reddy; and Aaron Evans at SUNY Stony Brook.

###

Contact: Robert Tindol (626) 395-3631

Writer: RT

Biochemical "On/Off" Switch Discovered

PASADENA, Calif.— Proteins are the cell's arbiters. In a complex and still largely mysterious cascade of events, proteins tell a cell when to divide and grow—and when to die. To properly control cell behavior, proteins need to be turned on when they are needed, and turned off when they are not. Now a California Institute of Technology biologist and his colleagues have shed important new light on how this takes place in animals and plants.

In a paper published in the May 18 issue of the journal Science, biologist Raymond Deshaies and his graduate students show that an assemblage of proteins known as CSN may serve as a kind of biochemical on/off switch for other proteins.

In plants, research done in the laboratory of Deshaies's collaborator, Xing-Wang Deng of Yale University, has shown that CSN prevents photomorphogenesis (roughly, the growth of plants controlled by light) when light is absent. CSN is widely distributed in animals as well, but until now no one knew what any of its functions were. Now Deshaies's research shows that CSN may be linked to a recently discovered protein modification known as "neddylation," the physical attachment of a small protein, called NEDD8, to another protein. Neddylation is thought to alter the functioning of whatever protein NEDD8 attaches to. For example, when it attaches to the enzyme SCF (previously discovered by the Deshaies team), SCF activity increases dramatically. Although the enzymes that attach NEDD8 to proteins like SCF were already known, the enzymes that remove it were not.

Deshaies's team discovered that CSN removes the NEDD8 that is attached to SCF. Based on this finding, they conclude that CSN controls the on-and-off switching of proteins. For example, when NEDD8 is not removed from its partners in plant cells, the plant doesn't respond normally to hormones that control its development.

Many different physiological roles have been proposed for CSN, including roles in the synthesis of new proteins, control of cell division, and control of inflammation. The Deshaies team's finding that CSN acts by removing NEDD8 from other proteins suggests that NEDD8, in turn, is likely to serve as a linchpin in these processes.

Deshaies and his laboratory colleagues are interested in the regulation of cell division, and in identifying the specific functions of various proteins within a cell that participate in this process. The proper regulation of cell division is critical for the normal development of organisms. In animals, aberrations in cell division can have profound consequences; unchecked cell division, for example, can lead to cancer.

Writer: MW

Environmental Study of Local Area Conducted by Caltech Team

PASADENA, Calif.— California Institute of Technology researchers have received a $100,000 grant from the Alice C. Tyler Perpetual Trust to study the human impact on land and water in the San Gabriel Valley and San Gabriel River watershed. Ecosystems bordering major metropolitan areas are subject to intense pressures from pollutants produced by transportation, industrial activities, power generation, and recreational activities. This project will measure and document these environmental changes in order to predict future impacts.

The research project, "Environmental Quality Near Large Urban Areas," is being coordinated by Janet Hering, associate professor of environmental engineering science at Caltech. Other members of the group include Michael Hoffmann, the James Irvine Professor of Environmental Science; James Randerson, assistant professor of global environmental science; and Paul Wennberg, professor of atmospheric chemistry and environmental engineering science.

The project will also teach Caltech undergraduate students fundamental concepts in environmental chemistry, providing them with practical training and field experience in the collection, measurement, and analysis of human-induced changes in air quality, plants, soil, and water. The training program will allow undergraduates to gain a perspective on the impact of human activities on the atmosphere and biosphere.

The Alice C. Tyler Perpetual Trust was established to contribute to the improvement of the world's environment, including the preservation of all living things, the land, the waters, and the atmosphere.

Contact: Deborah Williams-Hedges (626) 395-3227 debwms@caltech.edu

Visit the Caltech Media Relations Web site at: http://pr.caltech.edu/media ###

Writer: DWH

New Analysis of BOOMERANG Data Uncovers Harmonics of Early Universe

Cosmologists from the California Institute of Technology and their international collaborators have discovered the presence of acoustic "notes" in the sound waves that rippled through the early universe.

The existence of these harmonic peaks, discovered in an analysis of images from the BOOMERANG experiment, further strengthens results last year showing that the universe is flat. Also, the new results bolster the theory of "inflation," which states that the universe grew from a tiny subatomic region during a period of violent expansion a split second after the Big Bang.

Finally, the results show promise that another Caltech-based detector, the Cosmic Background Imager (CBI), located in the mountains of Chile, will soon detect even finer detail in the cosmic microwave background. Analysis of this fine detail is thought to be the means of precisely determining how slight fluctuations billions of years ago eventually resulted in the galaxies and stars we see today.

"We were waiting for the other shoe to drop, and this is it," says Andrew Lange, U.S. team leader and a professor of physics at Caltech. Lange was one of a group of cosmologists revealing new results on the cosmic microwave background at the American Physical Society's spring meeting April 29. Other presenters included teams from the DASI and MAXIMA projects.

The new results are from a detailed analysis of high-resolution images obtained by BOOMERANG, which is an acronym for Balloon Observations of Millimetric Extragalactic Radiation and Geophysics. BOOMERANG is an extremely sensitive microwave telescope suspended from a balloon that circumnavigated the Antarctic in late 1998. The balloon carried the telescope at an altitude of almost 37 kilometers (120,000 feet) for 10 and one-half days.

"The key to BOOMERANG's ability to obtain these new images is the marriage of a powerful new detector technology developed at Caltech and the Jet Propulsion Lab with the superb microwave telescope and cryogenic systems developed in Italy at ENEA, IROE/CNR, and La Sapienza," Lange says.

The images were published just one year ago, and the Lange team at the time reported that the results provided the most precise measurements to date of the geometry of space-time. The initial analysis revealed that the single detectable peak spanned about 1 degree on the sky, which is precisely the angular size predicted by theorists if space-time is indeed flat. A peak at larger angular scales would have indicated that the universe is "closed" like a ball, doomed to eventually collapse in on itself, while a peak at smaller scales would have indicated that the universe is "open," or shaped like a saddle, and would expand forever.
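The link between the angular size of that peak and the geometry of space can be sketched with rounded, textbook numbers (these are not figures quoted from the BOOMERANG analysis itself): the peak corresponds to the sound horizon at the time the CMB was released, seen at the distance the light has traveled since,

    \theta_{\mathrm{peak}} \approx \frac{r_s}{d_A} \approx 1^{\circ}
    \quad\Longleftrightarrow\quad
    \ell_{\mathrm{peak}} \approx \frac{\pi}{\theta_{\mathrm{peak}}} \approx 200
    \ \text{for a spatially flat universe.}

Positive curvature (a closed universe) would make the same physical scale appear larger on the sky, and negative curvature (an open universe) smaller, which is why the measured 1-degree scale points to flatness.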

Cosmologists believe that the universe was created approximately 12 to 15 billion years ago in an enormous explosion called the Big Bang. The intense heat that filled the embryonic universe is still detectable today as a faint glow of microwave radiation that is visible in all directions. This radiation is known as the cosmic microwave background (CMB). Whatever structures were present in the very early universe would leave their mark imprinted as a very faint pattern of variations in brightness in the CMB.

The CMB was first discovered by a ground-based radio telescope in 1965. Within a few years, Russian and American theorists had independently predicted that the size and amplitude of structures that formed in the early universe would form what mathematicians call a "harmonic series" of structure imprinted on the CMB. Just as the difference in harmonic content allows us to distinguish between a piano and a trumpet playing the same note, so the details of the harmonic content imprinted in the CMB allow us to understand the detailed nature of the universe.

Detection of the predicted features was well beyond the technology available at the time. It was not until 1991 that NASA's COBE (Cosmic Background Explorer) satellite discovered the first evidence for structures of any sort in the CMB.

The BOOMERANG images are the first to bring the CMB into sharp focus. The images reveal hundreds of complex regions that are visible as tiny variations—typically only 100 millionths of a degree (0.0001 C)—in the temperature of the CMB. The new results, released today, show the first evidence for a harmonic series of angular scales on which structure is most pronounced.

The images obtained cover about 3 percent of the sky, generating so much data that new methods had to be invented before it could be thoroughly analyzed. The new analysis provides the most precise measurement to date of several of the parameters which cosmologists use to describe the universe.

The BOOMERANG team plans another campaign to the Antarctic in the near future, this time to map even fainter images encoded in the polarization of the CMB. Though such measurements are extremely difficult, their scientific payoff "promises to be enormous," maintains the U.S. team leader of the new effort, John Ruhl, of the University of California at Santa Barbara. "By imaging the polarization, we may be able to look right back to the inflationary epoch itself—right back to the very beginning of time."

Data from the MAXIMA project is also being presented at the American Physical Society meeting, along with data from the CBI, which is also a National Science Foundation-supported mission. The CBI investigators, led by Caltech astronomy professor Tony Readhead, reported early results in the March 1 issue of the Astrophysical Journal. These results were in agreement with the finding of the other projects.

The 36 BOOMERANG team members come from 16 universities and organizations in Canada, Italy, the United Kingdom, and the United States. Primary support for BOOMERANG comes from the Italian Space Agency, Italian Antarctic Research Programme, and the University of Rome "La Sapienza" in Italy; from the Particle Physics and Astronomy Research Council in the United Kingdom; and from the National Science Foundation and NASA in the United States.

Contact: Robert Tindol (626) 395-3631

Writer: RT
