Caltech: Combating Future "Bandwidth Bottleneck"

PASADENA, Calif.— The next exciting generation of multimedia on the Internet will be computer images created by three-dimensional geometry. Soon, instead of viewing a simple flat picture on their monitors, users will be able to examine an object from any viewpoint, under any lighting, and with any surface treatment. The result will be images that seem to come to life.

It's exciting technology, unless, that is, your personal computer suffers from "bandwidth bottleneck," the frustrating condition that produces the herky-jerky motion of today's so-called streaming video, or audio that breaks up and is hard to hear. Now a solution is in hand for this upcoming problem, thanks to a California Institute of Technology professor who's been named a finalist in Discover Magazine's 2001 Innovation Awards.

A team of computer scientists led by Caltech's Peter Schröder, a professor of computer science and applied and computational mathematics, and Wim Sweldens of Lucent Technologies, was recognized for developing the most powerful technique to date for computer graphics that will make it practical to send such detailed 3-D data over the Internet. The Innovation Awards are presented annually by Discover, the science publication, and are intended to honor scientists whose groundbreaking work will change the way we live.

Bandwidth bottleneck, a phrase used by Schröder, is caused by too much data being sent through a slow Internet connection to a slow computer. Schröder's expertise is in the mathematical foundations of computer graphics. He and his colleagues developed a compression algorithm—a step-by-step computational procedure—that makes it practical to send 3-D images over the Internet, and to manipulate them on personal computers and handhelds.

Without this algorithm, the coming 3-D images would just further bog down the average PC with gobs of data. That's because such 3-D models describe an object's actual geometry, such as its depth or height, in detail, with all its measurements. The object could be a human head, or a part for an automobile in an on-line catalog, or a cartoon character. Such digital geometric data is typically acquired by 3-D laser scanning and represents objects using dense meshes with up to millions or even billions of triangles.

The compression challenge is to use the fewest possible bits (the basic unit of information in a digital computing system) to store and transmit these huge and complex sets of data. Efficient geometry compression—delivering the same or higher quality with fewer bits—could unlock the potential of high-end 3-D on consumer systems. The researchers, led by Schröder and Sweldens, report that their technique for geometry compression is up to 12 times better than standard compression methods.
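
To make the bits-versus-fidelity trade-off concrete, here is a minimal sketch in Python of one crude way to shrink 3-D geometry: snapping each vertex coordinate of a scanned mesh onto a coarse grid inside its bounding box. It is offered only as an illustration of how precision can be traded for bits; it is not the researchers' far more sophisticated technique, and the mesh here is randomly generated rather than scanned.

# Toy illustration of lossy geometry compression by coordinate quantization.
# This is NOT the researchers' method; it only shows how trading precision
# for bits shrinks 3-D mesh data.
import numpy as np

def quantize_vertices(vertices, bits=12):
    """Map float xyz coordinates onto a (2**bits)^3 grid inside their bounding box."""
    vmin = vertices.min(axis=0)
    vmax = vertices.max(axis=0)
    scale = (2**bits - 1) / np.maximum(vmax - vmin, 1e-12)
    quantized = np.round((vertices - vmin) * scale).astype(np.uint32)
    return quantized, vmin, scale

def dequantize(quantized, vmin, scale):
    return quantized / scale + vmin

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    verts = rng.random((100_000, 3)).astype(np.float32)   # stand-in for a scanned mesh
    q, vmin, scale = quantize_vertices(verts, bits=12)
    raw_bits = verts.size * 32        # 32-bit floats per coordinate
    packed_bits = verts.size * 12     # 12 bits per coordinate after quantization
    err = np.abs(dequantize(q, vmin, scale) - verts).max()
    print(f"compression ratio ~{raw_bits / packed_bits:.1f}x, max error {err:.2e}")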

Why this will be exciting for the everyday PC user is the power of 3-D images. "Imagine being able to download a 3-D model of Michelangelo's David to your home computer," says Schröder. "Not only would you see an individual picture, but you could examine in detail the chisel marks on David's cheek, or see what the statue looks like if you stood on a tall ladder."

Today, he says, such riches are reserved for high-end computer users with very high bandwidth Internet connections. "Sometime soon, though," says Schröder, "it can be available to any schoolchild the world over."

Writer: 
MW

New research shows that the brain is involved in visual afterimages

If you stare at a bright red disk for a time and then glance away, you'll soon see a green disk of the same size appear and then disappear. The perceived disk is known as an afterimage, and has long been thought to be an effect of the "bleaching" of photochemical pigments or adaptation of neurons in the retina and merely a part of the ocular machinery that makes vision possible.

But a novel experimental procedure devised by psychophysicists shows that the brain, through its own adaptive changes, is involved in the formation of afterimages.

Reporting in the August 31 issue of the journal Science, a joint team from the California Institute of Technology and NTT Communication Science Laboratories, led by Caltech professor Shinsuke Shimojo, demonstrates that adaptation to a specific visual pattern which induces perception of "color filling-in" later leads to a negative afterimage of the filled-in surface. The research further demonstrates that this global type of afterimage requires adaptation not at the retinal, but rather at the cortical, level of visual neural representation.

The Shimojo team employed a specific type of image (see image A below) in which a red semi-transparent square is perceived on top of four white disks. Only the wedge-shaped parts of the disks are colored, and there is no local stimulus or indication of redness in the central portion of the display, yet the color filling-in mechanism operates to give the impression of a filled-in red surface.

If an observer stared only at the red square for at least 30 seconds, he or she would see a reverse-color green square for a few seconds after refixating on a blank screen (as in the image at the top of C).

However, an observer who fixates on the image at left (in A) and then refixates on a blank screen will usually see four black disks such as the ones at the bottom of C, followed by a global afterimage in which a green square appears to be solid.

Because no light from the center of the original square was red during adaptation, and because the four white disks at first appear as clearly distinct black afterimages, the effect cannot have been caused merely by a leaking-over or fuzziness of neural adaptation. The global afterimage is thus distinct from a conventional afterimage.

One possibility is that local afterimages of the disks and wedges—but only these—are induced first, and then the color filling-in occurs to give an impression of the global square, just as in the case of red filling-in during adaptation. The researchers considered this element-adaptation hypothesis, but eventually rejected it.

The other hypothesis is that, because cortical neural circuits are known to produce the filling-in of the center of the red square, it may be this cortical circuitry that undergoes adaptation and directly creates the global negative green afterimage. This is called the surface-adaptation hypothesis, and it is the one the results eventually supported.

The researchers designed experiments that provided three lines of evidence rejecting the first hypothesis and supporting the second. First, the local and the global afterimages were visible with different timing and tended to be mutually exclusive. This argued against the first hypothesis, under which the local afterimages are necessary for seeing the global afterimage.

Second, when the strength of color filling-in during adaptation was manipulated by changing the timing of the presentation of the disks and colored wedges, the strength of the global afterimage rose and fell with it, as predicted by the surface-adaptation hypothesis but not by the element-adaptation hypothesis.

For the last piece of evidence, the researchers prepared a dynamic adapting stimulus designed specifically to minimize the local afterimages while maximizing the impression of color filling-in during adaptation. If the element-adaptation hypothesis were correct, test subjects would not observe the global afterimage. If, on the other hand, the surface-adaptation hypothesis were correct, observers would see only a vivid global afterimage. The result turned out to be the latter.

The study has no immediate applications, but furthers the understanding of perception and the human brain, says Shimojo, a professor of computation and neural systems at Caltech and lead author of the study.

"This has profound implications with regard to how brain activity is responsible for our conscious perception," he says.

According to Shimojo, the brain is the ultimate organ by which humans adapt to their environment, so it makes sense that the brain, and not only the retina, can modify its activity—and, as a result, perception—through experience and adaptation.

The other authors of the paper are Yukiyasu Kamitani, a Caltech graduate student in computation and neural systems, and Shin'ya Nishida of the NTT Communication Science Laboratories in Atsugi, Kanagawa, Japan.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Astronomers detect evidence of time when universe emerged from "Dark Ages"

Astronomers at the California Institute of Technology announced today the discovery of the long-sought "Cosmic Renaissance," the epoch when young galaxies and quasars in the early universe first broke out of the "Dark Ages" that followed the Big Bang.

"It is very exciting," said Caltech astronomy professor S. George Djorgovski, who led the team that made the discovery. "This was one of the key stages in the history of the universe."

According to a generally accepted picture of modern cosmology, the universe started with the Big Bang some 14 billion years ago, and was quickly filled with glowing plasma composed mainly of hydrogen and helium.

As the universe expanded and cooled over the next 300,000 years, the atomic nuclei and electrons combined to make atoms of neutral gas. The glow of this "recombination era" is now observed as the cosmic microwave background radiation, whose studies have led to the recent pathbreaking insights into the geometrical nature of the universe.

The universe then entered the Dark Ages, which lasted about half a billion years, until they were ended by the formation of the first galaxies and quasars. The light from these new objects turned the opaque gas filling the universe into a transparent state again, by splitting the atoms of hydrogen into free electrons and protons. This Cosmic Renaissance is also referred to by cosmologists as the "reionization era," and it signals the birth of the first galaxies in the early universe.

"It is as if the universe was filled by a dark, opaque fog up to that time," explains Sandra Castro, a postdoctoral scholar at Caltech and a member of the team. "Then the fires—the first galaxies—lit up and burned through the fog. They made both the light and the clarity."

The researchers saw the tell-tale signature of the cosmic reionization in the spectra of a very distant quasar, SDSS 1044-0125, discovered last year by the Sloan Digital Sky Survey (SDSS). Quasars are very luminous objects in the distant universe, believed to be powered by massive black holes.

The spectra of the quasar were obtained at the W. M. Keck Observatory's Keck II 10-meter telescope atop Mauna Kea, Hawaii. The spectra show extended dark regions, caused by opaque gas along the line of sight between Earth and the quasar. This effect was predicted in 1965 by James Gunn and Bruce Peterson, both then at Caltech. Gunn, now at Princeton University, is the leader of the Sloan Digital Sky Survey; Peterson is now at Mt. Stromlo and Siding Spring observatories, in Australia.

The process of converting the dark, opaque universe into a transparent, lit-up universe was not instantaneous: it may have lasted tens or even hundreds of millions of years, as the first bright galaxies and quasars were gradually appearing on the scene, the spheres of their illumination growing until they overlapped completely.

"Our data show the trailing end of the reionization era," says Daniel Stern, a staff scientist at the Jet Propulsion Laboratory and a member of the team. "There were opaque regions in the universe back then, interspersed with bubbles of light and transparent gas."

"This is exactly what modern theoretical models predict," Stern added. "But the very start of this process seems to be just outside the range of our data."

Indeed, the Sloan Digital Sky Survey team has recently discovered a couple of even more distant quasars, and has reported in the news media that they, too, see the signature of the reionization era in the spectra obtained at the Keck telescope.

"It is a wonderful confirmation of our result," says Djorgovski. "The SDSS deserves much credit for finding these quasars, which can now be used as probes of the distant universe—and for their independent discovery of the reionization era."

"It is a great example of a synergy of large digital sky surveys, which can discover interesting targets, and their follow-up studies with large telescopes such as the Keck," adds Ashish Mahabal, a postdoctoral scholar at Caltech and a member of the team. "This is the new way of doing observational astronomy: the quasars were found by SDSS, but the discovery of the reionization era was done with the Keck."

The Caltech team's results have been submitted for publication in the Astrophysical Journal Letters, and will appear this Tuesday on the public electronic archive, http://xxx.lanl.gov/list/astro-ph/new.

The W. M. Keck Observatory is a joint venture of Caltech, the University of California, and NASA, and is made possible by a generous gift from the W. M. Keck Foundation.

Writer: 
Robert Tindol

Survival of the Fittest . . . Or the Flattest?

Darwinian dogma states that in the marathon of evolution, the genotype that replicates fastest wins. But scientists at the California Institute of Technology now say that while this is true, once you factor in another basic process of evolution, that of mutations, it's often the tortoise that defeats the hare.

It turns out that mutations, the random changes that can take place in a gene, are the wild cards in the great race. The researchers found that at high mutation rates, genotypes with a slower replication rate can displace faster replicators if the slower replicator has a higher "robustness" against mutations; that is, if a mutation is, on average, less harmful to the slower replicator than to the faster one. The research, to appear in the July 19 issue of the journal Nature, was conducted by several investigators, including Claus Wilke, a postdoctoral scholar; Chris Adami, who holds joint appointments at Caltech and the Jet Propulsion Laboratory; Jia Lan Wang, an undergraduate student; Charles Ofria, a former Caltech graduate student now at Michigan State University; and Richard Lenski, a professor at Michigan State.

In a takeoff on a common Darwinian phrase, they call their finding "survival of the flattest" rather than survival of the fittest. The idea is this: if a group of similar genotypes with a faster replication rate occupies a "high and narrow peak" in the landscape of evolutionary fitness, while a different group of genotypes that replicates more slowly occupies a lower but flatter, or broader, peak, then, when mutation rates are high, the broadness of the lower peak can offset the height of the higher peak. That means the slower replicator wins. "In a way, organisms can trade replication speed for robustness against mutations and vice versa," says Wilke. "Ultimately, the organisms with the most advantageous combination of both will win."
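
The switch from fittest to flattest can be reproduced with a toy model, sketched below in Python (a deliberately simplified illustration, not the digital-organism software the researchers used). Lineage A replicates faster but sits on a narrow peak, so its mutants are crippled; lineage B replicates more slowly on a flat peak, so its mutants keep most of their fitness. At low mutation rates A dominates, and past a threshold B takes over.

# Minimal "survival of the flattest" sketch. Two lineages compete: A replicates
# fast on a narrow fitness peak (mutants at 10% fitness); B replicates more
# slowly on a flat peak (mutants at 90% fitness). All numbers are illustrative.
import numpy as np

def flat_lineage_share(u, gens=300):
    # abundances: [A_master, A_mutant, B_master, B_mutant]
    n = np.array([1.0, 0.0, 1.0, 0.0])
    fit = np.array([2.0, 2.0 * 0.1,    # peak A: high but narrow
                    1.5, 1.5 * 0.9])   # peak B: lower but flat
    for _ in range(gens):
        offspring = n * fit
        # a fraction u of each master's offspring mutate onto its peak's shoulder
        moved = np.array([-offspring[0] * u, offspring[0] * u,
                          -offspring[2] * u, offspring[2] * u])
        n = offspring + moved
        n /= n.sum()               # keep total population size constant
    return n[2] + n[3]             # share held by the slower, flatter lineage B

for u in (0.05, 0.2, 0.4, 0.6):
    print(f"mutation rate {u:.2f}: flat lineage share = {flat_lineage_share(u):.3f}")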

Discerning such evolutionary nuances, though, is no easy task. Testing an evolutionary theory requires waiting for generation after generation of an organism to pass. To make matters worse, the simplest living system, the precursor to all living systems on Earth, was replaced by much more complicated systems over the last four billion years.

Wilke and his collaborators found the solution in the growing power of computers, constructing via software an artificial living system that behaves in remarkably lifelike ways. Such digital creatures evolve in the same way biological life forms do; they live in, and adapt to, a virtual world created for them inside a computer. This offers an opportunity to test generalizations about living systems that may extend beyond the organic life that biologists usually study. Though this research did not involve actual living organisms, one of the authors, Richard Lenski, is a leading expert on the evolution of the bacterium Escherichia coli. Lenski believes that digital organisms are sufficiently realistic to yield biological insights, and he continues his research on both E. coli and digital organisms.

In their digital world, the organisms are self-replicating computer programs that compete with one another for CPU (central processing unit) cycles, which are their limiting resource. Digital organisms have genomes in the form of a series of instructions, and phenotypes that are obtained by execution of their genomic program. The creatures physically inhabit a reserved space in the computer's memory—an "artificial Petri dish"—and they must copy their own genomes. Moreover, their evolution does not proceed toward a target specified in advance, but rather proceeds in an open-ended manner to produce phenotypes that are more successful in a particular environment.

Digital creatures lend themselves to evolutionary experiments because their environment can be readily manipulated to examine the importance of various selective pressures. In this study, though, the only environmental factor varied was the mutation rate. Whereas in nature mutations are random changes that can take place in DNA, a digital organism's mutations are random changes to its computer program. A command may be switched, for example, or a sequence of instructions copied twice.
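
As a purely hypothetical illustration of what one mutation step might look like in such a system, the sketch below perturbs a genome represented as a list of instructions; the instruction names and mutation rates are invented for the example and are not taken from the study.

import random

# Hypothetical instruction set for a toy digital organism (names invented for
# illustration; not the instruction set used in the study).
INSTRUCTION_SET = ["nop-a", "nop-b", "inc", "dec", "push", "pop",
                   "swap", "if-less", "jump", "h-copy", "h-alloc", "h-divide"]

def mutate(genome, rate=0.01, rng=random):
    """Copy a genome, switching each instruction to a random one with probability
    `rate`, and occasionally duplicating a short stretch (a crude copy error)."""
    child = [rng.choice(INSTRUCTION_SET) if rng.random() < rate else instr
             for instr in genome]
    if rng.random() < rate:              # rare segmental duplication
        start = rng.randrange(len(child))
        child[start:start] = child[start:start + rng.randint(1, 3)]
    return child

parent = ["h-alloc"] + ["nop-a"] * 20 + ["h-copy", "h-divide"]
print(mutate(parent, rate=0.05))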

For this study, the scientists evolved 40 pairs of digital organisms from 40 different ancestors in identical selective environments. The only difference was that one member of each pair was subjected to a fourfold higher mutation rate. In 12 of the 40 cases, the dominant genotype that evolved at the lower mutation rate replicated at a pace 1.5-fold faster than its counterpart at the higher mutation rate.

Next, the scientists allowed each of these 12 disparate pairs to compete across a range of mutation rates. In each case, as the mutation rate was increased, the outcome of the competition switched to favor the genotype with the lower replication rate. The researchers believe that these slower genotypes, although they occupied lower fitness peaks, sat in flatter regions of the fitness surface and were, as a result, more robust with respect to mutations.

The digital organisms have the advantage that many generations can be studied in a brief period of time. But the researchers believe a colony of asexual bacteria, subjected to the same stresses as the digital organisms, would probably face similar consequences.

The concept of "survival of the flattest" seems to imply, the authors say, that, at least for populations subject to a high mutation rate, selection acts upon a group of mutants rather than the individual. Thus, under such circumstances, genotypes that unselfishly produce mutant genotypes of high fitness are selected for, and are in turn supported by other mutants in that group. The study therefore reveals that "selfish genes," while a successful strategy at low mutation rates, may be outcompeted by unselfish ones when the mutation rate is high.

Up to 6 million votes lost in 2000 presidential election, Voting Technology Project reveals

Though over 100 million Americans went to the polls on election day 2000, as many as 6 million might just as well have spent the day fishing. Researchers at Caltech and MIT call these "lost votes" and think the number of uncounted votes could easily be cut by more than half in the 2004 election with just three simple reforms.

"This study shows that the voting problem is much worse than we expected," said Caltech president David Baltimore, who initiated the nonpartisan study after the November election debacle.

"It is remarkable that we in America put up with a system where as many as six out of every hundred voters are unable to get their vote counted. Twenty-first-century technology should be able to do much better than this," Baltimore said.

According to the comprehensive Caltech-MIT study, faulty and outdated voting technology together with registration problems were largely to blame for many of the 4-to-6 million votes lost during the 2000 election.

With respect to the votes that simply weren't counted, the researchers found that punch-card methods and some direct recording electronic (DRE) voting machines were especially prone to error. Lever machines, optically scanned ballots, and hand-counted paper ballots were somewhat less likely to result in spoiled or "residual" votes. Optical scanning, moreover, performed better than lever machines.

As for voter registration problems, lost votes resulted primarily from inadequate registration data available at the polling places, and the widespread absence of provisional ballot methods to allow people to vote when ambiguities could not be resolved at the voting precinct.

 

The three most immediate ways to reduce the number of residual votes would be to:

· replace punch cards, lever machines, and some underperforming electronic machines with optical scanning systems;

· make countywide or even statewide voter registration data available at polling places;

· make provisional ballots available.

The first reform, it is estimated, would save up to 1.5 million votes in a presidential election, while the second and third together would rescue as many as 2 million votes.

"We could bring about these reforms by spending around $3 per registered voter, at a total cost of about $400 million," says Tom Palfrey, a professor of economics and political science who headed the Caltech effort. "We think the price of these reforms is a small price to pay for insurance against a reprise of November 2000."

Approximately half the cost would go toward equipment upgrades, while the remainder would be used to implement improvements at the precinct level, in order to resolve registration problems on the spot. The $400 million would be a 40 percent increase over the money currently spent annually on election administration in the United States.

In addition to these quick fixes, the report identifies five long-run recommendations.

· First, institute a program of federal matching grants for equipment and registration system upgrades, and for polling-place improvement.

· Second, create an information clearinghouse and data-bank for election equipment and system performance, precinct-level election reporting, recounts, and election finance and administration.

· Third, develop a research grant program to field-test new equipment, develop better ballot designs, and analyze data on election system performance.

· Fourth, set more stringent and more uniform standards on performance and testing.

· Fifth, create an election administration agency, independent of the Federal Election Commission. The agency would be an expanded version of the current Office of Election Administration, and would oversee the grants program, serve as an information clearinghouse and databank, set standards for certification and recertification of equipment, and administer research grants.

The report also proposes a new modular voting architecture that could serve as a model for future voting technology. The Caltech-MIT team concludes that this modular architecture offers greater opportunity for innovation in ballot design and security.

Despite strong pressure to develop Internet voting, the team recommends a go-slow approach. The prospect of fraud and coercion, as well as hacking and service disruption, led the team to urge caution; many Americans, moreover, are still unfamiliar with the technology.

"The Voting Technology Project is part of a larger effort currently underway—involving many dedicated election officials, researchers, and policy makers—to restore confidence in our election system," commented Steve Ansolabehere, a professor of political science who headed up the MIT team. "We are hopeful that the report will become a valuable resource, and that it will help to bring about real change in the near future."

Baltimore and MIT president Charles Vest announced the study on December 15, two days after the outcome of the presidential election was finally resolved. Funded by a $250,000 grant from the Carnegie Corporation, the study was intended to "minimize the possibility of confusion about how to vote, and offer clear verification of what vote is to be recorded," and "decrease to near zero the probability of miscounting votes."

The report is publicly available on the Caltech-MIT Voting Technology Project Website:

http://vote.caltech.edu

Writer: 
Robert Tindol

Factors causing high mutation rates could have led to origin of sexual reproduction, study shows

Biologists have long known the advantages of sexual reproduction to the evolution and survival of species. With a little sex, a fledgling creature is more likely to pass on the good mutations it may have, and more able to deal with the sort of environmental adversity that would send its asexual neighbors floundering into the shallow end of the gene pool.

The only problem is that it's hard to figure out how sex got started in the first place. Not only do many primitive single-celled organisms do just fine with asexual reproduction, but mathematical models show that a sexual mutant in an asexual population is most likely not favored to compete successfully and pass on its genes.

Now, researchers from the California Institute of Technology and the Jet Propulsion Laboratory, using "digital organisms" and RNA, have concluded that established asexual bacteria could be nudged toward evolving sexual reproduction by certain forms of environmental stress, such as radiation or catastrophic meteor or comet impacts, that give rise to a high rate of mutations.

In an article that has significant implications for understanding the origin of sexual reproduction in the early world, Claus Wilke of Caltech and Chris Adami, who holds joint appointments at Caltech and JPL, report that a change in conditions causing higher rates of mutations can lead an asexual population to an adaptation that may be sufficient to give mutant individuals a greater advantage if those mutants reproduce sexually.

The paper, published in the July 22 issue of the Royal Society journal Proceedings: Biological Sciences B, builds on earlier work by Adami and his collaborators, showing that digital organisms—that is, self-replicating computer programs designed to closely resemble the life cycles of living bacteria—can actually adapt to become more robust.

"What we showed in the other paper," says Adami, "is that if you transfer a fragile organism that evolved with a small mutation rate into a high-mutation-rate environment, it will adapt to this environment by becoming more robust."

One of the reasons the origin of sexual reproduction has been a mystery is an effect known as "mutation accumulation." Organisms tend to adapt in ways that decrease the effects of mutations, becoming less vulnerable in the process.

But this kind of robustness is poisonous, because with sexual recombination, deleterious mutations would simply accumulate in the organism and thus lead to a gradual loss of genes. This handicap of sexual creatures would be enough to guarantee their extinction when competing against asexual ones.

This can be avoided if the effects of mutations compound—that is, if the effect of two or more simultaneous deleterious mutations is worse than the effects of the individual mutations combined. In this manner, an organism may be robust to a few mutations but incapable of surviving a large number of them, so that mutations cannot accumulate.

The new revelation by Wilke and Adami is that there is a conservation law at work in the relationship between the compounding of mutations and the fitness decay due to single mutations. This law says that robustness to a few mutations implies vulnerability to a large number, while robustness to many mutations must go hand in hand with vulnerability to single mutations.

Thus, increasing robustness to single mutations automatically makes multiple mutations intolerable, which removes organisms with multiple deleterious mutations from the population and allows sexual recombination to reap the rewards from sharing beneficial mutations.
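
In slightly more formal terms, the trade-off is often written with a simple fitness function. The parameterization below is a standard one in this literature and is shown only to illustrate the conservation idea described above; it is not necessarily the exact formula used in the paper.

    w(n) = w_0 \, e^{-\alpha n^{\beta}}, \qquad w(1)/w_0 = e^{-\alpha}

Here n is the number of deleterious mutations an organism carries, \alpha sets the average cost of a single mutation (small \alpha means robustness to single hits), and \beta describes how mutations interact (\beta > 1 means their effects compound). If fitness must collapse by roughly the same overall factor once some number n^* of mutations has accumulated, then \alpha (n^*)^{\beta} stays roughly fixed, so lowering \alpha forces \beta upward, and vice versa: robustness to a few mutations implies vulnerability to many.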

Because stressful environments with high mutation rates push organisms to become robust to single mutations, the conservation law guarantees that this evolutionary pressure also pushes asexual organisms on to the road toward sexual recombination.

The researchers studied the evolution of digital organisms and RNA secondary structure, because accurate data on the decay of fitness and the effect of multiple mutations (whether they are compounding or mitigating) for living organisms is quite rare. For the RNA study, the researchers used known sequences with well-understood folds and then tried various mutations to see which mutations mattered and which didn't, in a system that computationally predicts RNA secondary structure. The results supported the conservation law.

Though the study did not involve actual living organisms, Adami has collaborated in the past with experts on bacteria to demonstrate that the digital organisms are indeed realistic. In an earlier 1999 study, for example, Adami's collaborator was a leading expert on the evolution of the E. coli bacteria.

The digital organisms have the advantage that many generations can be studied in a brief period of time, but Adami thinks a colony of asexual bacteria subjected to the stress imposed on the digital organisms in the experiment would probably face similar consequences.

"If you took a population of E. coli and subjected it to high mutation rates for many years—for example by irradiation or introducing mutagenic factors—at some point you might observe that exchange of genetic material, a precursor to sexual recombination, would become favorable to the organisms and thus selected for, if at the same time the environment changes fast enough that enough mutations are beneficial," he says.

"But that's a very difficult experiment with living organisms because of the time involved, and because it is difficult to construct constantly changing environments in a petri dish. This is easier with digital organisms, and will probably be first observed there.

"The reason the origin of sexual reproduction has been such a big mystery is that we look at the world as it is now," Adami says. "But the early world was a much more stressful place, sometimes changing very rapidly.

"We can't say how or when sexual reproduction came to take a hold in nature, but we can now say that high mutation rates can, under the right conditions, force an asexual organism to become sexual."

Adami earned his doctorate in theoretical physics at SUNY Stony Brook. He is a faculty associate in the computation and neural systems department at Caltech, and a research scientist at JPL. He is the author of the 1998 book Introduction to Artificial Life. Wilke, also a physicist, is a postdoctoral fellow in Adami's Digital Life Laboratory.

The article appears in Proceedings: Biological Sciences B, volume 268, number 1475, page 1469. The cover date is 22 July 2001, but the article is available on-line at http://www.pubs.royalsoc.ac.uk/proc_bio/proc_bio.html

Writer: 
RT

Caltech researchers successfully raise obelisk with kite to test theory about ancient pyramids

When people think about the building of the Egyptian pyramids, they probably have a mental image of thousands of slaves laboriously rolling massive stone blocks with logs and levers. But as one Caltech aeronautics professor is demonstrating, the task may have been accomplished by just four or five guys who flew the stones into place with a kite.

On Saturday, June 23, Mory Gharib and his team raised a 6,900-pound, 15-foot obelisk into vertical position in the desert near Palmdale by using nothing more than a kite, a pulley system, and a support frame. Though the blustery winds were gusting upwards of 22 miles per hour, the team set the obelisk upright on the second try.

"It actually lifted up the kite flyer, Eric May, so we had to kill the kite quickly," said Gharib. "But we finished it off the second time."

Emilio Castano Graff, a Caltech undergraduate who tackled the problem under the sponsorship of the Summer Undergraduate Research Fellowship (SURF) program, was also pleased with the results.

"The wind wasn't that great, but basically we're happy with it," he said.

Despite the lack of a steady breeze, the team raised the obelisk in about 25 seconds—so quickly, in fact, that the concrete-and-rebar object was lifted off the ground and swung free for a few seconds. Once the motion had stabilized, the team lowered the obelisk into an upright position.

The next step is to build an even bigger obelisk to demonstrate that even the mammoth 300-ton monuments of ancient Egypt—not to mention the far less massive building blocks of Egypt's 90-odd pyramids—could have been raised with a fraction of the effort that modern researchers have assumed.

Gharib has been working on the project since local business consultant Maureen Clemmons contacted him and his Caltech aeronautics colleagues two years ago. Clemmons had seen a picture in Smithsonian magazine in 1997 of an obelisk being raised, and came up with the idea that the ancient Egyptian builders could have used kites to accomplish the task more easily. All she needed was an aeronautics expert with the proper credentials to field-test her theory.

Clemmons' kite theory was a drastic departure from conventional thinking, which holds that thousands of slaves used little more than brute force and log-rolling to put the stone blocks and obelisks in place. No one has ever come up with a substantially better system for accomplishing the task, and even today the moving of heavy stones would be quite labor-intensive without power equipment.

To demonstrate how little progress was made in the centuries after the age of the pyramids had passed, Gharib points out that, in 1586, the Vatican moved a 330-ton Egyptian obelisk to St. Peter's Square. It is known that lifting the stone into vertical position required 74 horses and 900 men using ropes and pulleys.

It is a credit to Clemmons' determination that the idea is so far along in the testing stage. With no scientific or archaeological training, she has managed to marshal the efforts of family, friends, and other enthusiasts to work on a theory that could well revolutionize the knowledge of ancient engineering practices—and perhaps lead to a reinterpretation of certain ancient symbols as well.

In the course of researching the tools available to the Egyptian pyramid builders, she has discovered, for example, that a brass ankh—long assumed to be merely a religious symbol—makes a very good carabiner for controlling a kite line. And a type of insect commonly found in Egypt could have supplied a kind of shellac to make linen sails hold wind. As for objections to the use of pulleys, the team's intention was always to progress later—actually, "regress" might be a more appropriate word—to the windlasses apparently used to hoist sails on Egyptian ships.

"The whole approach has been to downgrade the technology," Gharib says. "We first wanted to show that a kite could raise a huge weight at all. Now that we're raising larger and larger stones, we're also preparing to replace the steel scaffolding with wooden poles and the steel pulleys with wooden pulleys like the ones they may have used on Egyptian ships.

For Gharib, the idea of accomplishing heavy tasks with limited manpower is appealing from an engineer's standpoint because it makes more logistical sense.

"You can imagine how hard it is to coordinate the activities of hundreds if not thousands of laborers to accomplish an intricate task," says Gharib. "It's one thing to send thousands of soldiers to attack another army on a battlefield. But an engineering project requires everything to be put precisely into place.

"I prefer to think of the technology as simple, with relatively few people involved."

The concept Gharib has developed with Graff is to build a simple structure around the obelisk with a pulley system mounted somewhat forward of the stone. That way, the base of the obelisk will drag along the ground for a few feet as the kite lifts the stone, and the stone will then be quite stable once it has been pulled up to a vertical position. If the obelisk were raised with the base as a pivot, the stone would tend to swing past the vertical position and fall the other way.

The top of the obelisk is tied with ropes threaded through the pulleys and attached to the kite. A couple of workers guide the operation with ropes attached to the pulleys.
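
A rough back-of-the-envelope check, sketched below in Python, shows why such a rig is plausible. Every number except the obelisk's 6,900-pound weight and the roughly 22-mile-per-hour gusts reported above is an assumption made for illustration; the article does not give the kite's area, its aerodynamic coefficient, or the pulley system's mechanical advantage.

# Back-of-the-envelope estimate; parameters marked "assumed" are not from the article.
RHO = 1.2            # air density, kg/m^3
V = 22 * 0.447       # 22 mph converted to m/s
KITE_AREA = 30.0     # m^2, assumed parafoil size
C_FORCE = 1.2        # assumed aerodynamic force coefficient
MECH_ADVANTAGE = 8   # assumed pulley-system ratio

q = 0.5 * RHO * V**2                  # dynamic pressure, Pa
kite_pull = q * KITE_AREA * C_FORCE   # newtons delivered along the kite line

weight = 6900 * 4.448                 # obelisk weight in newtons (6,900 lbf)
# Tipping a uniform beam up about one end takes roughly half its weight applied
# at the far end, before the pulleys multiply the kite's pull.
needed_at_line = (weight / 2) / MECH_ADVANTAGE

print(f"kite pull ~{kite_pull/1000:.1f} kN, needed ~{needed_at_line/1000:.1f} kN")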

Of course, no one has any idea if the ancient Egyptians actually moved stones or anything else with kites and pulleys, but Clemmons has found some tantalizing hints that the project is on the right track. On a building frieze now displayed in a Cairo museum, there is a wing pattern in bas relief that does not resemble any living bird. Directly below are several men standing near vertical objects that could be ropes.

Gharib's interest is not necessarily to make archeological contributions, but to demonstrate that the technique is viable.

"We're not Egyptologists," he says. "We're mainly interested in determining whether there is a possibility that the Egyptians were aware of wind power, and whether they used it to make their lives better."

Now that Gharib and his team have successfully raised the four-ton concrete obelisk with everyone watching, they will proceed to a 10-ton stone, then perhaps to 20 tons. Eventually they hope to receive permission to raise one of the obelisks that still lies in an Egyptian quarry.

"In fact, we may not even need a kite. It could be we can get along with just a drag chute."

Finally, one might ask whether there was and is sufficient wind in Egypt for a kite or a drag chute to fly. The answer is that steady winds of up to 30 miles per hour are not unusual in the areas where the pyramids and obelisks are found.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Hensen's node in chicken embryos governs movement of neural cells, study shows

For us living creatures with backbones, existence begins as a single fertilized cell that then subdivides and grows into a fetus with many, many cells. But the details of how those cells end up as discrete organs instead of undifferentiated heaps of cells are only now being understood in microscopic detail.

Why, for example, should some of the cells migrate to the region that will become the brain, while others travel netherward to make a spinal cord? Although some details are known about which cells contribute to particular regions of the nervous system and which signals help to establish the organization of the brain, much less is known about factors that guide the development of the spinal cord.

In a new study, researchers from the California Institute of Technology have gained unprecedented information about the molecular signals and cell movements that coordinate to form the spinal cord. The study takes advantage of recently developed bioimaging and cell labeling techniques to follow individual cell movements in a developing chick embryo through a clear "window" cut into a fertilized egg. The results, reported in the June issue of the journal Nature Cell Biology, suggest that a proliferative stem zone at the tail end of the growing embryo contributes descendants to the growing neuraxis.

"The basic idea is that descendants of cells from Hensen's node, the structure that lays down the trunk, are sequentially distributed along the elongating spinal cord" says Luc Mathis, a former researcher in the lab of Caltech biology professor Scott Fraser, and lead author of the paper. "In the past, we did not have the ability to follow individual cells in living vertebrate embryos and could not determine how neural precursor cells could remain within Hensen's node, while some descendants leave it to form the spinal cord. "

In the paper, the researchers explain that neural precursor cells get displaced into the neural axis by the proliferation in Hensen's node. The researchers labeled cells near Hensen's node in 40-hour-old chick embryos by using an external electric field to deliver an expression vector encoding green fluorescent protein (GFP) into cells, a process called electroporation. Using state-of-the-art imaging techniques developed by postdoctoral researcher Paul Kulesa, the group recorded the motion of fluorescent cells in ovo using a confocal microscope set up for time-lapse imaging and surrounded by a heated chamber to maintain embryo development.

"As the cells proliferate, some progenitors are displaced from the stem zone to become part of the neural plate and spinal cord," Mathis says. "Our analyses show that the Hensen's node produces daughter cells that are eventually displaced out of the node zone on the basis of their position in relation to other proliferating cells, and not on the basis of asymmetric cell divisions."

The paper also addresses the molecular signaling involved in the spreading of the cells. Previous work has shown that fibroblast growth factor (FGF) is somehow involved in formation of the posterior nervous system. To test the possibility that FGF could act by maintaining the stem zone of cell proliferation, the researchers disrupted FGF signaling within Hensen's node. Indeed, the result was a seriously shortened spinal cord and premature exit of cells from the node, indicating that FGF is required for the proliferation of neural precursor cells in the stem zone that generates the spinal cord.

A structure similar to Hensen's node—called simply a "node"—is found in mammals, and analogous zones are found in other vertebrates as well. The cell behavior and genetic control discovered in the chick might also be responsible for the development of the spinal cord in mammals, including humans.

"This new understanding of the formation of the spinal cord is the result of a fusion between hypotheses that arose during previous studies that I had conducted in France, the great embryological background and imaging facilities provided by Scott Fraser, and the original experimental systems of cell tracking developed by Paul Kulesa" concludes Mathis."

Scott Fraser is the Anna L. Rosen Professor of Biology and the director of the Biological Imaging Center of Caltech's Beckman Institute. Luc Mathis is a former researcher at the Biological Imaging Center who is currently at the Pasteur Institute in Paris. Paul Kulesa is a senior research fellow supported by the computational molecular biology program and associated with the Biological Imaging Center.

Contact: Robert Tindol (626) 395-3631

Writer: 
RT

Caltech Uses Fluorescent Protein to Visualize the Work of Living Neurons

Neuroscientists have long suspected that dendrites—the fine fibers that extend from neurons—can synthesize proteins. Now, using a molecule they constructed that "lights up" when synthesis occurs, a biologist and her colleagues from the California Institute of Technology have proven just that.

Erin M. Schuman, an associate professor of biology at Caltech and an assistant investigator with the Howard Hughes Medical Institute, along with colleagues Girish Aakalu, Bryan Smith, Nhien Nguyen, and Changan Jiang, published their findings last month in the journal Neuron. Proving that protein synthesis does indeed occur in intact dendrites suggests the dendrites may also have the capacity to adjust the strength of connections between neurons. That in turn implies they may influence vital neural activities such as learning and memory.

Schuman and colleagues constructed a so-called "reporter" molecule that, when introduced into neurons, emits a telltale glow if protein synthesis is occurring. "There was early evidence that protein-synthesis machinery was present in dendrites," says Schuman. "Those findings were intriguing because they implied that dendrites had the capacity to make their own proteins."

The idea that dendrites should be able to synthesize proteins made sense to Schuman and others because it was more economical and efficient. "It's like the difference between centralized and distributed freight shipping," she says. "With central shipping, you need a huge number of trucks that drive all over town, moving freight from a central factory. But with distributed shipping, you have multiple distribution centers that serve local populations, with far less transport involved."

Previous studies had indicated that, in test tubes, tiny fragments of dendrites still had the capacity to synthesize proteins. Schuman and her colleagues believed that visualizing local protein synthesis in living neurons would provide a more compelling picture than was currently available.

The scientists began their efforts to create a reporter molecule by flanking a gene for a green fluorescent protein with two segments of another gene for a particular enzyme. Doing this ensured that the researchers would target the messenger RNA (mRNA) for their reporter molecule to dendrites.

Next, in a series of experiments, the group inserted the reporter molecule into rat neurons in culture, and then triggered protein synthesis using a growth factor called BDNF. By imaging the neurons over time, the investigators showed that the green fluorescent protein was expressed in the dendrites following BDNF treatment—proof that protein synthesis was taking place. Going a step further, the researchers showed they could cause the fluorescence to disappear by treating the neurons with a drug that blocked protein synthesis.

Schuman and her colleagues also addressed whether proteins synthesized in the main cell body, called the soma, could have diffused to the dendrites, rather than the dendrites themselves performing the protein synthesis. The researchers proved the proteins weren't coming from the soma by simply snipping the dendrites from the neurons, while maintaining their connection to their synaptic partners. Sure enough, the isolated dendrites still exhibited protein synthesis.

Intriguingly, says Schuman, hot spots of protein synthesis were observed within the dendrites. By tracking the location of the fluorescent signal over time, the researchers could see that these hotspots waxed and waned consistently in the same place. "The main attraction of local protein synthesis is that it could endow synapses with the capacity to make synapse-specific changes, which is a key property of information-storing systems," says Schuman. "The observation of such hot spots suggests there are localized areas of protein synthesis near synapses that may provide new proteins to synapses nearby."

Schuman and her colleagues are now applying their reporter molecule system to more complex brain slices and whole mice. "In the whole animals, we're exploring the role of dendritic protein synthesis in information processing and animal learning and behavior," says Schuman.

Writer: 
MW

Brightest Quasars Inhabit Galaxies with Star-Forming Gas Clouds, Scientists Discover

A team of scientists at the California Institute of Technology and the State University of New York at Stony Brook has found strong evidence that high-luminosity quasar activity in galaxy nuclei is linked to the presence of abundant interstellar gas and high rates of star formation.

In a presentation at the summer meeting of the American Astronomical Society, Caltech astronomy professor Nick Scoville and his colleagues reported today that the most luminous nearby optical quasar galaxies have massive reservoirs of interstellar gas much like the so-called ultraluminous infrared galaxies (or ULIRGs). The quasar nucleus is powered by accretion onto a massive black hole with a mass typically about 100 million times that of the sun, while the infrared galaxies are powered by extremely rapid star formation. The ULIRG "starbursts" are believed to result from the high concentration of interstellar gas and dust in the galactic centers.

"Until now, it has been unclear how the starburst and quasar activities are related," Scoville says, "since many optically bright quasars show only low levels of infrared emission which is generally assumed to measure star formation activity.

"The discovery that quasars inhabit gas-rich galaxies goes a long way toward explaining a longstanding problem," Scoville says. "The number of quasars has been observed to increase very strongly from the present back to Redshift 2, at which time the number of quasars was at a maximum.

"The higher number of quasars seen when the universe was younger can now be explained, since a larger fraction of the galaxies at that time had abundant interstellar gas reservoirs. At later times, much of this gas has been used up in forming stars.

"In addition, the rate of merging galaxies was probably much higher, since the universe was smaller and galaxies were closer together."

The new study shows that even optically bright quasar-type galaxies (QSOs) have massive reservoirs of interstellar gas, even without strong infrared emission from the dust clouds associated with star formation activity. Thus, the fueling of the central black hole in the quasars is strongly associated with the presence of an abundant interstellar gas supply.

The Scoville team used the millimeter-wave radio telescope array at Caltech's Owens Valley Radio Observatory near Bishop, California, for an extremely sensitive search for the emission of carbon monoxide (CO) molecules in a complete sample of the 12 nearest and brightest optical quasars previously catalogued at the Palomar 200-inch telescope in the 1970s. In particular, the researchers avoided selecting samples with bright infrared emissions, since that would bias the sample toward those with abundant interstellar dust clouds.

In this optically selected sample, eight of the 12 quasars exhibited detectable CO emission, implying masses of interstellar molecular clouds in the range of two to 10 billion solar masses. (For reference, the Milky Way galaxy contains approximately two billion solar masses of molecular clouds.) Such large gas masses are found only in gas-rich spiral or colliding galaxies. The present study clearly shows that most quasars are also in gas-rich spiral or interacting galaxies, not gas-poor elliptical galaxies, as previously thought.

The new study supports the hypothesis that there exists an evolutionary link between the two most luminous classes of galaxies: merging ultraluminous IR galaxies and ultraviolet/optically bright QSOs. Both the ULIRGs and QSOs show evidence of a recent galactic collision.

The infrared luminous galaxies are most often powered by prodigious starbursts in their galactic centers, forming young stars at 100 to 1,000 times the current rate in the entire Milky Way. The quasars are powered by the accretion of matter into a massive black hole at their nuclei at a rate of one to 10 solar masses per year.

The detection of abundant interstellar gas in the optically selected QSOs suggests a link between these two very different forms of galactic nuclear activity. The same abundant interstellar gases needed to form stars at a high rate might also feed the central black holes.

In normal spiral galaxies like the Milky Way, most of the interstellar molecular gas is in the galactic disk, at distances of typically 20,000 light-years from the center, well out of reach of a central black hole.

However, during galactic collisions, the interstellar gas can sink and accumulate within the central few hundred light-years, and massive concentrations of interstellar gas and dust are, in fact, seen in the nuclear regions of the ULIRGs. Once in the nucleus, this interstellar matter can both fuel the starburst and feed the central black hole at prodigious rates.

The discovery of molecular gas in the optically selected QSOs that do not have strong infrared emissions suggests that the QSO host galaxies might be similar systems observed at a later time after the starburst activity has subsided, yet with the black hole still being fed by interstellar gas.

For the remaining four quasars where CO was not detected, improved future instrumentation may well yield detections of molecular gas, Scoville says. Even in the detected galaxies, the CO emission was extraordinarily faint because of their great distances, typically over a billion light-years. The remaining four galaxies could well have molecular gas masses only a factor of two below those that were detected.

Future instrumentation such as the CARMA and ALMA millimeter arrays will have vastly greater sensitivity, permitting similar studies out to much greater distances.

Other members of the team are David Frayer and Eva Schinnerer, both research scientists at Caltech; Caltech graduate students Micol Christopher and Naveen Reddy; and Aaron Evans of SUNY Stony Brook.

###

Contact: Robert Tindol (626) 395-3631

Writer: 
RT
