Geobiologists create novel method for studying ancient life forms

PASADENA, Calif.--Geobiologists are announcing today their first major success in using a novel method of "growing" bacteria-infested rocks in order to study early life forms. The research could be a significant tool for use in better understanding the history of life on Earth, and perhaps could also be useful in astrobiology.

Reporting in the August 23 edition of the journal Geology, California Institute of Technology geobiology graduate student Tanja Bosak and her coauthors describe their success in growing calcite crusts in the presence and absence of a certain bacterium in order to show that tiny pores found in such rocks can be definitively attributed to microbial presence. Micropores have long been known to exist in certain types of carbonate rocks that built up in the oceans millions of years ago, but researchers have never been able to say much more than that the pores were likely caused by microbes.

The new results show that there is a definite link between microbes and micropores. In the experiment, Bosak and her colleagues grew a bacterium known as Desulfovibrio desulfuricans in a supply of nutrients, calcium, and bicarbonate that built up just like a carbonate deposit in the ancient oceans. The mix that contained the bacteria tended to form rock with micropores in recognizable patterns, while the "sterile" mix did not.

"Ours is a very reductionist approach," says Dianne Newman, the Clare Boothe Luce Assistant Professor of Geobiology and Environmental Science and Engineering at Caltech and a coauthor of the paper. "This work shows that you can study a single species to see how it behaves in a controlled environment, and from that draw conclusions that apply to the rock record. The counterpart is to go to nature and infer what's going on in a system you can't control."

"We were primarily interested in directly observing how the microbes disrupt the crystal growth of the carbonate rocks," adds Bosak. In essence, the microbes are large enough to displace a bit of "real estate" with their bodies, resulting in a tiny cavity that is left behind in the permanent record. The micropores in the study tend to be present throughout the crystals, and they not only mirror the shape and size of the bacteria, but also tend to form characteristic swirling patterns. If the micropores had been formed by some kind of nonliving particles, the patterns would likely not be present.

The next step in the research is to run the growth experiments with photosynthetic microbes. The information could help scientists determine which shapes found in certain types of rocks can be used as evidence of early life on Earth. In the future, the information could also be used to study samples from other rocky planets and moons for evidence of primitive life.

Primarily, however, Newman says the technique will be of immediate benefit in studying Earth. "If you really want to look at life billions of years ago, in the Precambrian, you need to study microbial life.

"Even today the diversity of life is predominantly microbial," Newman adds, "so if we expand our perspective of what life is beyond macroscopic organisms, it's clear that microbes have been the dominant life form throughout Earth history."

In addition to Bosak and Newman, the other authors of the paper are Frank Corsetti of USC's department of earth sciences, and Virginia Souza-Egipsy of USC and the Center of Astrobiology in Madrid, Spain.

The paper is titled "Micron-scale porosity as a biosignature in carbonate crusts," and is available online at




Robert Tindol

Chemists at Caltech devise new, simpler way to make carbohydrates

PASADENA, Calif.--Chemists at the California Institute of Technology have succeeded in devising a new method for building carbohydrate molecules in a simple and straightforward way that requires very few steps. The new synthesis strategy should be of benefit to scientists in the areas of chemistry and biology and in the pharmaceutical industry.

In an article published online August 12 by the journal Science on the Science Express Website, Caltech chemistry professor David MacMillan and his graduate student Alan Northrup describe their new method of making carbohydrates in two steps. This is a major improvement over current methods, which can require up to a dozen chemical steps.

"The issue with carbohydrate utilization is that, for the last 100 years, scientists have needed many chemical reactions to differentiate five of the six oxygen atoms present in the carbohydrate structure," explains MacMillan, a specialist in organic synthesis. "We simplified this to two steps by the invention of two new chemical reactions that are based on an old but powerful chemical transformation known as the aldol reaction. Furthermore, we have devised methods to selectively build oxygen differentiated glucose, mannose, or allose in just two chemical steps."

MacMillan has also demonstrated that this new method for carbohydrate synthesis allows easy access to unnatural carbohydrates for use in medicinal chemistry and glycobiology as well as in a number of diagnostic techniques. One application involves a rare form of carbon known as carbon-13, which is easier to identify with magnetism-based analytical methods.

By using the readily available and inexpensive 13C-labeled form of ethylene glycol, MacMillan and Northrup have been able to construct the all-13C-labeled versions of carbohydrates in only four chemical steps. For comparison, the previous total synthesis of this all-13C-labeled carbohydrate was accomplished in 44 chemical steps.

"Carbohydrates are essential to human biology, playing key roles in everything from our growth and development to our immune system and brain functions," says John Schwab, a chemist at the National Institute of General Medical Sciences, which supported the research. "They also play critical roles in plants, bacteria, and viruses, where they have huge implications for human health. But because they are so difficult to work with, carbohydrates are not nearly as well understood as DNA and proteins.

"MacMillan's technique will allow scientists to more easily synthesize and study carbohydrates, paving the way for a deeper understanding of these molecules, which in turn may lead to new classes of drugs and diagnostic tools," Schwab adds.

"One of the central goals of chemical synthesis is to design new ways to build molecules that will greatly benefit other scientific fields and ultimately society as a whole," MacMillan says. "We think that this new chemical sequence will help toward this goal; however, there is a bounty of new chemical reactions that are simply waiting to be discovered that will greatly impact many other areas of research in the biological and physical sciences."

The title of the paper is "Two Step Synthesis of Carbohydrates by Selective Aldol Reactions." The paper will be published in the journal Science at a later date.


Fish, Frog, and Fly Share a Molecular Mechanism to Control Embryonic Growth

PASADENA, Calif. — Oriented cell division is a fundamental process in developing organisms, whether you are a worm, a fruit fly--or a human. As an embryo begins to grow, cells divide again and again, from the single fertilized egg to the countless cells present at birth. These cell divisions are not haphazard; instead, they are often precisely oriented, playing an important role in building an embryo of the right size and shape, and with the right body "parts." The control of cell division also plays a central role in placing cells in the proper positions to build organs that will contain the correct cell types.

The orientation of cell divisions has been well studied in invertebrates, especially in Caenorhabditis elegans (worm) and Drosophila melanogaster (fruit fly), but relatively little has been known about oriented cell division in vertebrates. Now, for the first time, researchers at the California Institute of Technology report that the molecular machinery that underlies oriented cell division in invertebrates serves a similar but twofold purpose in the development of the vertebrate embryo. For one, it is responsible for orienting cell division, or mitosis. For another, it is responsible for the movements that elongate the round egg into the vertebrate body plan; that is, the shape of the particular animal. The research appears in the August 5 edition of the journal Nature.

The researchers are recent graduates Ying Gong '04 and Chunhui Mo '03, working with Scott Fraser, the Anna L. Rosen Professor of Biology and director of the Biological Imaging Center. Using the zebrafish, a card-carrying vertebrate, as their animal model, the researchers first marked certain cells with fluorescent proteins. Then, using a four-dimensional confocal microscope, they were able to follow the motions of these cells in real time, as the body plan of the zebrafish took shape during development, or gastrulation. The researchers found that cells in dorsal tissue divide in an oriented fashion, with one of the two daughter cells from each division moving towards the head, and the other towards the future tail. They were able to determine that such oriented cell division is a major driving force for the extension of the body axis--the growth of the embryo into the animal's final shape.

By combining their advanced imaging tools with molecular biological techniques, the researchers were able to show that the driving force for these oriented divisions is the Wnt "pathway," a ubiquitous cascade of specific proteins that trigger cellular function. Research over the past decade has shown that the Wnt pathway controls the patterns, fates, and movements of cells in both vertebrates and invertebrates. One major branch of this biochemical communication network is the planar cell polarity (PCP) pathway. In previous work from the Fraser lab and their collaborators, the PCP pathway has been shown to guide the tissue motions that convert the spherical frog embryo into the familiar shape of the elongated tadpole. This is a key process in the life of the frog, termed convergent extension. Each cell attempts to "elbow between" the row of cells to its left or its right. "This simple motion has a profound effect on the length and width of the embryo," says Fraser; "think of a band marching shoulder to shoulder on a football field. If half of the rows of marchers merged with the adjacent row, the band would be half as wide and twice as long."

The trio of researchers explored the effects in fish embryos of altering the many proteins in the Wnt-PCP signaling pathway, including some of the potential signals and co-receptors (proteins called Silberblick/Wnt11, Dishevelled, and Strabismus). They were expecting to see an alteration in the convergent-extension motions. Instead, what they found was a major alteration in the orientation of cell division. When they blocked the Wnt pathway, cell division did not take place along the head-tail axis, but randomly. In normal fish embryos, the oriented divisions lengthened the body axis by nearly twofold. With randomization, though, a short and squat embryo was created.

Given that the same PCP pathway is involved in controlling cell division in the invertebrates C. elegans and D. melanogaster and in the vertebrate zebrafish, the results suggest that the pathway has an evolutionarily conserved role. That is, across a wide variety of animal species, such pathways share a common function, perhaps reflecting a common origin in the biological past.

"The amazing thing about these studies is that they show that the many varied mechanisms that can create the long and narrow body plan of a fish, frog, or fly come under a common molecular control mechanism," Fraser says. "Work in frog embryos from John Wallingford (formerly of UC Berkeley, currently at University of Texas, Austin) and Richard Harland (UC Berkeley) have established a link between these motions and neural tube defects (such as craniorachischisis and spina bifida). Our new experiments have already prompted a new round of collaborative experiments to determine if the same molecular pathway controls convergent extension, cell division, or both in mammals. The answers to these questions promise new insights into the underlying cause for some of the devastating birth defects seen in humans. "

MEDIA CONTACT: Mark Wheeler (626) 395-8733

Visit the Caltech media relations web site:


Gamma-ray burst of December 3 was a new type of cosmic explosion

PASADENA, Calif.—Astronomers have identified a new class of cosmic explosions that are more powerful than supernovae but considerably weaker than most gamma-ray bursts. The discovery strongly suggests a continuum between the two previously known classes of explosions.

In this week's issue of Nature, astronomers from the Space Research Institute of the Russian Academy of Sciences and the California Institute of Technology announce in two related papers the discovery of the explosion, which was first detected on December 3, 2003, by the European-Russian Integral satellite and then observed in detail at ground-based radio and optical observatories. The burst, known by its birthdate, GRB031203, appeared in the constellation Puppis and is about 1.6 billion light-years away.

Although the burst was the closest gamma-ray burst to Earth ever studied (all the others have been several billion light-years away), researchers noticed that the explosion was extremely faint--releasing only about one-thousandth of the gamma rays of a typical gamma-ray burst. However, the burst was also much brighter than supernova explosions, which led to the conclusion that a new type of explosion had been found.

Both supernovae and the rare but brilliant gamma-ray bursts are cosmic explosions marking the deaths of massive stars. Astronomers have long wondered what causes the seemingly dramatic differences between these events. The question of how stars die is currently a major focus of stellar research, and is particularly directed toward the energetic explosions that destroy a star in one cataclysmic event.

Stars are powered by the fusion ("burning") of hydrogen in their interiors. When the fuel in the interior is exhausted, the cores of massive stars collapse to compact objects--typically a neutron star and occasionally a black hole. The energy released as a result of the collapse explodes the outer layers, the visible manifestation of which is a supernova. In this process, new elements are added to the inventory of matter in the universe.

However, the energy released by the collapse may be insufficient to power the supernova explosions. One theory is that additional energy is generated by the matter falling onto the newly produced black hole. Many astronomers believe that this is what powers the luminous gamma-ray bursts.

But the connection between such extreme events and the more common supernovae is not yet clear, and if they are indeed closely related, then there should be a continuum of cosmic explosions, ranging in energy from that of "ordinary" supernovae to that of gamma-ray bursts.

In 1998, astronomers discovered an extremely faint gamma-ray burst, GRB 980425, coincident with a nearby luminous supernova. The supernova, SN 1998bw, also showed evidence for an underlying engine, albeit a very weak one. The question that arose was whether the event, GRB 980425/SN 1998bw, was a "freak" explosion or whether it was indicative of a larger population of low-powered cosmic explosions with characteristics in between the cosmological gamma-ray bursts and typical supernovae.

"I knew this was an exciting find because even though this was the nearest gamma-ray burst to date, the gamma-ray energy measured by Integral is one thousand times fainter than typical cosmological gamma-ray bursts," says Sergey Sazonov of the Space Research Institute, the first author of one of the two Nature papers.

The event was studied in further detail by the Chandra X-Ray Observatory and the Very Large Array, a radio telescope facility located in New Mexico.

"I was stunned that my observations from the Very Large Array showed that this event confirmed the existence of a new class of bursts," says graduate student Alicia Soderberg, who is the principal author of the other Nature paper. "It was like hitting the jackpot."

There are several exciting implications of this discovery, including the possible existence of a significant new population of low-luminosity gamma-ray bursts lurking within the nearby universe, said Shrinivas Kulkarni, who is the MacArthur Professor of Astronomy and Planetary Science at Caltech and Soderberg's faculty adviser.

"This is an intriguing discovery," says Kulkarni. "I expect a treasure trove of such events to be identified by NASA's Swift mission scheduled to be launched this fall from Cape Canaveral. I am convinced that further discoveries and studies of this new class of hybrid events will forward our understanding of the death of massive stars."



New Caltech Center Receives $8 Million for Research on New Types of Optical Devices

PASADENA, Calif.—The Defense Advanced Research Projects Agency (DARPA) has awarded an $8 million, four-year, basic-research program grant to the California Institute of Technology to initiate research in photonics technologies. The technical focus of the effort will be on optofluidics, an exciting new research area based on the use of microfluidic devices to control optical processes, and which is expected to result in a new generation of small-scale, highly adaptable, and innovative optical devices.

To conduct the research, Caltech is establishing a new center called the Center for Optofluidic Integration. The center will spearhead efforts directed toward a new class of adaptive optical devices for numerous applications in sensing, switching, and communications.

According to founding director Demetri Psaltis, the DARPA-funded center is ideally located at Caltech because the Institute has a longstanding commitment to interdisciplinary research, faculty interaction, and the creation of new technologies and avenues of knowledge. The center will also draw on the efforts of researchers at other institutions, including Harvard University and UC San Diego.

"The basic idea of the center is to build optical devices for imaging, fiber optics, communications, and other applications, and to transcend the limitations of optical devices made out of traditional materials like glass," explains Psaltis, who is the Myers Professor of Electrical Engineering and an expert in advanced optical devices. "A glass lens, for example, is relatively unchangeable optically. Our idea is to use fluidics as a means of modifying optics."

This can be accomplished, Psaltis says, by taking advantage of recent advances at Caltech, Harvard, and UC San Diego in microfluidics, soft lithography, and nanophotonics. The fusion of these three technologies will be the key to developing components that use nanometer-sized fluidic pathways to mix and pump liquids into and out of the optical path.

Among other advantages, this approach allows for the construction of devices with optical properties that can be altered very quickly. The potential products of this line of research include adaptive graded index optics, dye lasers on silicon chips, nanostructured optical memories, dynamic nonlinear optical devices, reconfigurable optical switches, and ultrasensitive molecular detectors. Optofluidics is expected to have a broad impact on areas such as telecommunications, biophotonics and biomedical engineering, and robot and machine vision.

The new center will function as a catalyst to facilitate the technology fusion process. One of the more noticeable effects of the center on the Caltech campus will be the creation of a microfluidics foundry to create optofluidic technologies. In the foundry, researchers will be able to easily design and rapidly create the microfluidic layers that will control the flow of liquids to these new devices.

According to Psaltis, the initial members of the center's research team all offer significant expertise in areas critical to the design and fabrication of highly integrated optofluidic devices. Others at Caltech include Stephen Quake, the Everhart Professor of Applied Physics and Physics, who has invented a number of microfluidic devices for biomedicine applications; Kerry Vahala, the Jenkins Professor of Information Science and Technology and a professor of applied physics, who is the inventor of optical devices such as high-quality optical microcavities; Axel Scherer, the Neches Professor of Electrical Engineering, Applied Physics, and Physics, who is best known for his work on photonic band gap devices, and who collaborated with Psaltis on the successful development of the first photonic crystal laser tunable by fluid insertion; Changhuei Yang, an assistant professor of electrical engineering and expert in biophotonics; and Oskar Painter, an assistant professor of applied physics with a background in photonic crystal lasers. Researchers at other institutions include George Whitesides, the Woodford L. and Ann A. Flowers University Professor at Harvard, who is a pioneer in soft lithography; Federico Capasso, the Robert L. Wallace Professor of Applied Physics at Harvard, who developed quantum cascade lasers; and Shaya Fainman, a professor of electrical and computer engineering at UC San Diego, whose expertise is in nanophotonics.

Robert Tindol

New Class of Reagents Developed by Caltech Chemical Biologists for In Vivo Protein Tracking

PASADENA, Calif.--One of the big problems in biology is keeping track of the proteins a cell makes, without having to kill the cell. Now, researchers from the California Institute of Technology have developed a general approach that measures protein production in living cells.

Reporting in the July 26 issue of the journal Chemistry and Biology, Caltech chemistry professor Richard Roberts and his collaborators describe their new method for examining "protein expression in vivo that does not require transfection, radiolabeling, or the prior choice of a candidate gene." According to Roberts, this work should have great impact on both cell biology and the new field of proteomics, which is the study of all the proteins that act in living systems.

"This work is a result of chemical biology—chemists, and biologists working together to gain new insights into a huge variety of applications, including cancer research and drug discovery," says Roberts.

"Generally, there is a lack of methods to determine if proteins are made in response to some cellular stimuli and what those specific proteins are," Roberts says. "These are two absolutely critical questions, because the behavior of a living cell is due to the cast of protein characters that the cell makes."

Facing this problem, the Roberts team tried to envision new methods that would enable them to decipher both how much and what particular protein a cell chooses to make at any given time. They devised a plan to trick the normal cellular machinery into labeling each newly made protein with a fluorescent tag.

The result is that cells actively making protein glow brightly on a microscope slide, much like a luminescent Frisbee on a dark summer night. Importantly, these tools can also be used to determine which particular protein is being made, in much the same way that a bar code identifies items at a supermarket checkout stand.

To demonstrate this method, the team used mouse white blood cells that are very similar to cells in the human immune system. These cells could be tagged to glow various colors, and the tagged proteins later separated for identification.

Over the next decade, scientists hope to better understand the 30,000 to 40,000 different proteins inside human cells. The authors say they are hopeful that this new approach will provide critical information for achieving that goal.

The title of the paper is "A General Approach to Detect Protein Expression In Vivo Using Fluorescent Puromycin Conjugates." For more information, contact Heidi Hardman at

Robert Tindol

San Andreas Earthquakes Have Almost Always Been Big Ones, Paleoseismologists Discover

PASADENA, Calif.—A common-sense notion among many Californians is that frequent small earthquakes allow a fault to slowly relieve accumulating strain, thereby making large earthquakes less likely. New research suggests that this is not the case for a long stretch of the San Andreas fault in Southern California.

In a study appearing in the current issue of the journal Geology, researchers report that about 95 percent of the slippage at a site on the San Andreas fault northwest of Los Angeles occurs in big earthquakes. By literally digging into the fault to look for information about earthquakes of the past couple of millennia, the researchers have found that most of the motion along this stretch of the San Andreas fault occurs during rare but large earthquakes.

"So much for any notion that the section of the San Andreas nearest Los Angeles might relieve its stored strains by a flurry of hundreds of small earthquakes!" said Kerry Sieh, a geology professor at the California Institute of Technology and one of the authors of the paper.

Sieh pioneered the field of paleoseismology years ago as a means of understanding past large earthquakes. His former student, Jing Liu, now a postdoctoral fellow in Paris, is the senior author of the paper.

In this particular study, Liu, Sieh, and their colleagues cut trenches parallel and perpendicular to the San Andreas fault at a site 200 kilometers (120 miles) northwest of Los Angeles, between Bakersfield and the coast. The trenches allowed them to follow the subsurface paths of small gullies buried by sediment over many hundreds of years. They found that the fault had offset the youngest channel by nearly 8 meters, and related this to the great (M 7.9) earthquake of 1857. Older gullies were offset progressively more by the fault, up to 36 meters. By subtracting each younger offset from the next older one, the geologists were able to recover the amount of slip in each of the past six earthquakes.
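The subtraction logic described above can be made concrete with a short sketch. Only the youngest offset (nearly 8 meters, from the 1857 earthquake) and the oldest (36 meters) come from the article; the intermediate cumulative offsets below are hypothetical, chosen only to show how successive differences recover per-event slip.

```python
# Hypothetical cumulative offsets (meters), youngest gully to oldest. Each
# older gully has been carried through its own earthquake plus every younger
# one, so its offset is a running total.
cumulative = [8.0, 15.8, 17.2, 22.4, 30.0, 36.0]

# Subtracting each younger cumulative offset from the next older one isolates
# the slip contributed by a single earthquake (the youngest offset is itself
# the slip of the most recent event).
slips = [round(older - younger, 1)
         for younger, older in zip([0.0] + cumulative, cumulative)]

print(slips)  # -> [8.0, 7.8, 1.4, 5.2, 7.6, 6.0]
```

The per-event slips necessarily sum back to the oldest gully's total offset, which is what lets the geologists partition a thousand years of cumulative motion among individual earthquakes.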

Of the six offsets discovered in the excavations, three and perhaps four were offsets of 7.5 to 8 meters, similar in size to the offset during the great earthquake of 1857. The third and fourth events, however, were slips of just 1.4 and 5.2 meters. Offsets of several meters are common when the rupture length is very long and the earthquake is very large. For example, the earthquake of 1857 had a rupture length of about 360 kilometers (225 miles), extending from near Parkfield to Cajon Pass. So, the five events that created offsets measuring between 5.2 and 8 meters likely represent earthquakes that had very long ruptures and magnitudes ranging from 7.5 to 8. Taken together, these five major ruptures of this portion of the San Andreas fault account for 95 percent of all the slippage that occurred there over the past thousand years or so.

The practical significance of the study is that earthquakes along the San Andreas, though infrequent, tend to be very large. Years ago, paleoseismic research showed that along the section of the fault nearest Los Angeles the average period between large earthquakes is just 130 years. Ominously, 147 years have already passed since the latest large rupture, in 1857.

The other authors of the paper are Charles Rubin, of the department of geological sciences at Central Washington University in Ellensburg, and Yann Klinger, of the Institut de Physique du Globe de Paris, France. Additional information about the site, including a virtual field trip, can be found at

Robert Tindol

Neuroscientists Demonstrate New Way to Control Prosthetic Device with Brain Signals

PASADENA, Calif.—Another milestone has been achieved in the quest to create prosthetic devices operated by brain activity. In the July 9 issue of the journal Science, California Institute of Technology neuroscientists Sam Musallam, Brian Corneil, Bradley Greger, Hans Scherberger, and Richard Andersen report on the Andersen lab's success in getting monkeys to move the cursor on a computer screen by merely thinking about a goal they would like to achieve, and assigning a value to the goal.

The research holds significant promise for neural prosthetic devices, Andersen says, because the "goal signals" from the brain will permit paralyzed patients to operate computers, robots, motorized wheelchairs—and perhaps someday even automobiles. The "value signals" complement the goal signals by allowing the paralyzed patients' preferences and motivations to be monitored continuously.

According to Musallam, the work is exciting "because it shows that a variety of thoughts can be recorded and used to control an interface between the brain and a machine."

The Andersen lab's new approach departs from earlier work on the neural control of prosthetic devices, in which most previous results relied on signals from the motor cortex, the brain region that directly controls limb movement. Andersen says the new study demonstrates that higher-level signals, also referred to as cognitive signals, emanating from the posterior parietal cortex and the high-level premotor cortex (both involved in higher brain functions related to movement planning), can be decoded for control of prosthetic devices.

The study involved three monkeys that were each trained to operate a computer cursor by merely "thinking about it," Andersen explains. "We have him think about positioning a cursor at a particular goal location on a computer screen, and then decode his thoughts. He thinks about reaching there, but doesn't actually reach, and if he thinks about it accurately, he's rewarded."
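To illustrate what "decoding" an intended goal from brain activity can mean in the simplest terms, here is a minimal sketch of a nearest-mean-template classifier over firing rates. This is not the Andersen lab's actual algorithm; the neurons, firing rates, and goal labels below are invented for illustration.

```python
# Hypothetical decoder: average each neuron's firing rate over training trials
# for every goal location, then assign a new trial to the goal whose mean
# template is closest in Euclidean distance.
import math

def build_templates(training_trials):
    """training_trials: {goal: [firing-rate vectors]} -> {goal: mean vector}."""
    templates = {}
    for goal, trials in training_trials.items():
        n = len(trials)
        templates[goal] = [sum(v[i] for v in trials) / n
                           for i in range(len(trials[0]))]
    return templates

def decode(rates, templates):
    """Return the goal whose template is nearest to the observed rates."""
    return min(templates, key=lambda g: math.dist(rates, templates[g]))

# Toy data: 3 neurons, two goal locations, made-up rates in spikes/s.
training = {
    "left":  [[20.0, 5.0, 8.0], [22.0, 4.0, 9.0]],
    "right": [[6.0, 18.0, 7.0], [5.0, 21.0, 8.0]],
}
templates = build_templates(training)
print(decode([19.0, 6.0, 8.5], templates))  # -> left
```

Real neural decoders are far more sophisticated, but the principle is the same: learn a mapping from recorded activity patterns to intended goals, then classify new activity against it.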

Combined with the goal task, the monkey is also told what reward to expect for correctly performing the task. Examples of variation in the reward are the type of juice, the size of the reward, and how often it can be given, Andersen says. The researchers are able to predict what each monkey expects to get if he thinks about the task in the correct way. The monkey's expectation of the value of the reward provides a signal that can be employed in the control of neural prosthetics.

This type of signal processing may have great value in the operation of prosthetic devices because, once the patient's goals are decoded, then the devices' computational system can perform the lower-level calculations needed to run the devices. In other words, a "smart robot" that was provided a goal signal from the brain of a patient could use this signal to trigger the calculation of trajectory signals for movement to be accomplished.

Since the brain signals are high-level and abstract, they are versatile and can be used to operate a number of devices. As for the value signals, Andersen says these might be useful in the continuous monitoring of the patients to know their preferences and moods much more effectively than currently possible.

"These signals could also be rapidly adjusted by changing parameters of the task to expedite the learning that patients must do in order to use an external device," Andersen says. "The result suggests that a large variety of cognitive signals could be interpreted, which could lead, for instance, to voice devices that operate by the patients' merely thinking about the words they want to speak."

Andersen is the Boswell Professor of Neuroscience at Caltech. Musallam and Greger are both postdoctoral fellows in biology at Caltech; Corneil is a former researcher in Andersen's lab who is now at the University of Western Ontario; and Scherberger, a former Caltech researcher, is now at the Institute of Neuroinformatics in Zurich, Switzerland.

Robert Tindol

"Minis" Have Mega Impact in the Brain

Embargoed: Not for Release Until 11:00 a.m. PDT Thursday, 24 June, 2004

PASADENA, Calif. — The brain is a maddeningly complex organ for scientists to understand. No assumption can remain unchallenged, no given taken as a given.

Take "minis," for example. That is, miniature excitatory synaptic events. The location where neurons communicate with each other is the synapse, the tiny gap between the ends of nerve fibers. That's where one nerve cell signals another by secreting special chemicals called neurotransmitters, which jump the gap. The synapse, and its ability to strengthen and wane, is thought to be at the heart of learning and memory. Minis, single tiny packets of neurotransmitter, were long thought to have no biological significance: nothing more than "noise," background chatter that played no role in the formation of a memory. Minis, it was thought, could be safely ignored.

Maybe not, says Mike Sutton, a postdoctoral scholar in the lab of Erin Schuman, an associate professor of biology at the California Institute of Technology, and an associate investigator for the Howard Hughes Medical Institute. Sutton, Schuman, and colleagues Nicholas Wall and Girish Aakalu report that on the contrary, minis may play an important role in regulating protein synthesis in the brain. Further, their work suggests the brain is a much more sensitive organ than originally perceived, sensitive to the tiniest of chemical signals. Their report appears in the June 25th issue of the journal Science.

Originally, Sutton et al. weren't looking at minis at all, but at protein synthesis, the process through which cells assemble amino acids into proteins according to the genetic information contained within that cell's DNA. Proteins are the body's workhorses, and are required for the structure, function, and regulation of cells, tissues, and organs. Every protein has a unique function.

A neuron is composed of treelike branches that extend from the cell body. Numerous branches called dendrites contain numerous synapses that receive signals, while another single branch called an axon passes the signal on to another cell.

The original rationale behind the experiment was to examine how changes in synaptic activity regulate protein synthesis in a dendrite, says Sutton. His first experiment was designed to remove all types of activity from a cell, so that he could then add it back incrementally and observe how this affected protein synthesis in dendrites. "So we were going on the assumption that the spontaneous glutamate release--the minis--would have no impact, but we wanted to formally rule this out," he says.

Using several different drugs, Sutton first blocked any so-called action potentials, an electrical signal in the sending cell that causes the release of the neurotransmitter glutamate. Normally, a cell receives hundreds of signals each second. When action potentials are blocked, it receives only minis that arrive at about one signal each second. Next he blocked both the action potential and the release of any minis. "To our surprise, the presence or absence of minis had a very large impact on protein synthesis in dendrites," he says. It turned out that the minis inhibit protein synthesis, which increased when the minis were blocked. Further, says Sutton, "it appears the changes in synaptic activity that are needed to alter protein synthesis in dendrites are extremely small--a single package of glutamate is sufficient."

Sutton notes that it is widely accepted that synaptic transmission involves the release of glutamate packets. That is, an individual packet (called a vesicle) represents the elemental unit of synaptic communication. "This is known as the 'quantal' nature of synaptic transmission," he says, "and each packet is referred to as a quantum. The study demonstrates, then, the surprising point that protein synthesis in dendrites is extremely sensitive to changes in synaptic activity even when those changes represent a single neurotransmitter quantum.

"Because it's so sensitive," says Sutton, "there is the possibility that minis provide information about the characteristics of a given synapse (for example, is the signal big or small?), and that the postsynaptic or receiving cell might use this information to change the composition of that synapse. And it does this by changing the complement of proteins that are locally synthesized."

The ability to rapidly make more or fewer proteins at a synaptic site allows for quick changes in synaptic strength. Ultimately, he says, this ability may underlie long-term memory storage.

"It's amazing to us that these signals, long regarded by many as synaptic 'noise,' have such a dramatic impact on protein synthesis," says Schuman. "We're excited by the possibility that minis can change the local synaptic landscape. Figuring out the nature of the intracellular 'sensor' for these tiny events is now the big question."


Unexpected Changes in Earth's Climate Observed on the Dark Side of the Moon

PASADENA, Calif.—Scientists who monitor Earth's reflectance by measuring the moon's "earthshine" have observed unexpectedly large climate fluctuations during the past two decades. By combining eight years of earthshine data with nearly twenty years of partially overlapping satellite cloud data, they have found a gradual decline in Earth's reflectance that became sharper in the last part of the 1990s, perhaps associated with the accelerated global warming in recent years. Surprisingly, the declining reflectance reversed completely in the past three years. Such changes, which are not understood, seem to be a natural variability of Earth's clouds.

The May 28, 2004, issue of the journal Science examines the phenomenon in an article, "Changes in Earth's Reflectance Over the Past Two Decades," written by Enric Palle, Philip R. Goode, Pilar Montañes Rodríguez, and Steven E. Koonin. Goode is distinguished professor of physics at the New Jersey Institute of Technology (NJIT), Palle and Montañes Rodríguez are postdoctoral associates at that institution, and Koonin is professor of theoretical physics at the California Institute of Technology. The observations were conducted at the Big Bear Solar Observatory (BBSO) in California, which NJIT has operated since 1997 with Goode as its director. The National Aeronautics and Space Administration funded these observations.

The team has revived and modernized an old method of determining Earth's reflectance, or albedo, by observing earthshine, sunlight reflected by the Earth that can be seen as a ghostly glow on the moon's "dark side"—the portion of the lunar disk not lit by the sun. As Koonin realized some 14 years ago, such observations can be a powerful tool for long-term climate monitoring. "The cloudier the Earth, the brighter the earthshine, and changing cloud cover is an important element of changing climate," he said.

Precision earthshine observations to determine global reflectivity have been under way at BBSO since 1994, with regular observations commencing in late 1997.

"Using a phenomenon first explained by Leonardo da Vinci, we can precisely measure global climate change and find a surprising story of clouds. Our method has the advantage of being very precise because the bright lunar crescent serves as a standard against which to monitor earthshine, and light reflected by large portions of Earth can be observed simultaneously," said Goode. "It is also inexpensive, requiring only a small telescope and a relatively simple electronic detector."
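The idea of using the crescent as a standard can be made concrete with a toy calculation. This is only an illustrative sketch (the geometric and atmospheric corrections the BBSO team applies are omitted, and the detector counts are hypothetical): because the bright crescent is lit directly by the sun, the ratio of earthshine brightness to crescent brightness tracks changes in Earth's reflectance from night to night.

```python
# Minimal sketch of the earthshine measurement idea: the sunlit crescent
# acts as a photometric reference, so the dark-side/crescent brightness
# ratio is a proxy for Earth's apparent albedo.

def relative_albedo(earthshine_counts, crescent_counts):
    """Earthshine-to-crescent brightness ratio (uncorrected proxy)."""
    return earthshine_counts / crescent_counts

# Hypothetical detector readings on two different nights:
night1 = relative_albedo(12.0, 10000.0)
night2 = relative_albedo(13.2, 10000.0)
change = (night2 - night1) / night1
print(f"fractional change in apparent albedo: {change:+.0%}")  # +10%
```

Taking the ratio is what makes the method robust: fluctuations in the telescope, detector, or sky transparency affect both measurements alike and largely cancel out.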

By using a combination of earthshine observations and satellite data on cloud cover, the earthshine team has determined the following:

- Earth's average albedo is not constant from one year to the next; it also changes over decadal timescales. The computer models currently used to study the climate system do not show such large decadal-scale variability of the albedo.

- The annual average albedo declined very gradually from 1985 to 1995, and then declined sharply in 1995 and 1996. These observed declines are broadly consistent with previously known satellite measures of cloud amount.

- The low albedo during 1997-2001 increased solar heating of the globe at a rate more than twice that expected from a doubling of atmospheric carbon dioxide. This "dimming" of Earth, as it would be seen from space, is perhaps connected with the recent accelerated increase in mean global surface temperatures.

- 2001-2003 saw a reversal of the albedo to pre-1995 values; this "brightening" of the Earth is most likely attributable to the effect of increased cloud cover and thickness.

These large variations, which are comparable to those in the earth's infrared (heat) radiation observed in the tropics by satellites, exert a large influence on Earth's radiation budget.
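A back-of-the-envelope calculation shows why a small albedo change matters so much. The numbers below are standard textbook values, not figures from the paper, and the 0.02 albedo change is a hypothetical round number chosen for illustration: the extra solar power absorbed per square metre for a given drop in albedo is the sphere-averaged insolation times that drop, which can then be compared with the roughly 3.7 W/m² forcing commonly quoted for doubled atmospheric CO2.

```python
# Illustrative albedo-forcing arithmetic (textbook values, not from the
# paper): extra absorbed sunlight = mean insolation * drop in albedo.

SOLAR_CONSTANT = 1361.0                   # W/m^2 at the top of the atmosphere
MEAN_INSOLATION = SOLAR_CONSTANT / 4.0    # averaged over the whole sphere
CO2_DOUBLING_FORCING = 3.7                # W/m^2, standard estimate

def forcing_from_albedo_change(delta_albedo):
    """Extra absorbed solar power per square metre for a drop in albedo."""
    return MEAN_INSOLATION * delta_albedo

# A hypothetical albedo decrease of 0.02 (two percentage points):
forcing = forcing_from_albedo_change(0.02)
print(f"{forcing:.1f} W/m^2, vs {CO2_DOUBLING_FORCING} W/m^2 for doubled CO2")
```

Even this modest, hypothetical change yields a forcing of several watts per square metre, on the same order as doubled CO2, which is why decade-scale albedo variability is a first-order term in Earth's radiation budget.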

"Our results are only part of the story, since the Earth's surface temperature is determined by a balance between sunlight that warms the planet and heat radiated back into space, which cools the planet," said Palle. "This depends upon many factors in addition to albedo, such as the amount of greenhouse gases (water vapor, carbon dioxide, methane) present in the atmosphere. But these new data emphasize that clouds must be properly accounted for and illustrate that we still lack the detailed understanding of our climate system necessary to model future changes with confidence."

Goode says the earthshine observations will continue for the next decade. "These will be important for monitoring ongoing changes in Earth's climate system. It will also be essential to correlate our results with satellite data as they become available, particularly for the most recent years, to form a consistent description of the changing albedo. Earthshine observations through an 11-year solar cycle will also be important to assessing hypothesized influences of solar activity on climate."

Montañes Rodríguez says that to carry out future observations, the team is working to establish a global network of observing stations. "These would allow continuous monitoring of the albedo during much of each lunar month and would also compensate for local weather conditions that sometimes prevent observations from a given site." BBSO observations are currently being supplemented with others from the Crimea in the Ukraine, and there will soon be observations from Yunnan in China, as well. A further improvement will be to fully automate the current manual observations. A prototype robotic telescope is being constructed and the team is seeking funds to construct, calibrate, and deploy a network of eight around the globe.

"Even as the scientific community acknowledges the likelihood of human impacts on climate, it must better document and understand climate changes," said Koonin. "Our ongoing earthshine measurements will be an important part of that process."

Robert Tindol

