Neuroscientists locate area of brain responsible for 3-D vision

PASADENA--Researchers have found the brain circuitry that allows us to see the world in three dimensions even when one eye is closed.

In the current issue of the journal Nature, a team of neuroscientists at the California Institute of Technology reports that the middle temporal area (MT) of the brain renders certain visual motion cues into 3-D perceptions. Area MT is a small cortical area in each cerebral hemisphere located approximately an inch or two above the ears in both humans and non-human primates.

"We see the world in three dimensions even though the image in our retina is flat and in two dimensions," says Richard Andersen, who is the James G. Boswell Professor of Neuroscience at Caltech and principal investigator of the study, which was also conducted with postdoctoral fellow David Bradley and graduate student Grace Chang.

"So to derive the third dimension of depth, our nervous system has to process certain visual motion cues."

Andersen says that many people may assume that 3-D vision is explained solely by our having two eyes that provide a stereoscopic view of the world. But stereoscopic vision is fairly new in evolution, he says, while the depth-from-motion process is much more fundamental and primitive.

"In certain contrived situations, you can actually be better off by closing one eye," he says. For example, in a video animation his team created for the research project, an image of a cylinder is constructed entirely with points of light on a black field. When the cylinder is frozen, the viewer normally sees only a rectangular flat plane of light dots. But when the image is rotated, the viewer perceives a three-dimensional object.

"In this case, your stereoscopic vision may tell you the image is flat," he explains. "But area MT still overrides what you see with two eyes and gives you depth."

What actually happens in the brain at such a moment is the processing of the motions the eye perceives on the screen. While the spinning cylinder appears to have three dimensions, it actually comprises a series of dots that move horizontally across the screen at varying speeds. The dots near the edge of the cylinder image move more slowly across the screen, while the dots at the center move more quickly.

The brain picks up these varying speeds as natural motions in the world. The right and left edges of a cylinder naturally seem to move more slowly than the portion of the cylinder directly in front because the edges are moving forward and backward in that reference frame. And while there are no stereoscopic views in this display, the brain can still reconstruct the perception of depth using only the motions of the dots.
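
The geometry behind these motion cues can be sketched in a few lines. A dot on a rotating cylinder of radius R at angle theta projects to screen position x = R·sin(theta) and moves horizontally at omega·R·cos(theta), so its screen speed is fastest at the center of the image and falls to zero at the edges. A minimal illustration (the radius and rotation rate are arbitrary choices):

```python
import math

def projected_speed(x, radius=1.0, omega=1.0):
    """Horizontal screen speed of a dot on a rotating cylinder.

    A dot at angle theta sits at screen position x = R*sin(theta) and
    moves at dx/dt = omega*R*cos(theta): fastest at the center of the
    image (x = 0) and momentarily stationary at the edges (x = +/-R).
    """
    return omega * math.sqrt(max(radius**2 - x**2, 0.0))

for x in [0.0, 0.5, 0.9, 1.0]:
    print(f"x = {x:+.1f}  speed = {projected_speed(x):.3f}")
```

This gradient of dot speeds is the only depth cue in the display, yet it suffices for the brain to reconstruct the cylinder.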

An especially important aspect of the study is the fact that viewers have a bias as to which direction they perceive the image to rotate, which changes spontaneously every few seconds. Because the cylinder is merely a group of dots moving at varying speeds, the image can appear to be rotating either clockwise or counterclockwise (see QuickTime movies below). Both human viewers and rhesus monkeys tend to perceive the cylinder as moving left to right (or counterclockwise), and then with time see it reverse.

"The beauty of this illusion is that the stimulus is always the same, but at different instances in time the perception is completely different," Andersen says.

"If the MT neurons were only coding the direction of motion of the dots in two dimensions, the cells would not change, since the physical stimulus never actually changes," he adds.

"So what we're actually watching is brain activity changing when an interpretation of the three-dimensional world changes," Andersen says.

Andersen says the research is aimed primarily at a fundamental scientific understanding of the biology of perception. However, the research could also eventually impact the treatment of human patients with vision deficits. In the very far future, the research could also perhaps be exploited for a technology leading to artificial vision.

 

smaller QuickTime movie (2.3MB)

 

larger QuickTime movie (5.9MB)

Writer: 
Robert Tindol

Caltech Biologists Pin Down Chain of Reactions That Turn On the Duplication of DNA

PASADENA—Caltech biologists have pinpointed the sequence of reactions that triggers the duplication of DNA in cells.

In companion papers appearing in recent issues of the journals Science and Cell, Assistant Professor of Biology Raymond Deshaies and his colleagues describe the chain of events that leads to the copying of chromosomes in a baker's yeast cell. Baker's yeast is often used as a model for human cells, so the research could have future implications for technology aimed at controlling cell reproduction, such as cancer treatments.

"We've provided a bird's-eye view of how a cell switches on the machinery that copies DNA," says Deshaies. "These principles can now be translated into a better understanding of how human cells proliferate."

The group's research focuses primarily on how cells copy and segregate their chromosomes during the process of duplicating one cell into two. The new papers are concerned with how cells enter the DNA synthesis phase, during which the chromosomes are copied.

A question that cell biologists have sought for years to answer is which precise chemical events set off these reactions. The cell cycle is fundamental to the growth and division of all cells, but the process is somehow ramped down once the organism reaches maturity.

The paper appearing in Science describes how DNA synthesis is turned on. In the preceding stage (known as G1), proteins named G1 cyclins trigger the destruction of an inhibitor that keeps DNA synthesis from beginning.

This inhibitor sequesters an enzyme referred to as S-CDK (for DNA synthesis-promoting cyclin-dependent kinase), thereby blocking its action. Once the S-CDK is released, it switches on DNA synthesis. The S-CDK is present before the copying of DNA begins, but the DNA copying is not turned on until the S-CDK is freed of its inhibitor. The Deshaies group has shown that several phosphates are attached to the S-CDK inhibitor. These phosphates act as a molecular Velcro, sticking the inhibitor to yet another set of proteins called SCF.

The Cell paper essentially picks up the description of the cell cycle at this point. The SCF, which acts like a molecular "hit man," promotes the attachment of another protein, ubiquitin. Ubiquitin in turn attracts the cellular garbage pail, the proteasome. The inhibitor is disposed of in the proteasome, thereby freeing the S-CDK, which goes on to stimulate DNA duplication.

The process described above is quite complicated even in this condensed form, and actually is considerably more complicated in its technical details. But the detailed description that Deshaies and his colleagues have achieved is important fundamental science that could have technological implications in the future, Deshaies says.

"This traces the ignition of DNA synthesis down to a relatively small set of proteins," he says. "Any time you figure out how a part of the cell division machinery works, you can start thinking about devising new strategies to turn it on and off."

It is a precise turning on and off of DNA replication, many researchers think, that will someday be the key to better and more specific cancer-fighting drugs. Because a tumor is a group of cells that literally never stops the cell duplication cycle, a greater understanding of the cycle itself is almost certain to be a factor in further medical advances in cancer treatment.

"It could be five to 10 years, but this work could point the way to new cancer-fighting drugs," Deshaies says. "It is much easier to begin a rational approach to developing new treatments for cancer if you are armed with fundamental insights into how the cellular machinery works."

The other authors on the paper in the October 17 issue of Cell are R. M. Renny Feldman, a Caltech graduate student in biology; Craig C. Correll, a Caltech postdoctoral scholar in biology; and Kenneth B. Kaplan, a postdoctoral researcher at MIT.

The other authors of the Science paper from the October 17 issue are Rati Verma, a senior research fellow at Caltech; Gregory Reynard, a Caltech technician; and R. S. Annan, M. J. Huddleston, and S. A. Carr, all of the Research Mass Spectrometry Laboratory at SmithKline Beecham Pharmaceuticals in King of Prussia, Pennsylvania.

Writer: 
Robert Tindol

Caltech Scientists Devise First Neurochip

NEW ORLEANS—Caltech researchers have invented a "neurochip" that connects a network of living brain cells to electrodes incorporated into a silicon chip.

The neurochips are being unveiled today at the annual meeting of the Society for Neuroscience, which is being held in New Orleans the week of October 25-30. According to Dr. Jerome Pine, one of the five coinventors of the neurochip, the technology is a major step forward for studying the development of neural networks.

The neurons used in the network are harvested from the hippocampus of rat embryos. Once the cells have been separated out by a protein-eating enzyme, each is individually inserted into a well, about half the diameter of a human hair, in the silicon chip. The cell is spherical when it is inserted and is slightly smaller in diameter than the well. When it is set in place and fed nutrients, it grows dendrites and an axon that spread out of the well.

In doing so, each neuron remains close to a single recording and stimulating electrode within the well, and also links up with other dendrites and axons attached to other neurons in other nearby wells.

According to Michael Maher, one of the coinventors, the neurochip currently has room for 16 neurons, which appear to develop normal connections with each other. "When the axons meet dendrites, they make an electrical connection," says Maher, who left Caltech in September to assume a postdoctoral appointment at UC San Diego. "So when one neuron fires, information is transmitted to the next neuron."

The neurochip network will be useful in studying the ways in which neurons maintain and change the strengths of their connections, Maher adds. "It's believed that memory in the brain is stored in the strength of these connections.

"This is pretty much a small brain connected to a computer, so it will be useful in finding out how a neural network develops and what its properties are. It will also be useful for studying chemical reactions at the synapses for weeks at a time. With conventional technology, you can record directly from at most a few neurons for at most a couple of hours."

There are two challenges facing the researchers as they attempt to improve the neurochips. One is providing the set of growth factors and nutrients to keep the cells alive for long periods of time. At present, two weeks is the limit.

The second challenge is finding a way to insert the cells in the silicon wells in a less time-consuming way. At present, the technique is quite labor intensive and requires a highly skilled technician with considerable patience and dexterity.

Other than the sheer effort involved, however, there is no reason that millions of cells could not be linked together at present, Maher says.

The other Caltech coinventors of the neurochip are Hanna Dvorak-Carbone, a graduate student in biology; Yu-Chong Tai, an associate professor of electrical engineering; and Tai's student, John Wright. The latter two are responsible for the silicon fabrication.

Writer: 
Robert Tindol

First Fully Automatic Design of a Protein Achieved by Caltech Scientists

PASADENA—Caltech scientists have found the Holy Grail of protein design. In fact, they've snatched it out of a giant pile of 1.9 x 10^27 other chalices.

In the October 3 issue of the journal Science, Stephen L. Mayo, an Assistant Professor of Biology and a Howard Hughes Medical Institute Assistant Investigator, and chemistry graduate student Bassil I. Dahiyat report on their success in constructing a protein of their choice from scratch.

Researchers for some time have been able to create proteins in the lab by stringing together amino acids, but this has been a very hit-and-miss process because of the vast number of ways that the 20 amino acids found in nature can go together.

The number 1.9 x 10^27, in fact, is the number of slightly different chains that 28 amino acids can form. And because slight differences in the geometry of protein chains are responsible for biological functions, total control over the chain's formation is necessary to create new biological materials of choice.

By using a Silicon Graphics supercomputer to sort through all possible combinations for a selected protein, Mayo and Dahiyat have identified the target protein's best possible amino acid sequence. Then they have managed to take this knowledge and create the protein in the lab with existing technical processes.

This is a first, says Mayo. "Our goal has been to design brand-new proteins that do what we want them to do. This new result is the first major step in that direction. Moreover, it shows that a computer program is the way to go in creating biological materials."

The technique they use, automated protein design, combines experimental synthesis of molecules with supercomputer-powered computational chemistry.

Proteins are the molecular building blocks of all living organisms. Composed of various combinations of the 20 amino acids, protein molecules can each comprise just a few hundred atoms, or literally millions of atoms. Most proteins involved in life processes have at least 100 amino acids, Mayo says.

Mayo and Dahiyat, who have been working on this research for five years, have developed a system that automatically determines the string of amino acids that will fold to most nearly duplicate the 3-D shape of a target structure. The system calculates a sequence's 3-D shape and evaluates how closely this matches the 3-D structure of the target protein.

One problem the researchers face is the sheer number of combinations needed to design a protein of choice. The protein that is the subject of this week's Science paper is a fragment of a fairly inconspicuous molecule involved in gene expression, and as such has only 28 amino acids. Even this small number takes a prodigious amount of computational power. A more desirable protein might involve 100 amino acids, which would yield a staggering 10^130 possible amino acid sequences.
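
As a back-of-the-envelope check of that scaling (the 1.9 x 10^27 figure quoted for the 28-residue design reflects the particular search space of the study rather than the bare count), the number of possible sequences simply grows as 20 raised to the chain length:

```python
import math

def sequence_count(n, alphabet=20):
    """Number of distinct chains of n amino acids, choosing freely
    among the 20 naturally occurring amino acids at each position."""
    return alphabet ** n

# A 100-residue protein: 20^100 is on the order of 10^130.
print(f"10^{math.log10(sequence_count(100)):.1f}")  # → 10^130.1
```

Each added residue multiplies the search space by 20, which is why exhaustive enumeration is hopeless for proteins of realistic size.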

Because this number is larger than the number of atoms in the universe, the researchers have had to find clever computational strategies to circumvent the impossible task of grinding out all the calculations.

In this case, the fastest way to the answer is by working backward. Starting with all the amino acid sequences possible for the protein, the computer program finds arrangements of amino acids that are a bad fit to the target structure. By repeatedly searching for, and eliminating, poorly matching amino acid combinations, the system rapidly converges on the best possible sequence for the target.
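
One well-known elimination strategy in this spirit is dead-end elimination: a candidate residue at a position is discarded whenever some alternative residue scores at least as well against every possible choice at the other positions. The sketch below is a hypothetical toy version; the positions, residue names, and energies are all invented for illustration.

```python
def eliminate(candidates, self_energy, pair_energy):
    """Prune dominated residues, dead-end-elimination style.

    candidates:  {position: set of residue labels}
    self_energy: self_energy[i][r] = score of residue r at position i
    pair_energy: pair_energy[i][r][j][s] = interaction of r at i
                 with s at j (lower is better throughout)
    """
    changed = True
    while changed:
        changed = False
        for i, res_i in candidates.items():
            for r in list(res_i):
                for r2 in res_i:
                    if r2 == r:
                        continue
                    # r is dominated by r2 if r2 beats r even in
                    # r's best case at every other position.
                    gap = self_energy[i][r] - self_energy[i][r2]
                    gap += sum(
                        min(pair_energy[i][r][j][s] - pair_energy[i][r2][j][s]
                            for s in candidates[j])
                        for j in candidates if j != i)
                    if gap > 0:
                        res_i.discard(r)
                        changed = True
                        break
    return candidates

# Tiny example: two positions, two residues; all pair energies zero,
# and residue "B" is strictly worse at position 0.
cands = {0: {"A", "B"}, 1: {"A", "B"}}
self_E = {0: {"A": 0.0, "B": 5.0}, 1: {"A": 0.0, "B": 0.0}}
pair_E = {i: {r: {j: {"A": 0.0, "B": 0.0} for j in cands} for r in ("A", "B")}
          for i in cands}
pruned = eliminate(cands, self_E, pair_E)
print(sorted(pruned[0]))  # → ['A']
```

Repeated passes of this pruning shrink the combinatorial space without ever enumerating it, which is how the search can converge on the best sequence.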

Subsequently, the simulation can be used to find other sequences that are nearly as good a fit as the best one.

This process has been honed by designing sequences for several different proteins, synthesizing them in the laboratory, and testing their actual properties.

With their innovative strategy, Mayo and Dahiyat are now reproducing proteins that are very similar to the target molecules. (The accompanying illustration shows how closely the protein they have formulated matches the target protein.)

But the goal is not just to create the proteins that already exist in nature. The researchers can actually improve on nature in certain circumstances. By making subtle changes in the amino acid sequence of a protein, for example, they are able to make a molecule more stable in harsh chemicals or hot environments (proteins tend to change irreversibly with a bit of heat, as anyone who has cooked an egg can attest).

"Our technology can actually change the proteins so that they behave a lot better," said Dahiyat, who recently finished his Caltech doctorate in chemistry and will now head Xencor, a start-up company established to commercialize the technology. The ability to create new proteins, and to adapt existing proteins to different environments and functions, could have profound implications for a number of emerging fields in biotechnology.

And, of course, it could help further the understanding of living processes.

"Paraphrasing Richard Feynman, if you can build it, you can understand it," says Mayo. "We think we can soon achieve a better understanding of proteins by going into a little dark room and building them to do exactly what we want them to do."

Writer: 
Robert Tindol

Caltech biologist named Beckman Young Investigator

PASADENA--Dr. Raymond Deshaies, a biochemist at the California Institute of Technology, has been named the newest Beckman Young Investigator by the Arnold and Mabel Beckman Foundation of Irvine, Calif. Deshaies will receive $200,000 over the next two years for his work on the mechanisms of cell division control. Much of his work concerns a single-celled organism familiarly known as baker's yeast, which for a variety of technical reasons is an excellent medium for fundamental research.

Dr. Steven E. Koonin, vice president and provost at Caltech, said in endorsing the nomination of Deshaies for the grant that his work is already taking advantage of the recently completed genome sequence for yeast.

 

"Work done in many laboratories over the past several years has demonstrated that there is a remarkable similarity in the regulation of cell division among all eukaryotes--from the humble yeast cell to the far more complicated and varied ensemble of cells that comprise a human," Koonin wrote. "Cell division plays a critical role in normal development, and aberrations in the control of cell division can cause cancer."

 

Deshaies has been an assistant professor of biology at Caltech since January 1994. He earned his doctorate in 1988 at UC Berkeley, and has held postdoctoral appointments at both Berkeley and UC San Francisco.

 

The Arnold and Mabel Beckman Foundation, located in Irvine, Calif., makes grants to nonprofit research institutions to promote research in chemistry and the life sciences. Also, the grants are intended to foster the invention of methods, instruments and materials that will open up new avenues of research in science.

 

The Beckman Young Investigators program is intended to provide research support to the most promising young faculty members in the early stages of academic careers in the chemical and life sciences.


Scientists Find "Good Intentions" in the Brain

PASADENA—Neurobiologists at the California Institute of Technology have succeeded in peeking into one of the many "black boxes" of the primate brain. A study appearing in the March 13 issue of the journal Nature describes an area of the brain where plans for actions are formed.

It has long been known that we gain information through our senses and then respond to our world with actions via body movements. Our brains are organized accordingly, with some sections processing incoming sensory signals such as sights and sounds, and other sections regulating motor outputs such as walking, talking, looking, and reaching. What has puzzled scientists, however, is where in the brain thought is put into action. Presumably there must be an area between the sensory incoming areas and the motor outputting areas that decides or determines what we will do next.

Richard Andersen, James G. Boswell Professor of Neuroscience at Caltech, along with Senior Research Fellow Larry Snyder and graduate student Aaron Batista, chose the posterior parietal cortex as the likely candidate to perform such decisions. This is a high-functioning cognitive area and is the endpoint of what scientists call the visual "where" pathway. Lesions to the parietal cortex of humans result in loss of the ability to appreciate spatial relationships and to navigate accurately.

As Michael Shadlen of the University of Washington says in the Nature "News and Views" commentary on the latest findings, "Nowhere in the brain is the connection between body and mind so conspicuous as in the parietal lobes—damage to the parietal cortex disrupts awareness of one's body and the space that it inhabits."

It is here, Andersen postulates, that incoming sensory signals overlap with outgoing movement commands, and it is here that decisions and planning occur. Numerous investigations had assumed a sensory map of external space must exist within the parietal cortex, so that certain subsections would be responsible for certain spatial locations of objects such as "up and to the left" or "down and to the right." Previous results from Andersen's own lab, however, had led him to question whether absolute space was the driving feature of the posterior parietal map or whether, instead, the intended movement plan was the determining factor in organizing the area.

In a series of experiments designed so that the scientists could "listen in" on the brain cells of monkeys at work, the animals were taught to watch a signal light and, depending on its color, to either reach to or look at the target. When the signal was green they were to reach and when it was red they were only to look at the target. An important additional twist to the study was that the monkeys had to withhold their responses for over a second.

The scientists measured neural activity during this delay when the monkeys had planned the movement but not yet made it. What they found was that different cells within different regions of the posterior parietal cortex became active, depending not so much on where the objects were but rather on which movements were required to obtain them. It seems then that the same visual input activates different subareas depending on how the animal plans to respond.

According to Andersen, this result shows that the pathway through the visual cortex that tells us where things are, ends in a map of intention rather than a map of sensory space as had been previously thought. According to Shadlen these results are intriguing because they indicate that "for the brain, spatial location is not a mathematical abstraction or property of a (sensory) map, but involves the issue of how the body navigates its hand or gaze." Andersen feels the study is important because it demonstrates that "our thoughts are more directly tied to our actions than we had previously imagined, and the posterior parietal cortex appears to be organized more around our intentions than our sensations."

Writer: 
Robert Tindol

Neuroscientists Single Out Brain Enzyme Essential To Memory and Learning

PASADENA— Researchers have singled out a brain enzyme that seems to be essential in memory retention and learning.

The enzyme is endothelial nitric oxide synthase (eNOS), and is found in microscopic quantities near the synapses, or nerve junctions. In today's issue of Science, California Institute of Technology neuroscientist Erin Schuman, her colleague Norman Davidson, and their six coauthors write that the gas nitric oxide (NO) produced by eNOS has been demonstrated in rat brains to be crucial for "long-term potentiation," which is the enhancement of communication between neurons that may make memory and learning possible.

"This study shows how memory may be stored by changing the way neurons talk to one another," says Schuman, who has worked for years on the role of chemical messengers in learning and memory.

In short, the chemical signals interchanged between neurons during memory formation somehow make future signal transmissions occur more readily. Whatever the precise chemical nature of the exchange, Schuman says that there is a feedback mechanism at the basis of long-term potentiation—a "retrograde messenger" likely to be NO—and that this messenger is what makes learning and long-term memory possible.

Scientists have known for some time that the gas nitric oxide is important in certain physiological processes, says Schuman. Further, her own work in the last couple of years has shown that long-term potentiation can occur even when neurons are not directly connected to one another, presumably because NO is a gas that can diffuse between neurons. Evidence has pointed to nitric oxide as a component in this mechanism despite the fact that rats with a defective gene for manufacturing a closely related form of nitric oxide synthase known as nNOS have no problems with long-term potentiation.

The new study shows that eNOS, however, is crucial in the mediation of signals between neurons. The authors demonstrated this by manipulating a common virus in such a way that it performed like a "Trojan horse." The region of the virus responsible for illness was eliminated, and the gene inserted into the virus was chosen for its action on brain chemistry. The virus infected the neurons and forced the cells to manufacture the protein encoded for by the inserted gene.

One viral construct blocked the function of eNOS in the hippocampus of the rodents, while another restored the eNOS function. The end results showed that eNOS is crucial for long-term potentiation.

Schuman says that while there is no immediate application for the finding, the greater molecular understanding of how brain cells change their properties is an important basic result in itself. Moreover, the use of viral vectors in understanding brain chemistry is a new approach, and somewhere down the line might be considered as a strategy for gene therapy.

"This gives us a good idea of a model for how brain cells change during learning," Schuman says.

Also involved in the work are Caltech neuroscientists David B. Kantor, Markus Lanzrein, Gisela M. Sandoval, W. Bryan Smith, S. Jennifer Stary, and Brian M. Sullivan.

Writer: 
Robert Tindol

Neural Research Shows That the Nose Needs Time To Smell

PASADENA— New research from the California Institute of Technology shows that it literally takes some time to smell the roses.

In the current issue of Nature, Caltech neuroscientists Michael Wehr and Gilles Laurent present work demonstrating that information about odors is contained in the temporal activity patterns of groups of neurons over an interval of time.

"Perfumers sometimes speak of 'top notes' and 'medium notes' in a bouquet," says Laurent, associate professor of biology and computation and neural systems. "These refer to early and late perceptions that unfold over time during long sniffs or successive sniffs. Our new research suggests that the brain is actually representing odors by making a neural melody of its own."

A helpful analogy Laurent offers for the research is the musical notes that make up a tune. A listener can perceive one note in an instant, but must listen for a time before he or she can recognize the tune. Therefore, the specific manner in which the notes follow one another is the very thing that gives a song its individual character.

"It is the order in which specific neurons are activated that appears to contain useful information about the identity of the odor," says Laurent, adding that different odors cause different neural "melodies."

Laurent and Wehr, a graduate student in computation and neural systems, did their research by analyzing the brain waves of locusts. When an odor was wafted past the olfactory organ of the locust, the collective response of neurons in the olfactory brain showed that the specificity of the response lay in its temporal characteristics. And because olfactory systems are very similar among most animals, the researchers think that these coding principles may be common to most, including humans.
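
A toy illustration of the idea (the neuron labels and spike counts here are invented, not data from the study): two odors can drive the same neurons by the same total amount, yet remain distinguishable by the order in which the activity unfolds.

```python
# Spike counts for three neurons over four time bins, for two
# hypothetical odors. Per-neuron totals are identical; only the
# temporal order of activation differs.
odor_a = {"n1": [3, 0, 0, 0], "n2": [0, 3, 0, 0], "n3": [0, 0, 3, 0]}
odor_b = {"n1": [0, 0, 3, 0], "n2": [0, 3, 0, 0], "n3": [3, 0, 0, 0]}

def totals(odor):
    """Collapse each neuron's response to a single firing-rate count."""
    return {n: sum(bins) for n, bins in odor.items()}

print(totals(odor_a) == totals(odor_b))  # → True: a pure rate code cannot tell them apart
print(odor_a == odor_b)                  # → False: the temporal pattern can
```

Collapsing the time axis throws away exactly the information that, in Laurent's analogy, distinguishes one neural "melody" from another.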

What happens in the brain during the act of smelling is not well understood, but has been known for a long time to involve neural synchronization and oscillations of the EEG. The function of oscillations, which are observed also in all other sensory areas of the brain, remains totally speculative.

"If our hypothesis is right, oscillations are a kind of clock for the temporal codes we observe," Laurent says. In a parallel study by Laurent and Caltech behavioral biology graduate student Katrina MacLeod in the November 8 issue of Science, the authors described a method by which the neurons representing odors can be simply desynchronized, thereby eliminating the clock signal for the temporal codes. This result will now allow the researchers to directly test, in future experiments, whether these temporal codes are essential for odor perception.

The researchers conclude that animals as primitive as snails and as complex as humans do some mental "data crunching" each time they pick up a smell. This allows the neurons to separate a certain odor from the background, provided a window of time is available.

In a manner of speaking, the research shows that time is of the essence, and vice versa.

Writer: 
Robert Tindol

Caltech Biologists Identify Gene Thought to Initiate Neural Development

PASADENA— Biologists have identified a gene that determines whether a given cell in a human or animal embryo will become a neuron rather than some other kind of cell.

In an article appearing in today's journal Cell, California Institute of Technology Professor of Biology David Anderson and his colleagues announce that the gene encodes neurogenin, a member of the basic-helix-loop-helix (bHLH) family of proteins, which in turn control the activity of other genes. When neurogenin RNA appears in cells of the early embryo, the research shows, a genetic chain reaction begins that turns the cell into a neuron.

According to Anderson, the discovery provides an important piece of information about how embryonic cells develop into cells with specific functions and locales within an organism. Up to a certain stage, all cells in an early embryo look alike in a microscope. But forces are at work that determine whether a specific cell will be a neuron, a muscle cell, a germ cell for sexual reproduction, or any of the other types of cell that make up an organism.

"It's been clear for decades that cells are different in an invisible way long before we are able to see them as being visibly different," says Anderson. "The idea has been that there must be specific genes that confer this invisible predisposition on particular cells."

To demonstrate that neurogenin indeed fulfills such a function, Anderson's coauthor Chris Kintner of the Salk Institute injected tiny amounts of neurogenin RNA in the embryo of a toad. Kintner performed the procedure on the left side of toad embryos at the two-cell stage, so that the effect of the injection could be traced as early in development as possible. The right side was left untouched so that it could serve as a "control."

As each embryo continued to grow by means of cell division, the side that had been flooded with neurogenin RNA became filled with neurons, while the right side developed in a normal manner. This indicated to the researchers that neurogenin is the substance that begins the cascade of genetic steps that turn an undifferentiated cell into a neuron.

Importantly, Anderson, Kintner, and colleague Qiufu Ma (a research fellow in biology at Caltech) showed that, once cells make neurogenin, they inhibit their neighbors from becoming neurons by inhibiting their production of neurogenin. Thus, uncommitted embryonic cells are engaged in a winner-take-all competition to become neurons, with the winner being the cell that makes the highest level of neurogenin.
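
This kind of winner-take-all competition can be sketched with a toy mutual-inhibition model (the growth and inhibition rates below are invented for illustration, not measured values): each cell's level rises on its own but is suppressed in proportion to the other cells' levels, so a small initial advantage gets amplified until one cell dominates and the rest are shut off.

```python
def lateral_inhibition(levels, growth=0.2, inhibition=0.1, steps=200):
    """Toy winner-take-all dynamics; all parameters are invented.

    Each cell's 'neurogenin-like' level grows a little each step but
    is suppressed in proportion to the summed levels of the other
    cells. Levels are clipped to the range [0, 1].
    """
    levels = list(levels)
    for _ in range(steps):
        total = sum(levels)
        levels = [
            max(0.0, min(1.0, x + growth * x - inhibition * (total - x)))
            for x in levels
        ]
    return levels

# Cells start nearly equal; the small head start decides the winner.
print(lateral_inhibition([0.50, 0.51, 0.50]))  # → [0.0, 1.0, 0.0]
```

The instability is the point: any symmetry-breaking difference, however small, is enough to commit one cell and suppress its neighbors.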

The research also showed that other genes suspected of being the initiators of neural development, such as neuroD, actually come into play later after the process has been started by neurogenin. Moreover, the fact that mouse neurogenin RNA can also be successfully used to artificially activate neurogenesis in frog embryos suggests that little difference exists in the gene from species to species.

"The neurogenin (used in the research) is probably about 80 percent identical in its 'business end' to that in humans," Anderson said.

The research builds on earlier work done by J. E. Lee and the late Harold Weintraub at the Fred Hutchinson Cancer Research Center in Seattle, Washington, as well as work done by others on the early development of nerve cells in fruit flies.

Writer: 
Robert Tindol

Paul Sternberg Receives Grant from the Seaver Institute for Molecular Genetics Research

PASADENA—Paul Sternberg, professor of biology at the California Institute of Technology, has been awarded a one year, $100,000 grant from the Seaver Institute in support of his work in molecular genetics.

Sternberg's research identifies and studies genes necessary for normal development. His research has contributed to the understanding of the universal pathway of signaling between animal cells. He hopes this current work will lead him to a better knowledge of basic life processes and the formation of cancer and its progression.

Sternberg earned a BA in biology and mathematics from Hampshire College in 1978 and a PhD in biology from the Massachusetts Institute of Technology in 1984, and joined the Caltech faculty in 1987. In 1989 he became an assistant investigator at the Howard Hughes Medical Institute, advancing to associate investigator in 1992.

The Seaver Institute was established in 1955 by Frank Roger Seaver. Built around a deep respect for individual achievement and an unwavering commitment to excellence, the foundation has always reflected his concern for academic performance, scientific investigation, and cultural expression. Since his death in 1964, the Institute's central aims have remained consistent with this philosophy. The Seaver Institute focuses its giving program on four essential areas: scientific and medical research, education, public affairs, and the cultural arts.
