Question of the Week: What Causes a Gene To Mutate or Change?

Submitted by Virginia Salazar, Whittier, Calif. and answered by Dr. Paul Sternberg, Professor of Biology, Caltech

In most cases, the sequence of DNA making up a gene is copied accurately when a cell divides. This accurate process ensures that each cell is like its parent cell. DNA consists of a string of DNA bases, the letters in the genetic alphabet.

The bad news is that DNA is under continual attack: by chemicals within the cell that are byproducts of its ordinary workings; by environmental hazards, which include natural plant products as well as human-made chemicals; by radiation; and by the general tendency of things to break down. These attacks result in problems ranging from changes of a single DNA letter to a break in the string.

The good news is that cells counter these continual attacks by correcting essentially all the damage, using a host of beautiful molecular machines. But a mutation occurs when a cell fails to repair damage to its DNA, or repairs it incorrectly. When such a cell divides, it passes on the mutated gene to its progeny. Eggs and sperm, which join to form an embryo, are themselves the product of cell divisions and thus subject to errors in the copying of DNA. These mutations are passed on to our children.

Other cells in our bodies are subject to mutation, and mutant cells can become cancerous. Particularly pernicious are mutations that disrupt the ability of a cell to repair its own DNA. Such mutations are in the genes that are responsible for making the repair machinery. When this occurs, the mutant cell will more easily continue to mutate, a disaster in the making!

Writer: Robert Tindol

Question of the Week: How Often Do Meteors Fall To Earth?

Submitted by Bob and Pat Gaskill, Orange County, and answered by Dr. William Bottke, Texaco Prize Fellow, Division of Geological and Planetary Sciences, Caltech.

Meteors and meteorites are small rocky fragments of other planetary bodies that fall to Earth. When they do so, they often produce spectacular visual and audible effects noticeable from the ground. Meteorites, objects that survive their fiery passage through Earth's atmosphere, are of particular interest to scientists, since they are pieces of planetary bodies (mostly asteroids) from which samples have not yet been obtained through either manned or unmanned space missions. The oldest meteorites are remnants of the very first processes to occur in our solar system 4.6 billion years ago, giving us a glimpse into what conditions were like when Earth was formed.

One common class of meteor is called a "fireball," named for the bright, streaming orbs produced when the surface of a fist-sized or larger body is boiled away by friction as it enters Earth's atmosphere. Fireballs decelerate from speeds of about 60,000 m.p.h. to 200 m.p.h. during this passage, often slowing enough at the end so that they literally drop to the ground. Their flight path is similar to a golf ball thrown at an angle into a swimming pool; once the water stops the forward momentum of the ball, it sinks to the bottom of the pool. The meteor is often not strong enough to survive this passage intact, which can make recovery of the fragments difficult.

Fireballs are mostly seen crossing the sky at night, though some are so bright they can be seen during the day. When a fireball is seen, it is usually several miles high. Any meteoritic pieces that survive to reach the ground would probably land over 500 miles from the observer. If enough people see the fireball from separate locations, however, scientists may be able to calculate where the fragments should strike Earth.

Studies indicate that about 25 meteorites weighing more than a fifth of a pound fall on California (or an area of equal size) each year. Three or four of these samples weigh about two pounds and are the size of your fist. Using these values, we can estimate that between 300 and 400 of these larger meteorites have fallen on California since the turn of the century. Most of these rocks, though, have not been found, leaving open the possibility that you yourself may discover one someday.
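The arithmetic behind that estimate is simple enough to sketch. A minimal illustration in Python, using the per-year rates quoted above and assuming a span of roughly 100 years since the turn of the century:

```python
# Back-of-the-envelope estimate of larger meteorite falls on a
# California-sized area, using the rates quoted in the article.
# The ~100-year span is an assumption, not a figure from the text.

fist_sized_per_year_low = 3    # "three or four" two-pound falls per year
fist_sized_per_year_high = 4
years_since_1900 = 100         # "turn of the century", roughly

low = fist_sized_per_year_low * years_since_1900
high = fist_sized_per_year_high * years_since_1900
print(f"Larger meteorites since 1900: between {low} and {high}")
# prints: Larger meteorites since 1900: between 300 and 400
```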

Caltech Astronomers Obtain the Most Detailed Infrared Image of the Environment of an Active Black Hole

TORONTO — Sophisticated imaging techniques applied on the Keck Telescope have uncovered a new structure in a nearby active galaxy.

The image and associated research are being presented today at the semiannual meeting of the American Astronomical Society. Alycia Weinberger, a doctoral student in physics at the California Institute of Technology, and her collaborators have used the computer-intensive technique of speckle imaging and the 10-meter W. M. Keck Telescope atop Mauna Kea, Hawaii, to image the nucleus of NGC 1068.

This galaxy, found in the constellation Cetus at a distance of about 50 million light years, reveals a bright active nucleus at infrared wavelengths. This nucleus has long been thought to harbor a black hole as its central engine and, because it is bright and nearby, has been intensely studied by astrophysicists.

The accompanying false color image shows an elongated structure, which is over 100 light-years across, centered on a bright point-like infrared nucleus. In contrast, the bright disk of the galaxy NGC 1068 is over 30,000 light-years across at visual wavelengths.

Made at a wavelength of 2.2 microns, Weinberger's near-infrared image can reveal structures only 12 light-years across. This is an extremely small distance by galactic standards, only about three times the distance between the Sun and its nearest stellar neighbors. Although taken from a ground-based observatory, this image has resolution as fine as what the Hubble Space Telescope achieves in the visual part of the spectrum. The space telescope does not currently have an infrared camera, but is scheduled to receive one in 1997. The elongated feature discovered by the Caltech group has not been seen in Hubble's optical images.

There are two very interesting aspects of this image: first, the image is elongated, and second, the axis of the emission points in a different direction than previously observed visual emission. The near-infrared light used to make this picture typically traces the distribution of hot dust and cool stars.

However, in NGC 1068, it is very unlikely that there could be dust 100 light-years from the central black hole which would be hot enough to produce the observed emission. Rather, Weinberger says, it is likely that the observed extended near-infrared light is from stars. Furthermore, since it points in a different direction, this newly resolved infrared emission is likely to come from an entirely different source than previously observed visual emission.

It has long been proposed that stellar bars are a way of funneling material to an active nucleus. As gas moves in a non-circular distribution of stars, such as what may be seen in Weinberger's image, it is forced into orbits likely to take it near the central black hole. This provides a continuous mechanism for "feeding" the central engine.

"The significance of this research is that it finds a brand-new feature in this galaxy. And even more, this new feature may provide observational evidence for a theoretically predicted means of channeling material to the black hole on very small scales," Weinberger says. The image is by no means detailed enough to show the in-fall of the matter itself, Weinberger stresses. For this, one would need a resolution of less than a light-year, and there is currently no way to make such finely detailed pictures.

Nonetheless, the quality of this image is unparalleled because it relies on the unique resolving power of Caltech's 10-meter Keck Telescope and the technique of speckle interferometry to remove the distorting effects of Earth's atmosphere. With this technique, a series of very rapid exposures is made of the object, freezing the atmospheric distortions that cause stars to "twinkle." Then the distortions are removed in computer post-processing. As the largest infrared telescope in the world, the Keck Telescope provides the best obtainable resolution.
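The key idea behind speckle post-processing can be illustrated with a toy numerical sketch. This is illustrative only (the real pipeline is far more involved, and every array and name here is invented for the demonstration): a pure image shift, like the jitter the atmosphere introduces, changes only the Fourier phases of each short exposure, so averaging the Fourier power spectra of many frames preserves fine structure that averaging the images themselves would smear out.

```python
import numpy as np

# Toy sketch of the speckle idea: many short exposures, each smeared
# by a random "atmospheric" shift. Averaging the raw frames blurs the
# source; averaging their Fourier power spectra does not, because a
# shift alters only the phases of the Fourier transform.

rng = np.random.default_rng(0)
n, n_frames = 64, 200
true_image = np.zeros((n, n))
true_image[32, 30] = 1.0   # a close "binary" source
true_image[32, 34] = 0.5

avg_image = np.zeros((n, n))
avg_power = np.zeros((n, n))
for _ in range(n_frames):
    dy, dx = rng.integers(-5, 6, size=2)          # random shift per frame
    frame = np.roll(true_image, (dy, dx), axis=(0, 1))
    avg_image += frame / n_frames                 # smeared by the shifts
    avg_power += np.abs(np.fft.fft2(frame)) ** 2 / n_frames

# The averaged power spectrum equals that of the unshifted source.
print(np.allclose(avg_power, np.abs(np.fft.fft2(true_image)) ** 2))
# prints: True
```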

Weinberger is currently completing work on her doctorate. She will continue doing observations to support this research, a part of her thesis. "It will be exciting to look at NGC 1068 with similar resolution in other infrared wavelengths," she says. "The more information we have across the spectrum the more we'll understand about the nature of this extended emission."

Also collaborating in this research are her thesis supervisor, Gerry Neugebauer, and Keith Matthews, both of the Caltech physics department.

Writer: Robert Tindol

Question of the Week: All the Planets Spin West To East, Except One. Why Does It Spin In the Opposite Direction?

Submitted by Michael Dole, Covina, Calif., and answered by Peter Goldreich, Lee A. DuBridge Professor of Astrophysics and Planetary Physics at Caltech.

You're undoubtedly thinking of Venus as the planet that spins east to west. In other words, if you arrived on Venus in the morning, the sun would be in the west and would set in the east. The only catch is that it wouldn't set until about four Earth-months later! That's because a day on Venus lasts for 243 of our Earth-days.

Actually, you should probably add Uranus to your list of planets in retrograde (or "backward") rotation, because it is tipped more than 90 degrees. The day would be a short one, because Uranus completes a rotation on its axis every 17 hours, which is a pretty typical time for all the gas giants. The Uranian year is 84 Earth years. Over that time there are large seasonal variations at the poles as they alternately point toward and away from the sun.

As a rule, the inner planets (the solid ones) have much longer spin periods. Mercury is in a tidal lock with the sun, in a manner similar to the tidal lock that causes the moon to always face Earth; as a result, it completes three rotations for every two trips around the sun, rotating once on its axis about every 59 Earth-days.

Mars has nearly the same spin period as Earth, but the angle between its spin axis and the axis of its orbital angular momentum is predicted to vary chaotically between about 11 and 44 degrees on a time scale of millions of years. This is due to the gravity of the sun and other planets. So if you went to Mars now, the sun would rise in the east-southeast if you landed at a Southern California latitude during the summer. But if you wait a few million years, the planet might be so tilted that the sun would come up a few degrees north of east each morning while you were at that same latitude at the same time of year.

To get back to your question, nobody knows why the planets have the spins they have. It's plausible that the spin rates date back to the formation stage of the solar system, which began about 4.6 billion years ago and lasted about half a billion years. Because fairly big bodies were being gobbled up by the planets that we observe today, the inclinations of the axes as well as the spin rates are probably relics of these collisions.

Probably, both Venus and Uranus originally rotated from west to east, just like the other seven planets. Perhaps the collisions of other bodies with these two planets flipped them over permanently. In the case of Venus, the tidal effect of the sun's gravity also undoubtedly had a profound effect.

Writer: Robert Tindol

Question of the Week: Why Is the Night Dark and Not As Light As the Day?

Submitted by Jim Early, Orange County, and answered by Dr. Roger Blandford, Richard Chace Tolman Professor of Theoretical Astrophysics and Executive Officer for Astronomy; and David Hogg, Caltech graduate student in physics.

A similar question was asked by Shawn McCord, age 8, of Covina.

This is one of the oldest and most fundamental observations in cosmology, known as "Olbers' paradox." After all, if the universe is infinite and filled with an infinite number of stars, shouldn't every line of sight from Earth hit the surface of a star somewhere? Everywhere you look, you ought to be looking at something as bright as the surface of the Sun.

The sky is dark because the universe is of finite age, born roughly twelve billion years ago in the Big Bang. Because light travels at a finite speed, the part of the universe we can observe is not infinite. In fact, the radius of the visible universe is given by the distance light can travel in twelve billion years: twelve billion light years. Not every line of sight hits the surface of a star; in fact most get to the edge of the visible universe without encountering anything at all. So the night sky is dark, to human eyes.
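The distance figure follows directly from the definition of a light-year. A quick check in Python (the physical constants here are standard values, not figures from the article):

```python
# Distance light travels in 12 billion years, expressed in kilometers.
# By definition the answer is 12 billion light-years; this just converts.

c_km_per_s = 299_792.458     # speed of light, a standard value
seconds_per_year = 3.156e7   # about 365.25 days
age_years = 12e9             # age of the universe used in the text

one_light_year_km = c_km_per_s * seconds_per_year
radius_km = one_light_year_km * age_years
print(f"Radius of the visible universe: {radius_km:.2e} km")
# about 1.14e23 km
```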

On the other hand, although the night sky looks dark to us, it is actually very bright with microwaves which make up the cosmic background radiation, the relic light from the Big Bang. For the first three hundred thousand years after the Big Bang, the universe was so hot and dense that it was opaque and glowed like a star. Because light travels at a finite speed, objects observed at a very great distance are also being seen as they appeared a long time ago. So the rays of light which come from the edge of the visible universe were emitted when the universe transitioned from its opaque to its transparent state. If the universe were not expanding, this "surface" in time would glow bright like the surface of a star.

However, the universe is expanding, so the light waves are stretched to longer and longer wavelengths as they travel, and are now stretched into microwaves. To an astronomer with a radio telescope that is capable of detecting microwaves, the night sky does indeed appear as bright as the surface of a star—not because the visible universe is infinite and filled with stars, but because, early on, the universe itself shone bright like an immense star.

Question of the Week: Could There Possibly Be New Elements In the Universe That Haven't Been Detected?

Submitted by Rick Conner, Laguna Niguel, and answered by Donald Burnett, Professor of Geochemistry, Caltech.

The answer is yes.

Elements are numbered according to the number of protons they contain. For example, hydrogen, the first element on the periodic table, has one proton. Oxygen has eight, iron has 26, and gold has 79. Uranium, with 92 protons, is the heaviest element that has been detected elsewhere in the universe by astronomers.

The current periodic table contains about 107 elements, but the ones heavier than uranium have been detected only after being artificially produced in the laboratory. Elements 100 through 107 are especially unstable and difficult to make. Only a few atoms of each have been produced, and these are radioactive, decaying in a few seconds or less.

Theoretically, however, elements having atomic numbers in the range of 109 through 114 should be comparatively stable. In fact, one or more of these "island of stability" elements could exist for a year, or perhaps even billions of years, before decaying.

What we do know is that, for the elements below 100, nature has produced all possible stable nuclei. All of the elements can be produced in the burning processes of stars, but many nuclei heavier than iron—and uranium in particular—are made by spectacular processes such as supernova explosions. And since every single element through uranium can be found both on Earth and elsewhere in the universe, the question is whether nature has filled in the elements between 109 and 114 as well.

Many searches have been made, but natural superheavy elements have not been found. This may mean that the lifetimes of the superheavy elements are relatively short, or that concentrations of superheavy atoms are so small that they were missed, or that there is no way to synthesize superheavy elements in stars.

So the final answer to your question will be left for advanced science of the 21st century.

Question of the Week: Why Can't We Manufacture Ozone To Be Released Where Needed In the Atmosphere?

Submitted by Ann Marchillo, Glendora, Calif., and answered by Matt Fraser and Patrick Chuang, graduate students in environmental engineering at Caltech.

Ozone is a molecule containing three atoms of oxygen, and is known by the chemical formula "O3." The stuff we breathe is "O2," which contains two atoms of oxygen. The "ozone hole" is a decrease in the amount of ultraviolet-absorbing ozone in Earth's stratosphere, which begins about 10 miles above ground level.

It is helpful to think of the stratosphere as a water tank with a faucet in the bottom. The water level corresponds to the amount of ozone in the stratosphere. Ozone is continuously being created as long as the sun is shining; in our analogy, while the sun is up, someone is pouring water into the tank. However, ozone is easily destroyed by chemical reactions, represented by water flowing out the faucet. At any one time, the amount of ozone in the stratosphere is regulated by the balance between its generation and destruction (the water pouring into the tank and the water draining from it). The ozone hole is mainly due to chlorofluorocarbons, or CFCs, emitted into the atmosphere; they cause ozone to be destroyed more quickly (opening the faucet wider), thereby causing ozone levels to decrease.

However, unlike a hole in the ground, the ozone hole cannot simply be filled in once to solve the problem. To fix it, we would need to pour ozone into the stratosphere continuously. This is not a reasonable solution, primarily because the amount of energy needed is overwhelming: it would require over 500 billion watts of power to continuously replace the ozone that CFCs destroy. For comparison, Hoover Dam generates about a billion watts. So our "solution" would require 500 Hoover Dams just to pump the ozone into the stratosphere!

When you think about the pollution from 500 large power plants, such a solution might be worse than the original problem.
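The power comparison behind that figure reduces to a single division. A minimal sketch using the round numbers quoted above:

```python
# Power needed to replenish stratospheric ozone vs. Hoover Dam output,
# using the round figures from the article.

ozone_replacement_watts = 500e9   # about 500 billion watts needed
hoover_dam_watts = 1e9            # Hoover Dam: about a billion watts

dams_needed = ozone_replacement_watts / hoover_dam_watts
print(f"Equivalent Hoover Dams: {dams_needed:.0f}")
# prints: Equivalent Hoover Dams: 500
```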

Writer: Robert Tindol

Caltech Geophysicist Offers Evidence For New View of Earth's Inner Workings

SAN FRANCISCO—In two closely related presentations today at the annual American Geophysical Union conference, Caltech geophysicist Don Anderson will describe work suggesting a radical new interpretation of how Earth operates inside. The work is based on recently declassified satellite imagery as well as a revisiting of the issue of primordial helium (the 3He isotope) within Earth.

"It's becoming more and more clear that Earth is driven from above by motion of the lithosphere (the cold outer shell of Earth) and cold 'fingers' sticking down under continents into the mantle," Anderson said in an interview prior to the conference. "So rather than Earth being like a pot on a stove that gets heated from below and boils, it's more like a glass of iced tea where ice cubes cause cold downwellings in the liquid beneath it from thermal convection, and cause cracking in the 'lid' that permits volcanism."

Anderson's presentation of two lectures on the topic is appropriate because he has been working at the problem for the last eight months from two directions. The first, the satellite imagery evidence, is based on highly accurate global satellite gravity data compiled by David Sandwell of the Scripps Institution of Oceanography and Walter Smith of the National Oceanic and Atmospheric Administration.

These maps show that many hot spot tracks (chains of volcanoes) exist along preexisting cracks on the plate. Others exist where new cracks are forming as the oceanic plate bends to go under other plates—for example, at the Chile Trench and near Samoa.

According to Anderson's analysis of the maps, the evidence is compelling that there are five regions in the Pacific Ocean where hot spots of underwater volcanic activity can be associated with new fractures in Earth's crust. Two of the hot spots are located a few hundred miles to the west of Chile, the third is near Samoa, the fourth is on the Easter Microplate near Easter Island, and the fifth is near the Galapagos Islands. Many other "hot spot tracks" are along ancient fractures.

All of the hot spots were previously thought to be random outcroppings, Anderson says. In the old interpretation, these hot spots are caused by the molten mantle burning through the cooler mantle and the crust from a boundary near the core.

However, the nature of the data supplied by the satellite imagery allows the fabric of the crust at those points to be inferred. The structure of the seafloor suggests that these hot spots show fracturing of the crust. The conclusion, then, is that bending of the crust and locations of previous crustal boundaries (faults) are creating weak spots in the lithosphere, which in turn make the hot spots inevitable because hot mantle can penetrate and break through the plate. The fault lines are pressure-release valves.

Another helpful analogy Anderson offers is the polar oceans. In the Arctic and Antarctic, the oceans are not driven by heating from below, but by winds blowing across the surface and icebergs cooling parts of the surface. Earth's mantle, like the Arctic ocean, is driven from above, not below. Even though the ultimate source of energy is from primordial heat and radioactivity, the continents and tectonic plates form a surface template that controls the shape of convection and the locations of hot upwellings.

Anderson's other line of reasoning is based on a reinterpretation of the ratio of primordial helium, or 3He, in Earth's mantle to 4He, which is created by the decay of uranium and thorium. Primordial helium, having two protons and one neutron, was created in the early stages of the universe and is conventionally thought to have sat more or less unperturbed within Earth for billions of years. The 4He isotope, by contrast, has two protons and two neutrons, and has been created far more recently within Earth by the radioactive decay of uranium and thorium.

The previous geophysical interpretation was that a high ratio of primordial helium to 4He in basalt was evidence that the lava is an upgushing from the molten magma thousands of kilometers below the surface. Thus, the primordial helium ratio was thought to support the view that a volcanic eruption was solely the result of an upsurge of the lava from the primordial mantle, because the primordial store of helium could be envisioned as existing only at great depths.

It has always been a mystery, Anderson explains, how any part of Earth could have survived the violent impacts during accretion, or planet-building, without melting or vaporizing. That is, it's difficult to see how any part of Earth could have remained "primordial."

However, Anderson's new interpretation suggests that the ratio may not presuppose a relatively high amount of primordial helium, but rather a relatively low amount of 4He. This is in keeping with evidence that Earth's mantle just below the crust, the lithosphere, is low in uranium and thorium. Cracking of the lithospheric plate allows access to helium-rich "bubbles."

The new interpretation suggests that geological processes cause tectonic plates to stretch and break, and that volcanic lava from an eruption utilizes these weak zones rather than being the result of deep narrow hoselike upwellings of magma. The strange chemistry of hot-spot lavas, such as at Hawaii and Iceland, is the result of near-surface contamination (crustal, lithosphere, and ocean), rather than a property of the deepest mantle.

The heat of the magma is involved in the convection processes, Anderson notes, but the old idea of a chute of molten material blowing all the way from the deepest mantle to the upper surface can no longer be supported.

"Volcanoes with high ratios of helium isotopes can be interpreted as new cracks in the lithosphere," Anderson concludes, noting that the helium evidence further supports the "top-down" evidence from the new satellite imagery. "Volcanoes are more like grass growing through cracks in the sidewalk rather than tree roots which break the pavement.

"This turns everything upside-down."

Writer: Robert Tindol

Question of the Week: Why Didn't the Ice Found at the South Pole of the Moon Sublimate Away, Like Ice Cubes Do in a Refrigerator?

Submitted by Dean Bessette, Huntington Beach, Calif., and answered by Dave Stevenson, George Van Osdol Professor of Planetary Science, California Institute of Technology

As everyone with a refrigerator knows, ice cubes tend to shrink over time. And if you have an old-style refrigerator, you may have observed that water molecules go directly from the ice cubes to the walls of the freezer compartment without ever becoming liquid water. This is the process of sublimation, and your question about its relevance to the water found on the moon is a good one.

The answer is that it takes more energy than is available in a crater at the south pole of the moon to make the molecules fly off the ice in significant numbers. The refrigerator analogy is appropriate to a point, but there is far more energy in your typical freezing compartment than in a shadowy area on the moon where the sun never shines. And since there is not much internal heat coming from within the moon, there simply isn't enough energy available to sublimate this very cold ice.

It has been estimated that if you have water ice sitting in a vacuum, the temperature must be about 150 kelvins (minus 123 degrees Celsius) or higher for sublimation to be significant over geological time. And since the temperature in a shadowy crater of the moon could be much lower than this, there simply hasn't been enough time for the ice to sublimate, assuming there was a reasonable amount of ice to begin with. So the ice the space probe Clementine detected could have been there for billions of years. How's that for a refrigerator?
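For reference, the unit conversion behind that temperature figure is a one-liner. A minimal sketch:

```python
# Convert the ~150 K sublimation threshold quoted above to Celsius.

def kelvin_to_celsius(t_k: float) -> float:
    """Temperature in kelvins to degrees Celsius."""
    return t_k - 273.15

threshold_k = 150.0
print(f"{threshold_k:.0f} K = {kelvin_to_celsius(threshold_k):.0f} C")
# prints: 150 K = -123 C
```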

Neuroscientists Single Out Brain Enzyme Essential To Memory and Learning

PASADENA— Researchers have singled out a brain enzyme that seems to be essential in memory retention and learning.

The enzyme is endothelial nitric oxide synthase (eNOS), and is found in microscopic quantities near the synapses, or nerve junctions. In today's issue of Science, California Institute of Technology neuroscientist Erin Schuman, her colleague Norman Davidson, and their six coauthors write that the gas nitric oxide (NO) produced by eNOS has been demonstrated in rat brains to be crucial for "long-term potentiation," which is the enhancement of communication between neurons that may make memory and learning possible.

"This study shows how memory may be stored by changing the way neurons talk to one another," says Schuman, who has worked for years on the role of chemical messengers in learning and memory.

In short, the chemical signals interchanged between neurons during memory formation somehow make future signal transmissions occur more readily. Whatever the precise chemical nature of the exchange, Schuman says that there is a feedback mechanism at the basis of long-term potentiation—a "retrograde messenger" likely to be NO—and that this messenger is what makes learning and long-term memory possible.

Scientists have known for some time that the gas nitric oxide is important in certain physiological processes, says Schuman. Further, her own work in the last couple of years has shown that long-term potentiation can occur even when neurons are not directly connected to one another, presumably because NO is a gas that can diffuse between neurons. Evidence has pointed to nitric oxide as a component in this mechanism despite the fact that rats with a defective gene for manufacturing a closely related form of nitric oxide synthase known as nNOS have no problems with long-term potentiation.

The new study shows that eNOS, however, is crucial in the mediation of signals between neurons. The authors demonstrated this by manipulating a common virus in such a way that it performed like a "Trojan horse." The region of the virus responsible for illness was eliminated, and the gene inserted into the virus was chosen for its action on brain chemistry. The virus infected the neurons and forced the cells to manufacture the protein encoded by the inserted gene.

One viral construct blocked the function of eNOS in the hippocampus of the rodents, while another restored the eNOS function. The end results showed that eNOS is crucial for long-term potentiation.

Schuman says that while there is no immediate application for the finding, the greater molecular understanding of how brain cells change their properties is an important basic result in itself. Moreover, the use of viral vectors in understanding brain chemistry is a new approach, and somewhere down the line it might be considered as a strategy for gene therapy.

"This gives us a good idea of a model for how brain cells change during learning," Schuman says.

Also involved in the work are Caltech neuroscientists David B. Kantor, Markus Lanzrein, Gisela M. Sandoval, W. Bryan Smith, S. Jennifer Stary, and Brian M. Sullivan.

Writer: Robert Tindol
