Caltech Scientists Find Evidence For Massive Ice Age When Earth Was 2.4 Billion Years Old

PASADENA— Those who think the winter of '97 was rough should be relieved that they weren't around 2.2 billion years ago. Scientists have discovered evidence for an ice age at the time that was severe enough to partially freeze over the equator. In today's issue of Nature, California Institute of Technology geologists Dave Evans and Joseph Kirschvink report evidence that glaciers came within a few degrees of latitude of the equator when the planet was about 2.4 billion years old. They base their conclusion on glacial deposits discovered in present-day South Africa, plus magnetic evidence showing where South Africa's crustal plate was located at that time.

Based on that evidence, the Caltech researchers think they have documented the extremely rare "Snowball Earth" phenomenon, in which virtually the entire planet may have been covered in ice and snow. According to Kirschvink, who originally proposed the Snowball Earth theory, there have probably been only two episodes in which glaciation of the planet reached such an extent — one less than a billion years ago during the Neoproterozoic Era, and the one that has now been discovered from the Paleoproterozoic Era 2.2 billion years ago.

"The young Earth didn't catch a cold very often," says Evans, a graduate student in Kirschvink's lab. "But when it did, it seems to have been pretty severe."

The researchers collected their data by drilling rock specimens in South Africa and carefully recording the magnetic directions of the samples. From this information, the researchers then computed the direction and distance to the ancient north and south poles.

The conclusion was that the place in which they were drilling was 11 degrees (plus or minus five degrees) from the equator when Earth was 2.4 billion years old. Plate tectonic motions since that time have caused South Africa to drift all over the planet, to its current position at about 30 degrees south latitude. Additional tests showed that the samples were from glacial deposits, and further, were characteristic of a widespread region.
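The paleolatitude figure comes from the dip, or inclination, of the magnetization locked into the rocks: under the standard geocentric axial dipole assumption, tan(I) = 2 tan(latitude). A minimal sketch of the conversion (the inclination value below is hypothetical, chosen only to land near the reported 11 degrees, and is not a figure from the paper):

```python
import math

def paleolatitude(inclination_deg):
    """Paleolatitude (degrees) from remanent-magnetization inclination,
    using the geocentric axial dipole relation tan(I) = 2 * tan(lat)."""
    inclination = math.radians(inclination_deg)
    return math.degrees(math.atan(math.tan(inclination) / 2.0))

# A hypothetical inclination of about 21 degrees corresponds to a
# near-equatorial site:
print(round(paleolatitude(21.2), 1))  # → 11.0
```

Note the leverage in the relation: near the equator, small changes in inclination translate into only a few degrees of latitude, which is why the authors can quote an 11-degree result with a five-degree uncertainty.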

Kirschvink and Evans say that the preliminary implications are that Earth can somehow manage to pull itself out of a period of severe glaciation. Because ice and snow reflect sunlight much better than land and water do, Earth would normally be expected to have a hard time reheating itself enough to leave an ice age. Thus, one would expect a Snowball Earth to remain frozen forever.
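The reflection argument can be illustrated with a zero-dimensional energy balance, in which a planet's effective temperature is T = (S(1 - a) / 4σ)^(1/4) for albedo a. The numbers below are illustrative present-day values, not figures from the Nature paper:

```python
# Zero-dimensional energy-balance sketch of the ice-albedo problem.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # present-day solar constant, W m^-2

def equilibrium_temp(albedo):
    """Effective radiating temperature (K) for a planet with the given albedo."""
    return (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

print(round(equilibrium_temp(0.30)))  # dark land and oceans: 255 K
print(round(equilibrium_temp(0.60)))  # ice-covered planet: 221 K
```

Once ice raises the albedo, the equilibrium temperature drops by tens of degrees, which keeps the ice in place; this is the feedback loop that makes the planet's recovery puzzling.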

Yet, the planet obviously recovered both times from the severe glaciation. "We think it is likely that the intricacies of global climate feedback are not yet completely understood, especially concerning major departures from today's climate," says Evans. "If the Snowball Earth model is correct, then our planet has a remarkable resilience to abrupt shifts in climate.

"Somehow, the planet recovered from these ice ages, probably as a result of increased carbon dioxide — the main greenhouse gas."

Evans says that an asteroid or comet impact could have caused carbon dioxide to pour into the atmosphere, allowing Earth to trap solar energy and reheat itself. But evidence of an impact during this age, such as a remote crater, is lacking.

Large volcanic outpourings could also have released substantial carbon dioxide, as could other mechanisms such as sedimentary processes and biological activity.

At any rate, the evidence for the robustness of the planet and the life that inhabits it is encouraging, the researchers say. Not only did Earth pull itself out of both periods of severe glaciation, but many of the single-celled organisms that existed at the time managed to persevere.

Robert Tindol

State-of-the-Art Seismic Network Gets First Trial-by-Fire During This Morning's 5.4-magnitude Earthquake

PASADENA—Los Angeles reporters and camera crews responding to a 5.4-magnitude earthquake this morning got their first look at the new Caltech/USGS earthquake monitoring system.

The look was not only new but almost instantaneous. Within 15 minutes of the earthquake, Caltech seismologists had already printed out a full-color poster-sized map of the region to show on live TV, and had already posted the contour map on the Internet. Moreover, they were able to determine the magnitude of the event within five minutes — a tremendous improvement over the time it once took to confirm data.

"Today, we had a much better picture of how the ground responded to the earthquake than we've ever had in the past," said Dr. Lucile Jones, a U.S. Geological Survey seismologist who is stationed at Caltech. "This was the largest earthquake we've had since September of 1995, and was the first time we've been able to use the new instruments that we're still installing."

The new instruments are made possible by the TriNet Project, a $20.75-million initiative for providing a state-of-the-art monitoring network for Southern California. A scientific collaboration between Caltech, the USGS and the California Department of Conservation's Division of Mines and Geology, the project is designed to provide real-time earthquake monitoring and, ultimately, to lead to early-warning technology to save lives and mitigate urban damage after earthquakes occur.

"The idea of Trinet was to get quick locations and magnitudes out, to get quick estimates of the distribution of the ground shaking, and a prototype early-warning system," Caltech seismic analyst Egill Hauksson said an hour after this morning's earthquake. "The first two of those things are already in progress. We are in the midst of deploying hardware in the field and developing data-processing software." TriNet was announced earlier this year when funding was approved by the Federal Emergency Management Agency. The new system relies heavily on recent advances in computer communications technology and data processing.

The map printed out this morning (the ShakeMap) is just a preview of future TriNet products. Caltech seismologist Kate Hutton gave a number of TV interviews in front of the map this morning. The map was noteworthy not only for the speed with which it was produced, but also for the manner in which information about the earthquake was relayed.

Instead of charting magnitudes, the map used contour lines to show the velocity at which the ground moved. The most rapid movement in this morning's 5.4-magnitude earthquake was about two inches per second at the epicenter, and this was clearly indicated by the innermost circle on the color map. Moving outward from the epicenter, the velocity of ground movement decreased, and this was indicated by lower velocity numbers in the outer circles.

The maps can also be printed out to show ground accelerations, which are especially useful for ascertaining likely damage in an earthquake area, Hutton said.

Later, TriNet will provide prototype early warnings to distant locations in the Los Angeles area that potentially damaging ground shaking is on the way. After an earthquake occurs, its seismic waves travel at a few kilometers per second, while communications signals travel at the speed of light. Thus, Los Angeles could eventually receive warning of a major earthquake on the San Andreas fault some 30 to 60 seconds before the heavy shaking actually begins in the city.
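The arithmetic behind a 30-to-60-second figure can be sketched as follows, assuming an illustrative strong-shaking (S-wave) speed of 3.5 km/s and a few seconds of detection-and-telemetry delay; neither number comes from the TriNet design documents:

```python
def warning_time(distance_km, s_wave_kms=3.5, processing_delay_s=5.0):
    """Seconds of warning a distant city gets before strong shaking arrives,
    assuming near-instant electronic transmission of the alert."""
    return distance_km / s_wave_kms - processing_delay_s

# A rupture on the San Andreas fault roughly 150 km from downtown Los Angeles:
print(round(warning_time(150)))  # → 38 seconds, within the 30-to-60-second range
```

The warning window grows with distance from the fault, which is why remote ruptures on the San Andreas are the best case for early warning, while an earthquake directly under the city would give essentially no lead time.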

The total cost of the project is $20.75 million. FEMA will provide $12.75 million, the USGS has provided $4.0 million, and the balance is to be matched by Caltech ($2.5 million) and the DOC ($1.75 million). Several private sector partners, including GTE and Pacific Bell, are assisting Caltech with matching funds for its portion of the TriNet balance.

The TriNet Project is being built upon existing networks and collaborations. Southern California's first digital network began with the installation of seismographs known as TERRAscope, and was made possible by a grant from the L.K. Whittier Foundation and the ARCO Foundation. Also, Pacific Bell through its CalREN Program has provided new frame-relay digital communications technology.

A major step in the modernization came in response to the Northridge earthquake, when the USGS received $4.0 million from funds appropriated by Congress to the National Earthquake Hazard Reduction Program. This money was the first step in the TriNet project and the USGS has been working with Caltech for the last 27 months to begin design and implementation. Significant progress has already been made and new instrumentation is now operational:

o Thirty state-of-the-art digital seismic stations are operating with continuous communication to Caltech/USGS

o Twenty strong-motion sites installed near critical structures

o Two high-rise buildings have been instrumented

o Alarming and processing software have been designed and implemented

o Automated maps of contoured ground shaking are available on the Web within a few minutes after felt and damaging earthquakes

DOC's strong motion network in Southern California is a key component of the TriNet Project, contributing 400 of the network's 650 sensing stations. DOC's network expansion and upgrade through the funding of this project will allow much better information about strong shaking than was possible for the Northridge earthquake. This data is the key to improving building codes for more earthquake-resistant structures.

Robert Tindol

Caltech Question of the Week: Do Earth's Plates Move In a Certain Direction?

Submitted by Frank Cheng, Alhambra, California, and answered by Joann Stock, Associate Professor of Geology and Geophysics, Caltech.

Each plate is moving in a different direction, but the exact direction depends on the "reference frame," or viewpoint, in which you are looking at the motion. The background to this question is the fact that there are 14 major tectonic plates on Earth: the Pacific, North America, South America, Eurasia, India, Australia, Africa, Antarctica, Cocos, Nazca, Juan de Fuca, Caribbean, Philippine, and Arabia.

Each plate is considered to be "rigid," which means that the plate is moving as a single unit on the surface of Earth. We can describe the relative motion between any pair of plates. For example, the North America plate and the Eurasia plate are moving away from each other in the North Atlantic Ocean, resulting in seafloor spreading along the mid-Atlantic ridge, which is the boundary between these two plates. In this case, if you imagine Eurasia to be fixed, the North America plate would be moving west.

But it is equally valid to imagine that the North America plate is fixed, in which case the Eurasia plate would be moving east. If you think about the Pacific–North America plate boundary (along the San Andreas fault in Southern California), the motion of the North America plate is different; the North America plate is moving southeast relative to the Pacific plate.

This doesn't mean that the North America plate is moving in different directions at once. The difference is due to the change of reference frame, from the Eurasia plate to the Pacific plate.

Sometimes we describe plate motions in terms of other reference frames that are independent of the individual plates, such as some external (celestial) reference frame or more slowly moving regions of Earth's interior. In this case, each plate has a unique motion, which may change slowly over millions of years.

Technically, the plate motion in any reference frame is described by an angular velocity vector. This corresponds to the slow rotation of the plate about an axis that goes from Earth's center along an imaginary line to the "pole" of rotation somewhere on Earth's surface.
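That angular-velocity description can be turned into numbers: the local plate speed at any site is ωR sin(Δ), where Δ is the angular distance from the site to the rotation pole. A sketch, using an illustrative pole and rotation rate rather than values from any published plate-motion model:

```python
import math

def plate_velocity(pole_lat, pole_lon, rate_deg_per_myr, site_lat, site_lon):
    """Local plate speed at a site given an Euler pole: |v| = omega * R * sin(delta),
    where delta is the angular distance from site to pole.
    Returns km/Myr, which is numerically the same as mm/yr."""
    R = 6371.0  # mean Earth radius, km
    omega = math.radians(rate_deg_per_myr)  # radians per million years
    # Angular distance via the spherical law of cosines (clamped for safety).
    cos_delta = (
        math.sin(math.radians(pole_lat)) * math.sin(math.radians(site_lat))
        + math.cos(math.radians(pole_lat)) * math.cos(math.radians(site_lat))
        * math.cos(math.radians(site_lon - pole_lon))
    )
    delta = math.acos(max(-1.0, min(1.0, cos_delta)))
    return omega * R * math.sin(delta)

# Hypothetical pole and rate, evaluated at a Southern California site;
# the result is roughly 45 mm/yr.
print(round(plate_velocity(48.7, -78.2, 0.75, 34.1, -118.1)))
```

Note that the speed vanishes at the pole of rotation itself and is greatest 90 degrees away from it, which is why different parts of one rigid plate move at different speeds even though the plate has a single angular velocity.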

Robert Tindol

Researchers Establish Upper Limit of Temperature at the Core-mantle Boundary of Earth

PASADENA— Researchers at the California Institute of Technology have determined that Earth's mantle reaches a maximum temperature of 4,300 degrees Kelvin. The results are reported in the March 14, 1997, issue of the journal Science.

According to geophysics professor Tom Ahrens and graduate student Kathleen Holland, the results are important for setting very reliable bounds on the temperature of Earth's interior. Scientists need to know very precisely the temperature at various depths in order to better understand large-scale processes such as plate tectonics and volcanic activity, which involve movement of molten rock from the deep interior of the Earth to the surface.

"This nails down the maximum temperature of the lower-most mantle, a rocky layer extending from a depth of 10 to 30 kilometers to a depth of 2900 kilometers, where the molten iron core begins," Ahrens says. "We know from seismic data that the mantle is solid, so it has to be at a lower temperature than the melting temperature of the materials that make it up."

In effect, the research establishes the melting temperature of the high-pressure form of the crystal olivine. At normal pressures, olivine is known by the formula (Mg,Fe)2SiO4, and is a semiprecious translucent green gem. At very high pressures, olivine breaks down into magnesiowüstite and a mineral with the perovskite structure. Together these two minerals are thought to make up the bulk of the materials in the lower mantle.

The researchers achieved these ultra-high pressures in their samples by propagating a shock wave into them, using a high-powered cannon apparatus, called a light-gas gun. This gun launches projectiles at speeds of up to 7 km/sec. Upon impact with the sample, a strong shock wave causes ultra-high pressures to be achieved for only about one-half a millionth of a second. The researchers have established the melting temperature at a pressure of 1.3 million atmospheres. This is the pressure at the boundary of the solid lower mantle and liquid outer core.

"We have replicated the melting which we think occurs in the deepest mantle of the Earth," says Holland, a doctoral candidate in geophysics at Caltech. "This study shows that material in the deep mantle can melt at a much lower temperature than had been previously estimated. It is exciting that we can measure phase transitions at these ultra-high pressures."

The researchers further note that the temperature of 4,300 degrees would allow partial melting in the lowest 40 kilometers or so of the lower mantle. This agrees well with seismic analysis of waveforms conducted in 1996 by Caltech Professor of Seismology Donald Helmberger and his former graduate student Edward Garnero. Their research suggests that at the very lowest reaches of the mantle there is a partially molten layer, called the Ultra-Low-Velocity Zone.

"We're getting into explaining how such a thin layer of molten rock could exist at great depth," says Ahrens. "This layer may be the origin layer that feeds mantle plumes, the volcanic edifices such as the Hawaiian island chain and Iceland. "We want to understand how Earth works."

Robert Tindol

Caltech Geologists Find New Evidence That Martian Meteorite Could Have Harbored Life

PASADENA—Geologists studying Martian meteorite ALH84001 have found new support for the possibility that the rock could once have harbored life.

Moreover, the conclusions of California Institute of Technology researchers Joseph L. Kirschvink and Altair T. Maine, and McGill University's Hojatollah Vali, also suggest that Mars had a substantial magnetic field early in its history.

Finally, the new results suggest that any life on the rock existing when it was ejected from Mars could have survived the trip to Earth.

In an article appearing in the March 13 issue of the journal Science, the researchers report that their findings have effectively resolved a controversy about the meteorite that has raged since evidence for Martian life was first presented in 1996. Even before this report, other scientists suggested that the carbonate globules containing the possible Martian fossils had formed at temperatures far too hot for life to survive. All objects found on the meteorite, then, would have to be inorganic.

However, based on magnetic evidence, Kirschvink and his colleagues say that the rock has certainly not been hotter than 350 degrees Celsius in the past four billion years—and probably has not been above the boiling point of water. At these low temperatures, bacterial organisms could conceivably survive.

"Our research doesn't directly address the presence of life," says Kirschvink. "But if our results had gone the other way, the high-temperature scenario would have been supported."

Kirschvink's team began their research on the meteorite by sawing a tiny sample in two and then determining the direction of the magnetic field held by each. This work required the use of an ultrasensitive superconducting magnetometer system, housed in a unique, nonmagnetic clean lab facility. The team's results showed that the sample in which the carbonate material was found had two magnetic directions—one on each side of the fractures.

The distinct magnetic directions are critical to the findings, because any weakly magnetized rock will reorient its magnetism to align with the local field direction after it has been heated to high temperatures and cooled. If two such rock fragments are joined with their magnetic directions misaligned, but are then heated past a certain critical temperature, they will cool to a single, uniform direction.

The igneous rock (called pyroxenite) that makes up the bulk of the meteorite contains small inclusions of magnetic iron sulfide minerals that will entirely realign their field directions at about 350°C, and will partially align the field directions at much lower temperatures. Thus, the researchers have concluded that the rock has never been heated substantially since it last cooled some four billion years ago.

"We should have been able to detect even a brief heating event over 100 degrees Celsius," Kirschvink says. "And we didn't."

These results also imply that Mars must have had a magnetic field similar in strength to that of the present Earth when the rock last cooled. This is very important for the evolution of life, as the magnetic field will protect the early atmosphere of a planet from being sputtered away into space by the solar wind. Mars has since lost its strong magnetic field, and its atmosphere is nearly gone.

The fracture surfaces on the meteorite formed after it cooled, during an impact event on Mars that crushed the interior portion. The carbonate globules that contain putative evidence for life formed later on these fracture surfaces, and thus were never exposed to high temperatures, even during their ejection from the Martian surface nearly 15 million years ago, presumably from another large asteroid or comet impact.

A further conclusion one can reach from Kirschvink's work is that the inside of the meteorite never reached high temperatures when it entered Earth's atmosphere. This means, in effect, that any remaining life on the Martian meteorite could have survived the trip from Mars to Earth (which can take as little as a year, according to some dynamic studies), and could have ridden the meteorite down through the atmosphere by residing in the interior cracks of the rock and been deposited safely on Earth.

"An implication of our study is that you could get life from Mars to Earth periodically," Kirschvink says. "In fact, every major impact could do it." Kirschvink's suggested history of the rock is as follows:

The rock crystallized from an igneous melt some 4.5 billion years ago and spent about half a billion years on the primordial planet, being subjected to a series of impact-related metamorphic events, which included formation of the iron sulfide minerals.

After final cooling in the ancient Martian magnetic field about four billion years ago, the rock would have had a single magnetic field direction. Following this, another impact crushed parts of the meteorite without heating it, and caused some of the grains in the interior to rotate relative to each other. This led to a separation of their magnetic directions and produced a set of fracture cracks. Aqueous fluids later percolated through these cracks, perhaps providing a substrate for the growth of Martian bacteria. The rock then sat more or less undisturbed until a huge asteroid or comet smacked into Mars 15 million years ago. The rock wandered in space until about 13,000 years ago, when it fell on the Antarctic ice sheet.

Robert Tindol

Scientists Find "Good Intentions" in the Brain

PASADENA—Neurobiologists at the California Institute of Technology have succeeded in peeking into one of the many "black boxes" of the primate brain. A study appearing in the March 13 issue of the journal Nature describes an area of the brain where plans for actions are formed.

It has long been known that we gain information through our senses and then respond to our world with actions via body movements. Our brains are organized accordingly, with some sections processing incoming sensory signals such as sights and sounds, and other sections regulating motor outputs such as walking, talking, looking, and reaching. What has puzzled scientists, however, is where in the brain thought is put into action. Presumably there must be an area between the sensory incoming areas and the motor outputting areas that decides or determines what we will do next.

Richard Andersen, James G. Boswell Professor of Neuroscience at Caltech, along with Senior Research Fellow Larry Snyder and graduate student Aaron Batista, chose the posterior parietal cortex as the likely candidate to perform such decisions. This is a high-functioning cognitive area and is the endpoint of what scientists call the visual "where" pathway. Lesions to the parietal cortex of humans result in loss of the ability to appreciate spatial relationships and to navigate accurately.

As Michael Shadlen of the University of Washington says in the Nature "News and Views" commentary on the latest findings, "Nowhere in the brain is the connection between body and mind so conspicuous as in the parietal lobes—damage to the parietal cortex disrupts awareness of one's body and the space that it inhabits."

It is here, Andersen postulates, that incoming sensory signals overlap with outgoing movement commands, and it is here where decisions and planning occur. Numerous investigations had assumed that a sensory map of external space must exist within the parietal cortex, so that certain subsections would be responsible for certain spatial locations of objects such as "up and to the left" or "down and to the right." Previous results from Andersen's own lab, however, had led him to question whether absolute space was the driving feature of the posterior parietal map or whether, instead, the intended movement plan was the determining factor in organizing the area.

In a series of experiments designed so that the scientists could "listen in" on the brain cells of monkeys at work, the animals were taught to watch a signal light and, depending on its color, to either reach to or look at the target. When the signal was green they were to reach and when it was red they were only to look at the target. An important additional twist to the study was that the monkeys had to withhold their responses for over a second.

The scientists measured neural activity during this delay when the monkeys had planned the movement but not yet made it. What they found was that different cells within different regions of the posterior parietal cortex became active, depending not so much on where the objects were but rather on which movements were required to obtain them. It seems then that the same visual input activates different subareas depending on how the animal plans to respond.

According to Andersen, this result shows that the pathway through the visual cortex that tells us where things are ends in a map of intention rather than a map of sensory space, as had been previously thought. According to Shadlen, these results are intriguing because they indicate that "for the brain, spatial location is not a mathematical abstraction or property of a (sensory) map, but involves the issue of how the body navigates its hand or gaze." Andersen feels the study is important because it demonstrates that "our thoughts are more directly tied to our actions than we had previously imagined, and the posterior parietal cortex appears to be organized more around our intentions than our sensations."

Robert Tindol

Caltech Chemists Design Molecule To Repair a Type of DNA Damage

PASADENA—Chemists have found a way to repair DNA molecules that have been damaged by ultraviolet radiation. The research is reported in the March 7, 1997, issue of the journal Science.

In the cover article, California Institute of Technology Professor of Chemistry Jacqueline K. Barton and her coworkers Peter J. Dandliker, a postdoctoral associate, and R. Erik Holmlin, a graduate student, report that the new procedure reverses thymine dimers, a well-known type of DNA abnormality caused by exposure to ultraviolet light. By designing a synthetic molecule containing rhodium, the researchers have succeeded in repairing the damage and returning the DNA to its normal state.

The research is also significant in that the rhodium complex can be attached to the end of the DNA strand and repair the damaged site even when it is much farther up the helix.

"What I think is exciting is that we can use the DNA to carry out chemistry at a distance," says Barton. "What we're really doing is transferring information along the helix."

A healthy DNA molecule appears something like a twisted ladder. The two "rails" of the ladder, the DNA backbones, are connected by "rungs," the DNA bases adenine, thymine, cytosine, and guanine, which are paired together in units called base pairs to form the helical stack.

Thymine dimers occur when two neighboring thymines on the same strand become linked together. The dimer, once formed, leads to mutations because of mispairings when new DNA is made. If the thymine dimers are not repaired, mutations and cancer can result.

The new method repairs the thymine dimers at the very first stage, before mutations can develop. The rhodium complex is exposed to normal visible light, which triggers an electron transfer reaction to repair the thymine dimer. The rhodium complex can either act locally on a thymine dimer lesion on the DNA strand, or can be tethered to the end of the DNA helix to work at a distance.

In the latter case, the electron works its way through the stack of base pairs. The repair efficiency doesn't decrease as the tether point is moved away from the site of damage, the researchers have found. However, the efficiency of the reaction is diminished when the base pair stack, the pathway for electron transfer, is disrupted.

"This argues that the radical, or electron hole, is migrating through the base pairs," Barton says. "Whether electron transfer reactions on DNA also occur in nature is something we need to find out. We have found that this feature of DNA allows one to carry out chemical reactions from a distance."

Barton cautions that the discovery does not represent a new form of chemotherapy. However, the research could point to new protocols for dealing with the molecular changes that precede mutations and cancer.

"This could give us a framework to consider new strategies," she says. This research was funded by the National Institutes of Health. Dandliker is a fellow of the Cancer Research Fund of the Damon Runyon-Walter Winchell Foundation, and Holmlin is a National Science Foundation predoctoral fellow.

Robert Tindol

Question of the Week: Why Does an Engine Cooling System Have a Thermostat, and How Does It Relate To the Coolant Flow Rate?

Submitted by Bill McLellan, Pasadena, California, and answered by Melany Hunt, Associate Professor of Mechanical Engineering, Caltech.

The cooling system is an important part of an automobile engine. I've certainly become more aware of this fact after having my car overheat on the Santa Monica Freeway.

The cooling system serves three important functions. First, it removes excess heat from the engine; second, it maintains the engine operating temperature where it works most efficiently; and finally, it brings the engine up to the right operating temperature as quickly as possible.

The cooling system is composed of six main parts—an engine, a radiator, a water pump, a cooling fan, hoses, and a thermostat. During the combustion process, some of the fuel energy is converted into heat. This heat is transferred to the coolant being circulated through the engine by the water pump. Hoses carry the hot coolant to the radiator, where the heat is transferred to air that is pulled through the radiator by the cooling fan. The coolant is then carried back to the water pump and recirculated.

When an engine is cold, such as first thing in the morning, the engine operates a bit differently. To maximize efficiency, the engine is designed to warm up quickly. Once the engine reaches the right operating temperature, the engine is designed to be maintained at a stable temperature, which is the purpose of the thermostat. The thermostat is like a valve that opens and closes as a function of its temperature. The thermostat isolates the engine from the radiator until it has reached a certain minimum temperature. Without a thermostat, the engine would always lose heat to the radiator and take longer to warm up. Once the engine has reached the desired operating temperature, the thermostat adjusts flow to the radiator to maintain a stable temperature.
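The thermostat's behavior can be sketched as a simple valve function; the temperature thresholds below are illustrative, not taken from any particular engine:

```python
# Toy thermostat model: the valve stays closed while the engine warms up,
# then opens progressively to route coolant through the radiator.
def valve_opening(coolant_temp_c, start_open=88.0, full_open=99.0):
    """Fraction of coolant routed to the radiator (0 = closed, 1 = fully open)."""
    if coolant_temp_c <= start_open:
        return 0.0
    if coolant_temp_c >= full_open:
        return 1.0
    return (coolant_temp_c - start_open) / (full_open - start_open)

print(valve_opening(80))   # cold start: 0.0, all heat stays in the engine
print(valve_opening(95))   # normal running: partially open
print(valve_opening(105))  # very hot: 1.0, fully dependent on the radiator
```

The partially open middle range is what lets the thermostat hold the engine near a stable operating temperature: a small rise in coolant temperature sends more flow to the radiator, and a small drop sends less.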

Sometimes, the coolant is so hot that the thermostat opens all the way, making the engine completely dependent on the radiator to keep its temperature stable. As long as there is enough air flow through the radiator, the engine will stay cool. If for some reason the air flow rate is too low, the radiator won't do its job and the engine may overheat. At this point, if the coolant flow rate is increased, the engine will then transfer more heat to the coolant, which will exacerbate the situation. The thermostat flow restriction helps to increase the pressure in the cooling system, which makes it harder for the coolant to boil in the water pump. However, it does little to help the radiator keep the engine cool.

Question of the Week: What Causes a Gene To Mutate or Change?

Submitted by Virginia Salazar, Whittier, California, and answered by Dr. Paul Sternberg, Professor of Biology, Caltech.

In most cases, the sequence of DNA making up a gene is copied accurately when a cell divides. This accurate process ensures that each cell is like its parent cell. DNA consists of a string of DNA bases, the letters in the genetic alphabet.

The bad news is that DNA is under continual attack by chemicals within the cell that are byproducts of the ordinary workings of each cell; by environmental hazards; by radiation; and by the general tendency of things to break down. Environmental hazards include natural plant products as well as human-made chemicals. These attacks cause problems ranging from changes of a single DNA letter to a break in the string.

The good news is that cells counter these continual attacks by correcting essentially all the damage, using a host of beautiful molecular machines. But a mutation occurs when a cell fails to repair damage to its DNA, or repairs it incorrectly. When such a cell divides, it passes on the mutated gene to its progeny. Eggs and sperm, which join to form an embryo, are themselves the product of cell divisions and thus subject to errors in the copying of DNA. These mutations are passed on to our children.

Other cells in our bodies are subject to mutation, and mutant cells can become cancerous. Particularly pernicious are mutations that disrupt the ability of a cell to repair its own DNA. Such mutations are in the genes that are responsible for making the repair machinery. When this occurs, the mutant cell will more easily continue to mutate, a disaster in the making!

Robert Tindol

Question of the Week: How Often Do Meteors Fall To Earth?

Submitted by Bob and Pat Gaskill, Orange County, and answered by Dr. William Bottke, Texaco Prize Fellow, Division of Geological and Planetary Sciences, Caltech.

Meteors and meteorites are small rocky fragments of other planetary bodies that fall to Earth. When they do so, they often produce spectacular audible and visual effects that can be seen from the ground. Meteorites, objects that survive their fiery passage through Earth's atmosphere, are of particular interest to scientists, since they are pieces of planetary bodies (mostly asteroids) for which samples have not yet been obtained through either manned or unmanned space missions. The oldest meteorites are remnants of the very first processes to occur in our solar system 4.6 billion years ago, giving us a glimpse into what conditions were like when Earth was formed.

One common class of meteor is called a "fireball," named for the bright, streaming orbs produced when the surface of a fist-sized or larger body is boiled away by friction as it enters Earth's atmosphere. Fireballs decelerate from speeds of about 60,000 m.p.h. to 200 m.p.h. during this passage, often slowing enough at the end so that they literally drop to the ground. Their flight path is similar to that of a golf ball thrown at an angle into a swimming pool; once the water stops the forward momentum of the ball, it sinks to the bottom of the pool. The meteor is often not strong enough to survive this passage intact, which can make recovery of the fragments difficult.

Fireballs are mostly seen crossing the sky at night, though some are so bright they can be seen during the day. When a fireball is seen, it is usually several miles high. If any meteoritic pieces survive to reach the ground, they would probably land more than 500 miles from the observer. If enough people see the fireball from separate locations, however, scientists may be able to calculate where the fragments should strike Earth.

Studies indicate that about 25 meteorites weighing more than a fifth of a pound fall on California (or an area of equal size) each year. Three or four of these samples weigh about two pounds and are the size of your fist. Using these values, we can estimate that between 300 and 400 of these larger meteorites have fallen on California since the turn of the century. Most of these rocks, though, have not been found, leaving open the possibility that you yourself may discover one someday.
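The 300-to-400 figure is simple bookkeeping on the per-year rate quoted above, taken from 1900 to the present:

```python
# Rough bookkeeping of the estimate in the text, for roughly 1900-1997.
fist_sized_falls_per_year = 3.5   # "three or four" two-pound meteorites per year
years = 1997 - 1900
total = fist_sized_falls_per_year * years
print(round(total))  # → 340, i.e. "between 300 and 400"
```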

