Question of the Week: Does the Earth Keep a Constant Distance from the Sun? If Not, Will the Earth Get Closer to the Sun and Become Warmer?

Submitted by Steven S. Showers, Newbury Park, California

Answered by Andrew Ingersoll, professor of planetary science, Caltech.

Earth has an eccentric orbit, which means that it moves in a path that is slightly oval in shape. Contrary to what you'd expect, Earth gets closest to the sun every December, and farthest from the sun every June. We in the Northern Hemisphere are therefore slightly closer to the sun in winter than in summer. The eccentricity of the orbit itself also changes slowly over time: the maximum eccentricity is about 5 percent and the minimum is near zero, when the orbit is nearly circular. This cycle can be calculated for millions of years, and we know that the glaciers also have cycles of about 100,000 years. The question is whether the glaciers are tied to changes in Earth's eccentricity.

So the bottom line is that Earth does get closer and farther, and it does affect the climate. But the mechanism is not all that clear. Averaged over a year, the distance from the Earth to the Sun changes very little, even over billions of years (the Earth is 4.5 billion years old).
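
As a rough illustration of how much the changing distance matters, the inverse-square law gives the ratio of sunlight Earth receives at its closest approach (perihelion) to what it receives at its farthest point (aphelion). The short sketch below assumes a simple elliptical orbit and uses Earth's present eccentricity of roughly 1.7 percent together with the 5 percent maximum mentioned above.

```python
# Illustrative only: how orbital eccentricity changes the sunlight Earth receives.

def insolation_ratio(eccentricity):
    """Ratio of sunlight received at perihelion (closest) to aphelion (farthest).

    Sunlight falls off as 1/distance**2, and for an ellipse with semimajor axis a
    the two extreme distances are a*(1 - e) and a*(1 + e).
    """
    return ((1 + eccentricity) / (1 - eccentricity)) ** 2

print(insolation_ratio(0.017))  # ~1.07: about 7 percent more sunlight in January than in July today
print(insolation_ratio(0.05))   # ~1.22: about 22 percent more at the 5 percent maximum eccentricity
```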

What is more important to climatic changes over the eons is the fact that the sun is getting brighter. Earth now gets about 20 to 25 percent more sunlight than it did four billion years ago.

Writer: 
Robert Tindol

Caltech Astronomer Obtains Data That Could Resolve the "Age Problem"

PASADENA — A California Institute of Technology astronomer has obtained data that could resolve the "age problem" of the universe, in which certain stars appear to be older than the universe itself.

Dr. Neill Reid, using information collected by the European Space Agency's Hipparcos satellite, has determined that a key distance measure used to compute the age of certain Milky Way stars is off by 10 to 15 percent. The new data leads to the conclusion that the oldest stars are actually 11 to 13 billion years old, rather than 16 to 18 billion years old, as had been thought.

The new results will be of great interest to cosmologists, Reid says, because estimates of the age of the universe, based on tracking back the current rate of expansion, suggest that the Big Bang occurred no more than about 13 billion years ago. Therefore, astronomers will no longer be confronted with the nettling discrepancy between the ages of stars and the age of the universe.

"This gives us an alternate way of estimating the age of the universe," says Reid. "The ideal situation would be to have the same answer, independently given by stellar modeling and cosmology."

Reid's method focuses on a type of star known as a subdwarf, found in globular clusters, which are spherical accumulations of hundreds of thousands of individual stars. Globular clusters have long been known to be among the earliest objects to form in the universe, since the stars are composed mainly of the primordial elements hydrogen and helium, and because the clusters themselves are distributed throughout a sphere 100,000 light-years in diameter, rather than confined, like the sun, within the flattened pancake of the galactic disk. Astronomers can determine quantitative ages for the clusters by measuring the luminosity (the intrinsic brightness) of the brightest sunlike stars in each cluster. Those measurements require that the distances to the clusters be known accurately.

Reid looked at some 30 stars within about 200 light-years of Earth. Using the Hipparcos satellite, he was able to obtain very accurate distances to these stars by the parallax method. Parallax is a common method for determining the distances to relatively nearby objects. Just as a tree 10 feet away will seem to shift its position against the distant background when an observer closes one eye and then the other, a nearby star will shift its position slightly if the observer waits six months for Earth to reach the opposite side of its orbit. And if the distance between the two observing sites (the baseline) is known very accurately, the observer can then compute the distance to the object by treating the object and the two observing sites as a giant triangle.
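
For readers who want the arithmetic, the sketch below shows the basic parallax-to-distance conversion. It is a simplified illustration of the geometry described above, not Reid's actual analysis, and the example parallax value is hypothetical.

```python
# Minimal sketch of the parallax geometry described above (not Reid's actual pipeline).
# The parallax angle p is half the apparent shift seen over six months,
# corresponding to a baseline of 1 astronomical unit.

LIGHT_YEARS_PER_PARSEC = 3.26

def parallax_distance_light_years(parallax_arcsec):
    """Distance in light-years from a parallax angle in arcseconds.

    By definition, a star with a parallax of 1 arcsecond lies at 1 parsec,
    so the distance in parsecs is simply 1 / parallax.
    """
    return (1.0 / parallax_arcsec) * LIGHT_YEARS_PER_PARSEC

# A hypothetical star 200 light-years away has a parallax of about 0.016 arcseconds,
# tiny enough that a satellite like Hipparcos is needed to measure it accurately.
print(parallax_distance_light_years(0.016))  # ~204 light-years
```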

Reid chose the 30 stars for special study (out of the 100,000 for which Hipparcos obtained parallax data) because they, like the globular cluster stars, are composed primarily of hydrogen and helium. Thus, these stars also can be assumed to be very old, and may indeed themselves once have been members of globulars that were torn apart as they orbited the galaxy.

Once distances have been measured, these nearby stars act as standard candles whose brightness can be compared to similar stars in the globular clusters. While this is a well-known technique, older investigations were only able to use lower-accuracy, pre-Hipparcos parallaxes for 10 of the 30 stars.

Reid's conclusion is that the clusters are about 10 to 15 percent farther from Earth than previously thought. This, in turn, means that the stars in those clusters are actually about 20 percent brighter than previously thought, because apparent brightness falls off with the square of the distance, so a star at a greater distance must be intrinsically more luminous to appear as bright as it does. Brighter stars have shorter lifetimes, so this means that the clusters themselves must be younger than once assumed.
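
The size of the revision follows directly from the inverse-square law; the sketch below is an illustration of that arithmetic, not Reid's published calculation.

```python
# Rough arithmetic behind the revision described above (illustration only).
# Apparent brightness falls off as 1/distance**2, so if a cluster is farther away than
# thought, its stars must be intrinsically brighter by the square of the distance ratio.

def luminosity_correction(distance_increase_fraction):
    """Fractional increase in intrinsic luminosity implied by a fractional increase in distance."""
    return (1 + distance_increase_fraction) ** 2 - 1

# A 10 to 15 percent increase in distance implies roughly a 20 to 30 percent increase in luminosity.
print(luminosity_correction(0.10))  # ~0.21
print(luminosity_correction(0.15))  # ~0.32
```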

British astronomers Michael Feast and Robin Catchpole recently arrived at very similar conclusions, also based on new data from Hipparcos, but using a different, and less direct, line of argument. They used new measurements of a type of variable star known as Cepheids to determine a revised distance to the Large Magellanic Cloud, a galaxy orbiting the Milky Way.

Feast and Catchpole used another type of variable star, the RR Lyrae variables, to bridge between the LMC and globular clusters. The fact that these two independent methods give the same answer makes that answer more believable, says Reid. "Most people previously believed that 14 billion years was the youngest age you could have for these stars," Reid says. "I think it's now accurate to say that the oldest you could make them is 14 billion years.

"No longer are we faced with the paradox of a universe younger than its stellar constituents," says Reid.

The work is set to appear in July in the Astrophysical Journal.

Writer: 
Robert Tindol

Caltech A Major Partner in National Program To Develop Advanced Computational Infrastructure

PASADENA, California — The California Institute of Technology (Caltech) will play three key roles in the National Partnership for Advanced Computational Infrastructure (NPACI) in the areas of management, resource deployment, and technology and application initiatives. The NPACI program is one of two partnerships, each awarded approximately $170 million over five years, in the National Science Foundation's Partnerships for Advanced Computational Infrastructure (PACI) program, which is slated to begin October 1, 1997.

The NPACI partnership, led by the University of California, San Diego (UCSD), includes Caltech and 36 other leaders in high-performance computing. In NPACI, leading-edge scientists and engineers from 18 states will develop algorithms, libraries, system software, and tools in order to create a national metacomputing infrastructure of the future: one that will provide both teraflops and petabyte capability. (A petabyte is equal to one billion megabytes; one teraflops is one trillion floating-point operations per second.) A fully supported, teraflops/petabyte-scale metacomputing environment will enable quantitative and qualitative advances in a wide range of scientific disciplines, including astronomy, biochemistry, biology, chemistry, engineering, fluid dynamics, materials science, neuroscience, social science and behavioral science.
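
As a quick check on those unit figures, the sketch below assumes the decimal (powers-of-ten) definitions used in the parenthetical above.

```python
# Quick check of the unit figures quoted above (decimal definitions assumed).
MEGABYTE = 10**6    # bytes
PETABYTE = 10**15   # bytes
TERAFLOPS = 10**12  # floating-point operations per second

print(PETABYTE // MEGABYTE)  # 1000000000 -> a petabyte is one billion megabytes
print(TERAFLOPS)             # 1000000000000 -> one trillion operations per second
```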

Dr. Paul Messina, Caltech's assistant vice president for scientific computing and director of Caltech's Center for Advanced Computing Research (CACR), has been named chief architect of NPACI. He will be responsible for the overall architecture of the project, including interaction mechanisms among the partners; deployment of infrastructure; and balance among partnership hardware and software systems, thrust area projects, and other user needs.

"Caltech is pleased to be a part of this historic initiative," said Thomas Everhart, president of Caltech. "Paul Messina and the CACR have made it possible for the Institute to contribute to the further development of our national computing infrastructure."

As a research partner, Caltech will contribute to a variety of software development projects; as a resource partner, the Institute will provide national access to some of the hardware used to develop the software infrastructure that links computers, data servers, and archival storage systems so that their aggregate computing power is easier to use. The NPACI award builds on the longstanding partnership between Caltech and NASA's Jet Propulsion Laboratory (JPL) in the area of high-performance computing. With both NSF and NASA support, Caltech will acquire a succession of parallel computers from Hewlett-Packard's Convex Division, including serial #1 of the HP/Convex SPP 3000, to be installed in 1999. Caltech's first machine, already housed on campus, will be a 256-processor Convex SPP 2000, with a peak speed of 184 Gflops, 64 GBytes of memory, and 1 TByte of online disk. (A 60-terabyte HPSS-based archival storage system will also be available to NPACI.)

"NPACI provides the opportunity to build a computational infrastructure that will enable scientific breakthroughs and new modes of computing," stated Dr. Messina. "As chief architect for the partnership, I look forward to the synergistic coupling of so many excellent scientists dedicated to creating an infrastructure that will profoundly impact future scientific endeavor by providing unprecedented new computational capabilities."

The NPACI metacomputing environment will consist of geographically separated, heterogeneous, high-performance computers, data servers, archival storage, and visualization systems linked together by high-speed networks so that their aggregate power may be applied to research problems that cannot be studied any other way. This environment will be extended to support "data-intensive computing." To that end, infrastructure will be developed to enable, for the first time, the analysis of multiple terabyte-sized data collections.

Installation of the Caltech HP Exemplar was made possible by a collaborative agreement between Caltech and Hewlett-Packard that will result in new technology, tools, and libraries to support the type of multidisciplinary research and metacomputing environment exemplified by the new NPACI award.

By providing access to new and emerging technologies for computing, networking, data storage, data acquisition, and archival functions as part of NPACI, Caltech's CACR will continue to pursue its application-driven approach to using and integrating new hardware and software technologies. In addition to the Caltech resources, UC Berkeley and UCSD will also initially provide alternate architectures to NPACI. These alternate architectures will be used to explore alternate performance or price/performance regimes, facilitate porting of software, assure competitive pricing from vendors, and provide a ready migration path should one of the alternatives become preeminent.

As the infrastructure enhancements needed to create the computing environment of the future include far more than hardware and network connections, Caltech is also contributing to NPACI through its participation in "thrust area teams," which will focus on key technologies and applications required for metacomputing, such as data-intensive computing; adaptable, scalable tools/environments; and interaction environments. Many aspects of the NPACI technology thrusts build upon projects initiated and led by Caltech's CACR, including the CASA Gigabit Testbed and the Scalable I/O Initiative.

The Digital Sky Project, led by Caltech professor Thomas A. Prince, CACR associate director, will be a primary data-intensive computing effort in NPACI. This innovative project will make early use of the NPACI resources. Large-area digital sky surveys are a recent and exciting development in astronomy. The combination of the NPACI Tflops/Tbyte computational resources with the recent large-area sky surveys supported by NSF and NASA at optical, infrared, and radio wavelengths will provide unparalleled new capability for astronomical research. The Digital Sky Project, anticipated to be used by the entire astronomical community of over 5,000 scientists and students, will provide a qualitatively different computational infrastructure for astronomical research.

Dr. Carl Kukkonen, manager of supercomputing at JPL, reports that NASA is excited about the opportunity to work with NPACI at the forefront of computing technologies. "We will use these computing resources to tackle the most challenging issues in spacecraft design and space science data analysis," said Kukkonen.

Rounding out Caltech participation in NPACI are efforts by Caltech faculty in three different NPACI applications thrusts: engineering, neuroscience, and molecular science. More specifically, infrastructure development in the areas of materials science, brain/neuron modeling, and molecular science will be led by Caltech professors William Goddard, Ferkel Professor of Chemistry and Applied Physics; James Bower, Associate Professor of Biology; and Aron Kuppermann, Professor of Chemical Physics, respectively.

More details on NPACI can be obtained at http://cacse.ucsd.edu/npaci.html. See http://hpcc997.external.hp.com/pressrel/nov96/18nov96h.htm for a press release on the Caltech-HP/Convex collaboration.

Writer: 
Robert Tindol

Question of the Week: Who Invented the Equal Sign, and Why?

Submitted by Pat Orr, Altadena, California, and answered by Tom Apostol, Professor of Mathematics Emeritus, Caltech.

According to the VNR Concise Encyclopedia of Mathematics, the equal sign was invented by Robert Recorde, the Royal Court Physician for England's King Edward VI and Queen Mary. Recorde, who lived from 1510 to 1558, was the most influential English mathematician of his day and, among other things, introduced algebra to his countrymen. He died in prison, although the record does not state why he was incarcerated. Hopefully, it had nothing to do with his mathematical activities.

Recorde first proposed the equal sign in a 1557 book named The Whetstone of Witte, in which he says, "And to avoide the tediouse repetition of these woordes: is equalle to: I will sette as I doe often in woorke use, a paire of paralleles, of Gemowe (or twin) lines of one lengthe, thus: = = = = = =, bicause noe 2 thynges can be moare equalle."

In addition to The Whetstone of Witte, Recorde also wrote a book on arithmetic called The Grounde of Artes (c. 1542), a book on popular astronomy called The Castle of Knowledge (1551), and a book on Euclidian geometry called The Pathewaie to Knowledge (1551). The Encyclopædia Britannica notes that Whetstone was his most influential work.

As for the reason he invented the equal sign, I think his own words say it best, if you can decipher the archaic language and odd spellings. Equal signs let us avoid tedious repetition and, as such, allow a shorthand symbol to show how unknown quantities relate to known quantities.

Writer: 
Robert Tindol

Caltech Scientists Invent Polymer For Detecting Blood Glucose

PASADENA— Scientists have designed a polymer that could vastly improve the way diabetics measure their blood glucose levels. The polymer is described in the current issue of Nature Biotechnology.

According to Dr. Frances Arnold, a professor of chemical engineering at the California Institute of Technology, the polymer is superior to the current enzyme-based glucose detectors because it is not of biological origin. The polymer will be easier to make and thus lead to cheaper and more reliable glucose sensors.

"This has the potential to help a lot of people, and that's what I find exciting," says Arnold. "A 1993 clinical study showed that if you monitor glucose carefully, the serious complications of diabetes such as gangrene and retinal damage could be reduced by 65 percent."

Arnold believes that her invention will improve the monitoring of glucose, especially for patients in developing countries of the world. Depending on the mechanisms devised for patient use, the polymer will likely be easy and cheap to manufacture and use, which could simplify and widen the practice of frequent testing of blood glucose throughout the day—a practice that many experts say is important to minimizing the complications of diabetes.

Also, the polymer will be more chemically stable and possibly less immunogenic in the human body than the enzymes currently available for glucose monitoring. This could make it more reliable for use in biosensors that remain in the body for extended periods.

At the heart of the polymer is a copper metal complex. The metal is held by a chelating agent that occupies three out of five or six possible "slots" for binding. The other two or three slots, however, can be used to indirectly measure glucose by examining the manner in which hydroxyl groups from the glucose bind.

"The net reaction with glucose is the release of a proton," Arnold explains. Ultimately, the polymer works with a pH meter because hydrogen ions are released from the polymer complex. More hydrogen ions means a more acidic solution (a lower pH), and an acidic response corresponds to high glucose levels in the blood.

And because the substance is nonbiological, it can bypass the blood's normal buffering capacity in order to work at optimal pH levels. This would allow for a simple and straightforward interaction with the blood that, coupled with the inexpensiveness of the materials, would allow for significant reductions in cost to the patient.

The cost reduction would be especially important in the Third World, where diabetes is on the rise.

Also involved in the research are Guohua Chen, a postdoctoral fellow at Caltech, and Vidyasankar Sundaresan, a Caltech graduate student. Former postdoctoral researchers on the project are Zhibin Guan and Chao-Tsen Chen.

Writer: 
Robert Tindol

Caltech Scientists Find Evidence For Massive Ice Age When Earth Was 2.4 Billion Years Old

PASADENA— Those who think the winter of '97 was rough should be relieved that they weren't around 2.2 billion years ago. Scientists have discovered evidence for an ice age at the time that was severe enough to partially freeze over the equator. In today's issue of Nature, California Institute of Technology geologists Dave Evans and Joseph Kirschvink report evidence that glaciers came within a few degrees of latitude of the equator when the planet was about 2.4 billion years old. They base their conclusion on glacial deposits discovered in present-day South Africa, plus magnetic evidence showing where South Africa's crustal plate was located at that time.

Based on that evidence, the Caltech researchers think they have documented the extremely rare "Snowball Earth" phenomenon, in which virtually the entire planet may have been covered in ice and snow. According to Kirschvink, who originally proposed the Snowball Earth theory, there have probably been only two episodes in which glaciation of the planet reached such an extent — one less than a billion years ago during the Neoproterozoic Era, and the one that has now been discovered from the Paleoproterozoic Era 2.2 billion years ago.

"The young Earth didn't catch a cold very often," says Evans, a graduate student in Kirschvink's lab. "But when it did, it seems to have been pretty severe."

The researchers collected their data by drilling rock specimens in South Africa and carefully recording the magnetic directions of the samples. From this information, the researchers then computed the direction and distance to the ancient north and south poles.

The conclusion was that the place in which they were drilling was 11 degrees (plus or minus five degrees) from the equator when Earth was 2.4 billion years old. Plate tectonic motions since that time have caused South Africa to drift all over the planet, to its current position at about 30 degrees south latitude. Additional tests showed that the samples were from glacial deposits, and further, were characteristic of a widespread region.
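
The latitude estimate rests on the kind of relation sketched below: for a geocentric axial dipole field, the magnetic inclination recorded in a rock fixes the latitude at which it formed. This is a textbook illustration of the method, not the authors' actual analysis, and the example inclination is hypothetical.

```python
# Sketch of the standard geocentric axial dipole relation linking a rock's measured
# magnetic inclination to the latitude at which it formed (illustration only).

import math

def paleolatitude_deg(inclination_deg):
    """Latitude (degrees) implied by a magnetic inclination, via tan(I) = 2 * tan(latitude)."""
    return math.degrees(math.atan(math.tan(math.radians(inclination_deg)) / 2.0))

# A shallow inclination corresponds to a near-equatorial latitude; for example,
# a hypothetical inclination of about 21 degrees gives a paleolatitude of roughly 11 degrees.
print(paleolatitude_deg(21.0))  # ~10.9 degrees
```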

Kirschvink and Evans say that the preliminary implications are that Earth can somehow manage to pull itself out of a period of severe glaciation. Because ice and snow tend to reflect sunlight much better than land and water, Earth would normally be expected to have a hard time reheating itself in order to leave an ice age. Thus, one would expect a Snowball Earth to remain frozen forever.

Yet, the planet obviously recovered both times from the severe glaciation. "We think it is likely that the intricacies of global climate feedback are not yet completely understood, especially concerning major departures from today's climate," says Evans. "If the Snowball Earth model is correct, then our planet has a remarkable resilience to abrupt shifts in climate.

"Somehow, the planet recovered from these ice ages, probably as a result of increased carbon dioxide — the main greenhouse gas."

Evans says that an asteroid or comet impact could have caused carbon dioxide to pour into the atmosphere, allowing Earth to trap solar energy and reheat itself. But evidence of an impact during this age, such as a remote crater, is lacking.

Large volcanic outpourings could also have released a lot of carbon dioxide, and other factors, such as sedimentary processes and biological activity, may have contributed as well.

At any rate, the evidence for the robustness of the planet and the life that inhabits it is encouraging, the researchers say. Not only did Earth pull itself out of both periods of severe glaciation, but many of the single-celled organisms that existed at the time managed to persevere.

Writer: 
Robert Tindol

State-of-the-Art Seismic Network Gets First Trial-by-Fire During This Morning's 5.4-magnitude Earthquake

PASADENA—Los Angeles reporters and camera crews responding to a 5.4-magnitude earthquake this morning got their first look at the new Caltech/USGS earthquake monitoring system.

The look was not only new but almost instantaneous. Within 15 minutes of the earthquake, Caltech seismologists had already printed out a full-color poster-sized map of the region to show on live TV, and had already posted the contour map on the Internet. Moreover, they were able to determine the magnitude of the event within five minutes — a tremendous improvement over the time it once took to confirm data.

"Today, we had a much better picture of how the ground responded to the earthquake than we've ever had in the past," said Dr. Lucile Jones, a U.S. Geological Survey seismologist who is stationed at Caltech. "This was the largest earthquake we've had since September of 1995, and was the first time we've been able to use the new instruments that we're still installing."

The new instruments are made possible by the TriNet Project, a $20.75-million initiative for providing a state-of-the-art monitoring network for Southern California. A scientific collaboration between Caltech, the USGS and the California Department of Conservation's Division of Mines and Geology, the project is designed to provide real-time earthquake monitoring and, ultimately, to lead to early-warning technology to save lives and mitigate urban damage after earthquakes occur.

"The idea of Trinet was to get quick locations and magnitudes out, to get quick estimates of the distribution of the ground shaking, and a prototype early-warning system," Caltech seismic analyst Egill Hauksson said an hour after this morning's earthquake. "The first two of those things are already in progress. We are in the midst of deploying hardware in the field and developing data-processing software." TriNet was announced earlier this year when funding was approved by the Federal Emergency Management Agency. The new system relies heavily on recent advances in computer communications technology and data processing.

The map printed out this morning (the ShakeMap) is just a preview of future TriNet products. Caltech seismologist Kate Hutton gave a number of TV interviews in front of the map this morning. The map was noteworthy not only for the speed with which it was produced, but also for the manner in which information about the earthquake was relayed.

Instead of charting magnitudes, the map was drawn so that the velocity at which the ground moved was shown with contour lines. The most rapid movement in the 5.4 earthquake this morning was about two inches per second at the epicenter, and this was clearly indicated in the innermost circle on the color map. Moving outward from the epicenter of the earthquake, the velocity of ground movement decreased, and this was indicated by lower velocity numbers in the outer circles.

The maps can also be printed out to show ground accelerations, which are especially useful for ascertaining likely damage in an earthquake area, Hutton said.

Eventually, TriNet will provide prototype early warnings to distant locations in the Los Angeles area that potentially damaging ground shaking is on the way. After an earthquake occurs, the seismic waves travel at a few kilometers per second, while an electronic alert can travel at the speed of light. Thus, Los Angeles could receive warning of a major earthquake on the San Andreas fault some 30 to 60 seconds before the heavy shaking actually begins in the city.
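
The arithmetic behind those 30 to 60 seconds is straightforward, as the back-of-the-envelope sketch below shows. The wave speed, processing delay, and distance used here are illustrative assumptions, not TriNet specifications.

```python
# Back-of-the-envelope sketch of the early-warning idea described above.
# The numbers are illustrative assumptions, not TriNet specifications.

def warning_time_seconds(distance_km, shaking_wave_speed_km_s=3.5, alert_delay_s=5.0):
    """Seconds of warning a distant city gets before strong shaking arrives.

    The damaging seismic waves travel a few kilometers per second, while an electronic
    alert is effectively instantaneous; alert_delay_s allows for detection and
    processing time near the epicenter.
    """
    return distance_km / shaking_wave_speed_km_s - alert_delay_s

# A rupture on the San Andreas fault roughly 150 km from downtown Los Angeles:
print(warning_time_seconds(150))  # ~38 seconds of warning
```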

The total cost of the project is $20.75 million. FEMA will provide $12.75 million, the USGS has provided $4.0 million, and the balance is to be matched by Caltech ($2.5 million) and the DOC ($1.75 million). Several private sector partners, including GTE and Pacific Bell, are assisting Caltech with matching funds for its portion of the TriNet balance.

The TriNet Project is being built upon existing networks and collaborations. Southern California's first digital network began with the installation of seismographs known as TERRAscope, and was made possible by a grant from the L.K. Whittier Foundation and the ARCO Foundation. Also, Pacific Bell through its CalREN Program has provided new frame-relay digital communications technology.

A major step in the modernization came in response to the Northridge earthquake, when the USGS received $4.0 million from funds appropriated by Congress to the National Earthquake Hazard Reduction Program. This money was the first step in the TriNet project and the USGS has been working with Caltech for the last 27 months to begin design and implementation. Significant progress has already been made and new instrumentation is now operational:

o Thirty state-of-the-art digital seismic stations are operating with continuous communication to Caltech/USGS

o Twenty strong-motion sites have been installed near critical structures

o Two high-rise buildings have been instrumented

o Alarming and processing software have been designed and implemented

o Automated maps of contoured ground shaking are available on the Web within a few minutes after felt and damaging earthquakes (http://www-socal.wr.usgs.gov).

DOC's strong motion network in Southern California is a key component of the TriNet Project, contributing 400 of the network's 650 sensing stations. The expansion and upgrade of DOC's network funded through this project will yield much better information about strong shaking than was available for the Northridge earthquake. This data is the key to improving building codes for more earthquake-resistant structures.

Writer: 
Robert Tindol

Caltech Question of the Week: Do Earth's Plates Move In a Certain Direction?

Submitted by Frank Cheng, Alhambra, California, and answered by Joann Stock, Associate Professor of Geology and Geophysics, Caltech.

Each plate is moving in a different direction, but the exact direction depends on the "reference frame," or viewpoint, in which you are looking at the motion. The background to this question is the fact that there are 14 major tectonic plates on Earth: the Pacific, North America, South America, Eurasia, India, Australia, Africa, Antarctica, Cocos, Nazca, Juan de Fuca, Caribbean, Philippine, and Arabia.

Each plate is considered to be "rigid," which means that the plate is moving as a single unit on the surface of Earth. We can describe the relative motion between any pair of plates. For example, the North America plate and the Eurasia plate are moving away from each other in the North Atlantic Ocean, resulting in seafloor spreading along the mid-Atlantic ridge, which is the boundary between these two plates. In this case, if you imagine Eurasia to be fixed, the North America plate would be moving west.

But it is equally valid to imagine that the North America plate is fixed, in which case the Eurasia plate would be moving east. If you think about the Pacific–North America plate boundary (along the San Andreas fault in Southern California), the motion of the North America plate is different; the North America plate is moving southeast relative to the Pacific plate.

This doesn't mean that the North America plate is moving in different directions at once. The difference is due to the change of reference frame, from the Eurasia plate to the Pacific plate.

Sometimes we describe plate motions in terms of other reference frames that are independent of the individual plates, such as some external (celestial) reference frame or more slowly moving regions of Earth's interior. In this case, each plate has a unique motion, which may change slowly over millions of years.

Technically, the plate motion in any reference frame is described by an angular velocity vector. This corresponds to the slow rotation of the plate about an axis that goes from Earth's center along an imaginary line to the "pole" of rotation somewhere on Earth's surface.
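
To make the angular-velocity description concrete, the sketch below computes the speed of a point on a rigid plate rotating about a pole. The pole location and rotation rate are hypothetical illustration values, not a published plate-motion model.

```python
# Minimal sketch of the angular-velocity description above; the pole and rate are hypothetical.

import numpy as np

EARTH_RADIUS_MM = 6.371e9  # Earth's mean radius in millimeters

def unit_vector(lat_deg, lon_deg):
    """Earth-centered unit vector for a latitude/longitude given in degrees."""
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])

def plate_speed_mm_per_yr(pole_lat, pole_lon, rate_deg_per_myr, site_lat, site_lon):
    """Speed (mm/yr) of a site on a rigid plate rotating about the given pole.

    The surface velocity is the cross product of the plate's angular velocity vector
    with the site's position vector; this function returns its magnitude.
    """
    omega = (np.radians(rate_deg_per_myr) / 1.0e6) * unit_vector(pole_lat, pole_lon)  # rad/yr
    r = EARTH_RADIUS_MM * unit_vector(site_lat, site_lon)                             # mm
    return float(np.linalg.norm(np.cross(omega, r)))

# Hypothetical pole at 50 N, 75 W rotating at 0.75 degrees per million years,
# evaluated for a site near Los Angeles (34 N, 118 W):
print(plate_speed_mm_per_yr(50.0, -75.0, 0.75, 34.0, -118.0))  # a few centimeters per year (~48 mm/yr)
```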

Writer: 
Robert Tindol

Researchers Establish Upper Limit of Temperature at the Core-mantle Boundary of Earth

PASADENA— Researchers at the California Institute of Technology have determined that Earth's mantle reaches a maximum temperature of 4,300 degrees Kelvin. The results are reported in the March 14, 1997, issue of the journal Science.

According to geophysics professor Tom Ahrens and graduate student Kathleen Holland, the results are important for setting very reliable bounds on the temperature of Earth's interior. Scientists need to know very precisely the temperature at various depths in order to better understand large-scale processes such as plate tectonics and volcanic activity, which involves movement of molten rock from the deep interior of the Earth to the surface.

"This nails down the maximum temperature of the lower-most mantle, a rocky layer extending from a depth of 10 to 30 kilometers to a depth of 2900 kilometers, where the molten iron core begins," Ahrens says. "We know from seismic data that the mantle is solid, so it has to be at a lower temperature than the melting temperature of the materials that make it up."

In effect, the research establishes the melting temperature of the high-pressure form of the crystal olivine. At normal pressures, olivine is known by the formula (Mg,Fe)2SiO4, and is a semiprecious translucent green gem. At very high pressures, olivine breaks down into magnesiowüstite and a mineral with the perovskite structure. Together these two minerals are thought to make up the bulk of the materials in the lower mantle.

The researchers achieved these ultra-high pressures in their samples by propagating a shock wave into them, using a high-powered cannon apparatus, called a light-gas gun. This gun launches projectiles at speeds of up to 7 km/sec. Upon impact with the sample, a strong shock wave causes ultra-high pressures to be achieved for only about one-half a millionth of a second. The researchers have established the melting temperature at a pressure of 1.3 million atmospheres. This is the pressure at the boundary of the solid lower mantle and liquid outer core.

"We have replicated the melting which we think occurs in the deepest mantle of the Earth," says Holland, a doctoral candidate in geophysics at Caltech. "This study shows that material in the deep mantle can melt at a much lower temperature than had been previously estimated. It is exciting that we can measure phase transitions at these ultra-high pressures."

The researchers further note that the temperature of 4,300 degrees would allow partial melting in the lowest 40 kilometers or so of the lower mantle. This agrees well with seismic analysis of wave forms conducted in 1996 by Caltech Professor of Seismology, Donald Helmberger, and his former graduate student, Edward Garnero. Their research suggests that at the very lowest reaches of the mantle there is a partially molten layer, called the Ultra-Low-Velocity-Zone.

"We're getting into explaining how such a thin layer of molten rock could exist at great depth," says Ahrens. "This layer may be the origin layer that feeds mantle plumes, the volcanic edifices such as the Hawaiian island chain and Iceland. "We want to understand how Earth works."

Writer: 
Robert Tindol

Caltech Geologists Find New Evidence That Martian Meteorite Could Have Harbored Life

PASADENA—Geologists studying Martian meteorite ALH84001 have found new support for the possibility that the rock could once have harbored life.

Moreover, the conclusions of California Institute of Technology researchers Joseph L. Kirschvink and Altair T. Maine, and McGill University's Hojatollah Vali, also suggest that Mars had a substantial magnetic field early in its history.

Finally, the new results suggest that any life on the rock existing when it was ejected from Mars could have survived the trip to Earth.

In an article appearing in the March 13 issue of the journal Science, the researchers report that their findings have effectively resolved a controversy about the meteorite that has raged since evidence for Martian life was first presented in 1996. Even before this report, other scientists suggested that the carbonate globules containing the possible Martian fossils had formed at temperatures far too hot for life to survive. All objects found on the meteorite, then, would have to be inorganic.

However, based on magnetic evidence, Kirschvink and his colleagues say that the rock has certainly not been hotter than 350 degrees Celsius in the past four billion years—and probably has not been above the boiling point of water. At these low temperatures, bacterial organisms could conceivably survive.

"Our research doesn't directly address the presence of life," says Kirschvink. "But if our results had gone the other way, the high-temperature scenario would have been supported."

Kirschvink's team began their research on the meteorite by sawing a tiny sample in two and then determining the direction of the magnetic field held by each half. This work required the use of an ultrasensitive superconducting magnetometer system, housed in a unique, nonmagnetic clean lab facility. The team's results showed that the sample in which the carbonate material was found had two magnetic directions—one on each side of the fractures.

The distinct magnetic directions are critical to the findings, because any weakly magnetized rock will reorient its magnetism to align with the local field direction after it has been heated to high temperatures and cooled. If two rock fragments with different magnetic directions are attached to each other and then heated above a certain critical temperature, they will end up with a uniform direction.

The igneous rock (called pyroxenite) that makes up the bulk of the meteorite contains small inclusions of magnetic iron sulfide minerals that will entirely realign their field directions at about 350°C, and will partially align the field directions at much lower temperatures. Thus, the researchers have concluded that the rock has never been heated substantially since it last cooled some four billion years ago.

"We should have been able to detect even a brief heating event over 100 degrees Celsius," Kirschvink says. "And we didn't."

These results also imply that Mars must have had a magnetic field similar in strength to that of the present Earth when the rock last cooled. This is very important for the evolution of life, as the magnetic field will protect the early atmosphere of a planet from being sputtered away into space by the solar wind. Mars has since lost its strong magnetic field, and its atmosphere is nearly gone.

The fracture surfaces on the meteorite formed after it cooled, during an impact event on Mars that crushed the interior portion. The carbonate globules that contain putative evidence for life formed later on these fracture surfaces, and thus were never exposed to high temperatures, even during their ejection from the Martian surface nearly 15 million years ago, presumably from another large asteroid or comet impact.

A further conclusion one can reach from Kirschvink's work is that the inside of the meteorite never reached high temperatures when it entered Earth's atmosphere. This means, in effect, that any remaining life on the Martian meteorite could have survived the trip from Mars to Earth (which can take as little as a year, according to some dynamic studies), and could have ridden the meteorite down through the atmosphere by residing in the interior cracks of the rock and been deposited safely on Earth.

"An implication of our study is that you could get life from Mars to Earth periodically," Kirschvink says. "In fact, every major impact could do it." Kirschvink's suggested history of the rock is as follows:

The rock crystallized from an igneous melt some 4.5 billion years ago and spent about half a billion years on the primordial planet, being subjected to a series of impact-related metamorphic events, which included formation of the iron sulfide minerals.

After final cooling in the ancient Martian magnetic field about four billion years ago, the rock would have had a single magnetic field direction. Following this, another impact crushed parts of the meteorite without heating it, and caused some of the grains in the interior to rotate relative to each other. This led to a separation of their magnetic directions and produced a set of fracture cracks. Aqueous fluids later percolated through these cracks, perhaps providing a substrate for the growth of Martian bacteria. The rock then sat more or less undisturbed until a huge asteroid or comet smacked into Mars 15 million years ago. The rock wandered in space until about 13,000 years ago, when it fell on the Antarctic ice sheet.

Writer: 
Robert Tindol
