Geophysicists Develop Model to Describe Huge Gravity Anomaly of Hudson Bay Region

PASADENA—While the gravity field of Earth is commonly thought of as constant, in reality there are small variations in the gravitational field as one moves around the surface of the planet.

These variations have typical magnitudes of about one ten-thousandth of the average gravitational attraction, which is approximately 9.8 meters per second per second. A global map of these variations shows large undulations at a variety of length scales. These undulations are known as gravity anomalies.
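
To put that figure in the unit geophysicists favor, the milligal (1 mGal = 10^-5 meters per second per second), a one ten-thousandth variation works out to roughly 100 mGal. The short Python sketch below simply carries out that arithmetic, using only the figures quoted above.

```python
# Back-of-envelope conversion of a typical gravity anomaly into milligals.
# Uses only the figures quoted in the text: mean gravity ~9.8 m/s^2 and
# variations of roughly one ten-thousandth of that value.

g_mean = 9.8        # average gravitational attraction, m/s^2
fraction = 1e-4     # typical anomaly as a fraction of the mean

anomaly = g_mean * fraction        # in m/s^2
anomaly_mgal = anomaly / 1e-5      # 1 mGal = 1e-5 m/s^2

print(f"Typical anomaly: {anomaly:.1e} m/s^2 = {anomaly_mgal:.0f} mGal")
# -> Typical anomaly: 9.8e-04 m/s^2 = 98 mGal
```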

There are many such anomalies in Earth's gravity field, but one of the largest negative gravity anomalies (implying that the attraction of gravity is slightly less than average, or in other words, a mass deficit) is centered over Hudson Bay, Canada. Using a new approach to analyzing planetary gravity fields, two geophysicists, Mark Simons at the California Institute of Technology and Bradford Hager at M.I.T., have shown that incomplete glacial rebound can account for a substantial portion of the Hudson Bay gravity anomaly.

With this new information, Simons and Hager were able to place new constraints on the variations in strength of the materials that constitute the outer layers of Earth's interior (the crust and mantle). Their work appears in the December 4 issue of the journal Nature.

About 18,000 years ago, Hudson Bay was at the center of a continental-sized glacier. Known as the Laurentide ice sheet, this glacier had a thickness of several kilometers. The weight of the ice bowed the surface of Earth down. The vast majority of the ice eventually melted at the end of the Ice Age, leaving a depression in its wake.

While this depression has endured for thousands of years, it has been gradually recovering or "flattening itself out." The term "glacial rebound" refers to this exact behavior, whereby the land in formerly glaciated areas rises after the ice load has disappeared.

Evidence of this is seen in coastlines located near the center of the former ice sheet. These coastlines have already risen several hundred meters and will continue to rebound.

"The rate at which the area rebounds is a function of the viscosity of Earth," says Simons. "By looking at the rate of rebound going on, it's possible to learn about the planet's viscosity."

Simons says that geophysicists have known for some time about the Hudson Bay gravity anomaly, but have hitherto been uncertain how much of the gravity anomaly is a result of glacial rebound and how much is due to mantle convection or other processes.

The gravity anomaly is measured both from the ground and from space. Simons and Hager use a gravity data set developed by researchers at the Goddard Space Flight Center.

However, knowing how much of an anomaly exists at a certain site on Earth is not sufficient to determine the pliability of the materials beneath it. For this, Simons and his former M.I.T. colleague Hager have developed a new mathematical tool that looks at the spatial variations of the spectrum of the gravity field.

In many instances, this approach allows one to separate the signatures of geologic processes that occur at different locations on Earth. In particular, Simons and Hager were able to isolate the glacial rebound signature from signatures of other processes, such as manifestations of plate tectonics, that dominate the gravity field but are concentrated at other geographic locations.
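
The actual method involves localized spectra of the gravity field on a sphere, which is beyond the scope of a short example, but the core idea, that a spectrum can be estimated window by window rather than globally, is easy to show in one dimension. The sketch below is an analogy only, not the authors' algorithm: two oscillations of different frequency live at different places in a signal, and windowed spectra pick out which process dominates at each location.

```python
import numpy as np

# 1-D analogy of spatio-spectral localization: a global spectrum mixes the
# two features together, but spectra computed in local windows separate
# them by position.

x = np.linspace(0, 100, 4000)
signal = (np.exp(-((x - 25) / 8) ** 2) * np.sin(2 * np.pi * 0.5 * x)    # low-frequency feature near x = 25
          + np.exp(-((x - 75) / 8) ** 2) * np.sin(2 * np.pi * 2.0 * x)) # high-frequency feature near x = 75

def local_peak_frequency(center, width=15.0):
    """Dominant frequency of the signal seen through a Gaussian window."""
    window = np.exp(-((x - center) / width) ** 2)
    spectrum = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])
    return freqs[spectrum.argmax()]

print(local_peak_frequency(25))  # ~0.5 cycles/unit: the "glacial rebound" stand-in
print(local_peak_frequency(75))  # ~2.0 cycles/unit: the "plate tectonics" stand-in
```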

Having an estimate of incomplete postglacial rebound allowed Simons and Hager to derive a model of how the viscosity of the mantle changes with depth. Simons and Hager propose one such model that explains both the gravity anomaly and the uplift rates estimated from the coastlines.

Their favored model suggests that underneath the oldest parts of continents (some of which are over 4 billion years old), the outer 400 kilometers of Earth is much stiffer than it is under the oceans. These continental keels can therefore resist erosion by the convective flow that drives plate tectonics.

Writer:
Robert Tindol

Caltech Question of the Month

Question of the Month submitted by John Propst, Fullerton, California.

Answered by Ken Libbrecht, Caltech professor of physics.

Light always travels at the speed of light; it is created already traveling at that speed. When Einstein formulated the theory of special relativity, he postulated that the speed of light is a constant.

If you carry a flashlight on a moving train, the photons travel out from it at the same constant speed whether you measure them from the ground or from the train. This fact cannot be derived from anything more fundamental, so it stands as a basic law of physics.

We don't know why nature chooses to operate this way, but many measurements have shown that Einstein's postulate is very accurately followed.

Writer:
RT

Caltech Biologists Pin Down Chain of Reactions That Turn On the Duplication of DNA

PASADENA—Caltech biologists have pinpointed the sequence of reactions that triggers the duplication of DNA in cells.

In companion papers appearing in recent issues of the journals Science and Cell, Assistant Professor of Biology Raymond Deshaies and his colleagues describe the chain of events that leads to the copying of chromosomes in a baker's yeast cell. Baker's yeast is often used as a model for human cells, so the research could have future implications for technology aimed at controlling cell reproduction, such as cancer treatments.

"We've provided a bird's-eye view of how a cell switches on the machinery that copies DNA," says Deshaies. "These principles can now be translated into a better understanding of how human cells proliferate."

The group's research focuses primarily on how cells copy and segregate their chromosomes during the process of duplicating one cell into two. The new papers are concerned with how cells enter the DNA synthesis phase, during which the chromosomes are copied.

For years, cell biologists have sought to identify the precise chemical events that set off these reactions. The cell cycle is fundamental to the growth and division of all cells, but the process is somehow ramped down once the organism reaches maturity.

The paper appearing in Science describes how DNA synthesis is turned on. In the preceding stage (known as G1), proteins named G1 cyclins trigger the destruction of an inhibitor that keeps DNA synthesis from beginning.

This inhibitor sequesters an enzyme referred to as S-CDK (for DNA synthesis-promoting cyclin-dependent kinase), thereby blocking its action. Once the S-CDK is released, it switches on DNA synthesis. The S-CDK is present before the copying of DNA begins, but the DNA copying is not turned on until the S-CDK is freed of its inhibitor. The Deshaies group has shown that several phosphates are attached to the S-CDK inhibitor. These phosphates act as a molecular Velcro, sticking the inhibitor to yet another set of proteins called SCF.

The Cell paper essentially picks up the description of the cell cycle at this point. The SCF, which acts like a molecular "hit man," promotes the attachment of another protein, ubiquitin. Ubiquitin in turn attracts the cellular garbage pail, the proteasome. The inhibitor is disposed of in the proteasome, thereby freeing the S-CDK, which goes on to stimulate DNA duplication.

The process described above is quite complicated even in this condensed form, and actually is considerably more complicated in its technical details. But the detailed description that Deshaies and his colleagues have achieved is important fundamental science that could have technological implications in the future, Deshaies says.

"This traces the ignition of DNA synthesis down to a relatively small set of proteins," he says. "Any time you figure out how a part of the cell division machinery works, you can start thinking about devising new strategies to turn it on and off."

It is a precise turning on and off of DNA replication, many researchers think, that will someday be the key to better and more specific cancer-fighting drugs. Because a tumor is a group of cells that never stops the cell duplication cycle, a greater understanding of the cycle itself is almost certain to be a factor in further medical advances in cancer treatment.

"It could be five to 10 years, but this work could point the way to new cancer-fighting drugs," Deshaies says. "It is much easier to begin a rational approach to developing new treatments for cancer if you are armed with fundamental insights into how the cellular machinery works."

The other authors on the paper in the October 17 issue of Cell are R. M. Renny Feldman, a Caltech graduate student in biology; Craig C. Correll, a Caltech postdoctoral scholar in biology; and Kenneth B. Kaplan, a postdoctoral researcher at MIT.

The other authors of the Science paper from the October 17 issue are Rati Verma, a senior research fellow at Caltech; Gregory Reynard, a Caltech technician; and R. S. Annan, M. J. Huddleston, and S. A. Carr, all of the Research Mass Spectrometry Laboratory at SmithKline Beecham Pharmaceuticals in King of Prussia, Pennsylvania.

Writer:
Robert Tindol

Caltech Scientists Devise First Neurochip

NEW ORLEANS—Caltech researchers have invented a "neurochip" that connects a network of living brain cells to electrodes incorporated into a silicon chip.

The neurochips are being unveiled today at the annual meeting of the Society for Neuroscience, which is being held in New Orleans the week of October 25-30. According to Dr. Jerome Pine, one of the five coinventors of the neurochip, the technology is a major step forward for studying the development of neural networks.

The neurons used in the network are harvested from the hippocampus of rat embryos. Once the cells have been separated by a protein-eating enzyme, each is individually inserted into a well, about half the diameter of a human hair, in the silicon chip. The cell is spherical when inserted and slightly smaller in diameter than the well. Once set in place and fed nutrients, it grows dendrites and an axon that spread out of the well.

In doing so, each neuron remains close to a single recording and stimulating electrode within its well, and also links up with the dendrites and axons of neurons in nearby wells.

According to Michael Maher, one of the coinventors, the neurochip currently has room for 16 neurons, which appear to develop normal connections with each other. "When the axons meet dendrites, they make an electrical connection," says Maher, who left Caltech in September to assume a postdoctoral appointment at UC San Diego. "So when one neuron fires, information is transmitted to the next neuron."

The neurochip network will be useful in studying the ways in which neurons maintain and change the strengths of their connections, Maher adds. "It's believed that memory in the brain is stored in the strength of these connections.

"This is pretty much a small brain connected to a computer, so it will be useful in finding out how a neural network develops and what its properties are. It will also be useful for studying chemical reactions at the synapses for weeks at a time. With conventional technology, you can record directly from at most a few neurons for at most a couple of hours."

There are two challenges facing the researchers as they attempt to improve the neurochips. One is providing the set of growth factors and nutrients to keep the cells alive for long periods of time. At present, two weeks is the limit.

The second challenge is finding a way to insert the cells in the silicon wells in a less time-consuming way. At present, the technique is quite labor intensive and requires a highly skilled technician with considerable patience and dexterity.

Other than the sheer effort involved, however, there is no reason that millions of cells could not be linked together at present, Maher says.

The other Caltech coinventors of the neurochip are Hanna Dvorak-Carbone, a graduate student in biology; Yu-Chong Tai, an associate professor of electrical engineering; and Tai's student, John Wright. The latter two are responsible for the silicon fabrication.

Writer:
Robert Tindol

3-D Images of Martian Terrain To Be Shown To the Tune of Holst

PASADENA—Dramatic 3-D pictures of Mars will be shown during a performance of Gustav Holst's "Mars" at the first Caltech-Occidental Concert Band performance of the season.

The Concert Band is conducted by William Bing.

The concert begins at 8 p.m. Saturday, November 15, in Caltech's Beckman Auditorium. The concert is free and open to the public, with seating available on a "first come" basis.

The pictures are those returned to Earth by Mars Pathfinder since its July 4 landing. Because the pictures were taken from slightly different angles, team scientists have been able to supply a number of high-quality 3-D pictures for public viewing.

The pictures are supplied by Rob Manning, who was chief engineer on the Pathfinder mission. Manning is also a trumpet player with the Caltech Jazz Band and a Caltech alumnus.

In addition to the performance of "Mars" from Holst's suite The Planets, the program will include music by George Gershwin and excerpts from the musical Ragtime.

Guest conductor Daniel Kessner will conduct one of his own compositions. This special work was recently commissioned by a consortium of colleges, including Caltech.

Writer:
Robert Tindol

First Fully Automatic Design of a Protein Achieved by Caltech Scientists

PASADENA—Caltech scientists have found the Holy Grail of protein design. In fact, they've snatched it out of a giant pile of 1.9 x 10^27 other chalices.

In the October 3 issue of the journal Science, Stephen L. Mayo, assistant professor of biology and a Howard Hughes Medical Institute assistant investigator, and chemistry graduate student Bassil I. Dahiyat report on their success in constructing a protein of their choice from scratch.

Researchers for some time have been able to create proteins in the lab by stringing together amino acids, but this has been a very hit-and-miss process because of the vast number of ways that the 20 amino acids found in nature can go together.

The number 1.9 x 10^27, in fact, is the number of slightly different chains that 28 amino acids can form. And because slight differences in the geometry of protein chains are responsible for biological functions, total control of formation is necessary to create new biological materials of choice.

By using a Silicon Graphics supercomputer to sort through all possible combinations for a selected protein, Mayo and Dahiyat have identified the target protein's best possible amino acid sequence. They then used this knowledge to create the protein in the lab with existing technical processes.

This is a first, says Mayo. "Our goal has been to design brand-new proteins that do what we want them to do. This new result is the first major step in that direction. Moreover, it shows that a computer program is the way to go in creating biological materials."

The technique they use, automated protein design, combines experimental synthesis of molecules with supercomputer-powered computational chemistry.

Proteins are the molecular building blocks of all living organisms. Composed of various combinations of the 20 amino acids, protein molecules can comprise anywhere from a few hundred atoms to literally millions. Most proteins involved in life processes have at least 100 amino acids, Mayo says.

Mayo and Dahiyat, who have been working on this research for five years, have developed a system that automatically determines the string of amino acids that will fold to most nearly duplicate the 3-D shape of a target structure. The system calculates a sequence's 3-D shape and evaluates how closely this matches the 3-D structure of the target protein.

One problem the researchers face is the sheer number of combinations needed to design a protein of choice. The protein that is the subject of this week's Science paper is a fragment of a fairly inconspicuous molecule involved in gene expression, and as such has only 28 amino acids. Even this small number demands a prodigious amount of computational power. A more desirable protein might involve 100 amino acids, which would allow the staggering number of 10^130 possible amino acid sequences.
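
The arithmetic behind that figure is straightforward to check: 20 possible amino acids at each of 100 independent positions gives 20^100 sequences, and since log10(20) is about 1.3, that is roughly 10^130.

```python
import math

# Sanity check of the combinatorics quoted above: 20 amino acid choices
# at each of 100 positions gives 20**100 possible sequences.
digits = 100 * math.log10(20)
print(f"20^100 is about 10^{digits:.0f}")  # -> 20^100 is about 10^130
```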

Because this number is larger than the number of atoms in the universe, the researchers have had to find clever computational strategies to circumvent the impossible task of grinding out all the calculations.

In this case, the fastest way to the answer is by working backward. Starting with all the amino acid sequences possible for the protein, the computer program finds arrangements of amino acids that are a bad fit to the target structure. By repeatedly searching for, and eliminating, poorly matching amino acid combinations, the system rapidly converges on the best possible sequence for the target.
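
The release does not name the search algorithm, but the strategy it describes, discarding choices that provably cannot belong to the best solution, matches the dead-end elimination technique widely used in protein design. A minimal sketch under that assumption appears below: a candidate r at position i is pruned whenever even its best-case interaction energy is worse than some alternative t's worst case. The data structures and names here are hypothetical, not the authors' code.

```python
# Minimal sketch of dead-end elimination (DEE), assuming precomputed energies:
# self_E[i][r] is the energy of choice r at position i, and pair_E[i][r][j][s]
# is the interaction energy between choice r at i and choice s at j (stored
# in both directions). candidates maps each position to its surviving choices.

def dee_prune(self_E, pair_E, candidates):
    """Repeatedly remove choices that can never appear in the optimal sequence."""

    def best_case(i, r):   # most favorable total energy choice r could achieve
        return self_E[i][r] + sum(
            min(pair_E[i][r][j][s] for s in candidates[j])
            for j in candidates if j != i)

    def worst_case(i, t):  # least favorable total energy choice t could suffer
        return self_E[i][t] + sum(
            max(pair_E[i][t][j][s] for s in candidates[j])
            for j in candidates if j != i)

    changed = True
    while changed:         # keep sweeping: each pruning pass can enable more
        changed = False
        for i in candidates:
            doomed = {r for r in candidates[i]
                      if any(best_case(i, r) > worst_case(i, t)
                             for t in candidates[i] if t != r)}
            if doomed:
                candidates[i] -= doomed
                changed = True
    return candidates
```

Each pass shrinks the search space, and because the test is conservative, nothing that could be part of the true optimum is ever discarded; this is what lets the search converge without enumerating every sequence.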

Subsequently, the simulation can be used to find other sequences that are nearly as good a fit as the best one.

This process has been honed by designing sequences for several different proteins, synthesizing them in the laboratory, and testing their actual properties.

With their innovative strategy, Mayo and Dahiyat are now reproducing proteins that are very similar to the target molecules. (The accompanying illustration shows how closely the protein they have formulated matches the target protein.)

But the goal is not just to create the proteins that already exist in nature. The researchers can actually improve on nature in certain circumstances. By making subtle changes in the amino acid sequence of a protein, for example, they are able to make a molecule more stable in harsh chemicals or hot environments (proteins tend to change irreversibly with a bit of heat, as anyone who has cooked an egg can attest).

"Our technology can actually change the proteins so that they behave a lot better," said Dahiyat, who recently finished his Caltech doctorate in chemistry and will now head Xencor, a start-up company established to commercialize the technology. The ability to create new proteins, and to adapt existing proteins to different environments and functions, could have profound implications for a number of emerging fields in biotechnology.

And, of course, it could help further the understanding of living processes.

"Paraphrasing Richard Feynman, if you can build it, you can understand it," says Mayo. "We think we can soon achieve a better understanding of proteins by going into a little dark room and building them to do exactly what we want them to do."

Writer:
Robert Tindol

Possible Planet-Forming Disk Imaged by Caltech Radio Astronomers

PASADENA—A giant disk of gas and dust over 10 times the size of our own solar system has been detected rotating around a young star in the constellation of Auriga. The star is both more massive and brighter than our sun, and appears to be a young version of another star called Beta Pictoris, where astronomers have long suspected the presence of planets.

The new discovery was made by radio astronomers at the California Institute of Technology using the millimeter-wave array at Caltech's Owens Valley Radio Observatory in central California. The results appear in the current issue of the journal Nature, and concern a relatively massive star known as MWC480, which is about 450 light-years from Earth.

How prevalent is planet formation around young stars? Past work had shown that stars similar to our own sun possess protoplanetary disks in their youth, disks we believe will form planets, perhaps as our own solar system did. However, little was known about the propensity of disks to form planets around stars that are more massive than our sun.

According to Vince Mannings, the paper's first author, the new results provide unprecedentedly clear evidence for the presence of a rotating disk of gas surrounding MWC480, and support earlier indications of rotating disks encircling some less massive and young sunlike stars. Not only is the gas around MWC480 clearly discernible at radio wavelengths, he says, but the orbital rotation of the entire disklike cloud is also unambiguously observed.

The presence of rotation suggests that, as for the disks around the young sunlike stars, the disk structure around MWC480 is long-lived. Indeed, this massive reservoir of orbiting material could last long enough to form new planets. "Families of planets, perhaps resembling our own solar system, are thought to originate in such disks," says Mannings. "Our sun, when very young, possibly had a disk similar to that around MWC480."

The star in the middle of the MWC480 disk resembles a much older star called Beta Pictoris, which is surrounded by a comparatively lightweight "debris disk," probably composed in part of dust-grain remnants from processes connected with an earlier phase of planet building. The new results imply that, in its youth, Beta Pictoris may have possessed a massive disk comparable to that now identified around MWC480. Beta Pictoris might have been, effectively, a "planetary construction site," says Mannings.

Other members of the research team are David Koerner, an astronomer at the Caltech/NASA Jet Propulsion Lab, and Anneila Sargent, who is executive director of Caltech's Owens Valley Radio Observatory.

Mannings says, "We believe that the amount of material in this disk is sufficient to produce a system of planets. We detect enough gas and dust to build planets with the same total mass as that of the nine planets in our own solar system. But we emphasize that the possibility of planet building within this particular disk is speculation only."

The radio image is sufficiently detailed to show that the large disk of gas and dust is tilted about 30 degrees from face-on. A tantalizing aspect of the image is that the rotation of the disk can be detected by measuring the velocities of the gas, most of which is in the form of molecular hydrogen. About 1 percent of the disk is dust grains, and just a trace amount of the material is carbon monoxide. The hydrogen is not detected directly, but the gas velocities can be probed using spectral-line radio waves emitted by the carbon monoxide. The Caltech measurements demonstrate that gas south of the star travels approximately toward us, while gas north of the star travels away from us. From our vantage point, the disk is inferred to be rotating roughly from south to north.
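
The velocity measurement rests on the ordinary Doppler relation: a spectral line emitted at rest frequency f0 but observed at frequency f implies a line-of-sight velocity v of roughly c(f - f0)/f0. The sketch below applies that relation to the commonly observed CO J=1-0 line; the frequency shift is invented for illustration, since the release does not quote the measured values.

```python
# Doppler estimate of gas velocity from a shifted carbon monoxide line.
# The rest frequency is the CO J=1-0 transition; the observed shift below
# is a made-up example, not a measurement from the MWC480 data.

C_KM_S = 299_792.458     # speed of light, km/s
F_REST = 115.2712e9      # CO J=1-0 rest frequency, Hz

def radial_velocity(f_observed_hz):
    """Line-of-sight velocity in km/s; positive means moving toward us."""
    return C_KM_S * (f_observed_hz - F_REST) / F_REST

# A hypothetical 1.5 MHz blueshift, typical of a few km/s of orbital motion:
print(f"{radial_velocity(F_REST + 1.5e6):.2f} km/s toward us")  # ~3.90 km/s
```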

For the first time, astronomers have clearly identified a young massive disk that could gradually evolve into a debris disk such as the one surrounding the older star Beta Pictoris, perhaps building planets along the way. By studying stars like MWC480, say Mannings, Koerner, and Sargent, we can hope to learn not only about the origins of the Beta Pictoris debris disk, but perhaps about the beginnings of our own solar system as well. Astronomers have targeted nearby sunlike stars in their searches for new planets, but this discovery shows that brighter stars should also be included.

Caltech Question of the Week: Will Future Mars Colonies Utilize Local Martian Rocks and Soil for Building Materials?

Question submitted through e-mail by Dave Cooley, Costa Mesa, California, and answered by Albert Yen, Caltech graduate student in planetary science, and Peter Goldreich, professor of astrophysics and planetary physics.

Future colonies on Mars will maximize the use of in situ resources in order to minimize the supplies that must be transported from Earth. Fuel, oxygen, and building materials can all be obtained from the Martian atmosphere and regolith (the layer of loose rock and dust that covers the bedrock). In fact, a lander scheduled for launch to Mars in 2001 will demonstrate the ability to produce propellant from gases in the atmosphere.

Building materials for a sustained human presence on Mars will be derived from the rocks and soil. One of the basic uses of the regolith would be to cover the habitat to provide a radiation shield against high-energy solar particles. In more advanced stages of settlement, bricks could be made from soil by heating under pressure. Mortar and cement produced on Mars could be based on the sulfur that is found at the surface (about 3% by weight).

Iron and other metals exist on Mars, but steel might not be necessary for construction purposes. Our best evidence is that there is little seismic activity or plate tectonics on Mars today, so it might be possible to dispense with steel structural materials and build entirely with stone. We know that stone is abundant, of course, because we have been seeing the graphic evidence every day since Pathfinder landed on July 4.

Building with stone might simplify things even further: though Mars has iron, mining it could prove impractical. And since a major reason for using metal in buildings is protection against seismic activity, future Mars colonists might be able to build as most residents of Earth do now, with commonly available stone found reasonably near the building site.

Caltech Named Recipient of Federal Computational Science and Simulation Contract

WASHINGTON—The California Institute of Technology has been awarded a multimillion-dollar contract as part of a major new Department of Energy (DOE) effort to advance computational modeling.

The five-year contract to Caltech is one of five announced at a press conference in Washington today by DOE Secretary Federico Peña as part of the new 10-year, $250 million Academic Strategic Alliances Program (ASAP) of the Accelerated Strategic Computing Initiative (ASCI). The goal of the ASCI research program is to ensure the safety and reliability of America's nuclear stockpile without actual nuclear testing.

Academic institutions chosen to participate in the ASAP program will not be involved in research related to nuclear weapons. Rather, each university will pursue the simulation of an overarching application and will collaborate with the national laboratories in developing the computational science and infrastructure required for "virtual testing." In the process, scientists say, the program will also pave the way for significant advances in a host of peacetime applications requiring high-performance computing.

"President Clinton has challenged us to find a way to keep our nuclear stockpile safe, reliable and secure without nuclear testing," said Secretary Peña. "We're going to meet his challenge through computer simulations that verify the safety, reliability and performance of our nuclear weapons stockpile. I believe these Alliances will produce a flood of new technologies and ideas that will improve the quality of our lives and boost our economy. In fact — with the Academic Strategic Alliance Program in place — Americans will begin to see the results, as the acronym suggests, ASAP."

Caltech's role in the ASCI-ASAP initiative will be to model the response of materials to intense shock waves caused by explosions or impact at high velocity. According to faculty participants, the research will be of great benefit to a number of civilian applications where the behavior of materials exposed to shock waves is important.

Professor Steven Koonin, Caltech's vice president and provost and a professor of theoretical physics, commented that "this grant will enable Caltech researchers to advance the frontiers of large-scale computer simulation, to develop the algorithms and software that can exploit the extraordinarily capable hardware available.

"It is also important that our ASCI effort will educate students in broadly applicable simulation technology," Koonin added. "And by strengthening Caltech's ties with the national laboratories, the Institute will be contributing to the major national goal of science-based stockpile stewardship."

Dan Meiron, professor of applied mathematics at Caltech and principal investigator of the project, said that "the ASAP research program is unique in that by posing the challenge of developing the large-scale modeling and simulation capability required to address our particular overarching application, ASCI pushes multidisciplinary research to a new level."

Dr. Paul Messina, Caltech's assistant vice president for scientific computing and director of the Center for Advanced Computing Research (CACR), said that the ASCI initiative is an important step toward computational fidelity. "The exciting thing for me is the tremendous progress we'll make in computational science and engineering.

"This major project is unique in that it requires the integration of software components developed by researchers from a number of disciplines." Messina added that the ASCI initiative will lead quickly to advances in both computer hardware and software.

"The proposed research will involve all three of the state-of-the-art ASCI-class machines," Messina said, adding that these three computers are located at the Livermore, Sandia, and Los Alamos national labs. The first of the machines that was completed, which is located at Sandia, recently became the first computer to complete a trillion numerical computations in a second.

"Such computational power is vital for success of the ASCI initiative, Messina said. "The data sets generated by these computations are very large. A big part of the program is how to manage those computational resources optimally when you have thousands of processors, and how to support one overarching application when you have a large variety of length and time scales."

Caltech's proposal to the DOE outlines the construction of "a virtual shock physics facility in which the full three-dimensional response of a variety of target materials can be computed from a wide range of compressive, tensional, and shear loadings, including those loadings produced by detonation of energetic materials." Goals of the research will include improving the ability to compute experiments employing shock and detonation waves, computing the dynamic response of materials to the waves, and validating the computations against real-world experimental data.

These shock waves will be simulated as they pass through various phases (i.e., gas, liquid, and solid). The work could have applications in the synthesis of new materials or the interactions of explosions with structures. The work will also provide lab scientists in the federal Science-Based Stockpile Stewardship (SBSS) program with a tool to simulate high-explosive detonation and ignition.

The ASCI-ASAP program at Caltech will involve the research groups of 18 Caltech professors from across the campus, including Tom Ahrens, a geophysicist; Joe Shepherd, an aeronautics engineer; Oscar Bruno, an applied mathematician; William Goddard, a chemist; Tom Tombrello, a physicist; and Mani Chandy and Peter Schröder, computer scientists. James Pool, deputy director of the CACR, will serve as executive director for the project.

Caltech is the lead university of the ASCI-ASAP contract to simulate the dynamic response of materials. Also participating with Caltech in this project are the Carnegie Institution of Washington, Brown University, the University of Illinois, Indiana University, and the University of Tennessee.

The other schools to receive ASCI-ASAP contracts are the University of Chicago, the University of Utah, Stanford University, and the University of Illinois at Urbana-Champaign.

Caltech Question of the Week: How does molten lava in the center of Earth replenish itself?

Question submitted by Greg McNeil, Monrovia, California.

Answered by Thomas Ahrens, professor of geophysics, Caltech.

The recycling of lava, the molten rock that flows out onto the surface from Earth's rocky silicate mantle, back into the Earth is a key process that appears to be unique to our planet (relative to the other planets with silicate mantles and iron cores: Mars, Venus, and Mercury).

Basalt is the most prevalent lava type found on Earth. This is true also for the other above-mentioned planets. Basalt rock, which has a specific gravity of 2.7 to 3.1 (that is, 2.7 to 3.1 times the density of water), consists of two key mineral groups: plagioclase, which contains mostly calcium, sodium, potassium, aluminum, silicon, and oxygen; and pyroxene, which contains mostly calcium, magnesium, iron, silicon, and oxygen. These essential minerals react at pressures in the range of 10 to 15 thousand atmospheres to form a denser garnet-bearing rock called eclogite. Eclogite has a specific gravity in the range of 3.4 to 3.5.

Thus, as a large pile of basalt accumulates on Earth's surface and the basalt-eclogite transition begins, a process called subduction, or sinking, occurs. The transition causes sections of Earth's crust containing basalt, or rock of similar composition, with thicknesses of 30 km or greater to transform to eclogite. Because its density exceeds that of the underlying mantle (specific gravity about 3.3), the material sinks, or subducts, into the Earth. The basalt-eclogite reaction requires elevated temperatures and some moisture.
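
The sinking argument is at bottom a density comparison, which the specific gravities quoted above make easy to check: basalt (2.7 to 3.1) is lighter than the mantle (about 3.3) and floats, while eclogite (3.4 to 3.5) is denser and sinks.

```python
# Buoyancy check using mid-range specific gravities quoted in the text.
RHO_WATER = 1000.0  # kg/m^3
mantle = 3.3 * RHO_WATER

for rock, specific_gravity in [("basalt", 2.9), ("eclogite", 3.45)]:
    rho = specific_gravity * RHO_WATER
    verdict = "sinks (subducts)" if rho > mantle else "floats"
    print(f"{rock}: {rho:.0f} kg/m^3 vs. mantle {mantle:.0f} -> {verdict}")
# basalt: 2900 kg/m^3 vs. mantle 3300 -> floats
# eclogite: 3450 kg/m^3 vs. mantle 3300 -> sinks (subducts)
```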

If the temperature becomes too high, however, greater than 1,300 degrees centigrade, the mineral garnet, the major dense constituent of eclogite, is not stable and subduction does not occur. This seems to be the case for Venus, which both has a hotter interior and lacks even a fraction of the percentage of water required to assist the basalt-eclogite reaction. The same is true of the sun's nearest planetary neighbor, Mercury.

Interestingly, Mars' interior appears to be too cold for eclogite formation. The subduction process is a key element in the process of recycling rock back into the Earth. Earth scientists began to understand this process in the early 1960s, and it is now recognized as a major feature of the theory of plate tectonics. Moreover, exploration of the properties of the other silicate mantle planets with iron cores suggests that only the Earth has active plate tectonics.
