Caltech Question of the Week: Why Does There Need To Be Water On a Planet or Moon To Have Life?

Question of the Month Submitted by Traci Salazar, 13, Alhambra, California, and answered by Richard Terrile, scientist, Jet Propulsion Laboratory, Caltech.

Water is a tremendously important ingredient in that it's a very good solvent and a very good medium for chemical reactions. It's also very common.

Water is nearly everywhere in the solar system, it's easy to make from two ingredients (hydrogen and oxygen) that are both very common throughout the solar system and the universe, and it can exist in some truly harsh environments.

In fact, the harsh Earth environments in which liquid water is found give us good reason to think that water could be associated with extraterrestrial life. Anywhere you look on Earth, no matter how inhospitable the environment seems, liquid water apparently always harbors life of some sort. This is true for subterranean rocks as well as superheated ocean vents. So since you can find life in such Earth environments, it makes sense that life could also exist in harsh environments elsewhere.

In short, water is a great medium for conducting chemistry, and life is, at its core, very sophisticated chemical activity.

Caltech Question of the Week: How Can Different Kinds of Vegetables Contain Different Vitamins When Grown in the Same Soil?

Question of the Month Submitted by Doris Bower, Arcadia, Calif., and answered by Dr. Elliot Meyerowitz, Professor of Biology, Caltech.

Only some vitamins are found in plants—others we must obtain from other sources such as bacteria and yeast, or from animal products. Vitamin B12 is an example of one vitamin that is not found in higher plants. Some plants are good sources of certain vitamins, however. To be precise, they are good sources either of vitamins themselves (such as vitamin B1 in rice husks or vitamin C in citrus fruits) or of provitamins, which are converted to vitamins in our bodies. An example is beta-carotene, also known as provitamin A, which is converted in our livers to vitamin A. Beta-carotene is abundant in carrots.

There are two answers to your question. The first is that each species of plant has its own way of regulating the biosynthetic pathways by which it makes provitamins, vitamins, and other substances. The plants themselves don't get vitamins from the soil, but they do get the raw materials they need to manufacture these vitamins for their own needs. These raw materials are phosphorus, potassium, nitrates, and about a dozen other elements in lesser quantities, such as iron and magnesium.

Also, the plants take in carbon dioxide from the air, and energy from sunlight. Each species of plant synthesizes different amounts of vitamins and provitamins from the available nutrients, both because of species differences and because plants regulate their biosynthetic pathways in response to the environment. Thus, even in the same environment, different species or varieties of plants will make different amounts of vitamins, and the same variety of plant will make different amounts of vitamins in different environments.

The second answer is that we eat different parts of different plants, and different parts will have different vitamin concentrations. Depending on the vegetable in question, we may eat the leaves, roots, or fruits. So you may get a good dose of vitamin A if you eat the root of the carrot plant. But if you develop a taste for fruits such as squash, you probably won't get nearly as much.

In the case of carrots and provitamin A, there is more to the story—plants synthesize beta-carotene as an aid to photosynthesis, and also as a pigment. As an aid to photosynthesis it is required mainly in leaves, where it is usually found in high concentrations. But beta-carotene is also a pigment; it is the substance that gives most of the orange color to carrots. Since consumers prefer bright orange carrots, plant breeders have deliberately bred carrots that contain high levels of beta-carotene.

Writer: 
Robert Tindol

Caltech Astronomers Crack the Puzzle of Cosmic Gamma-Ray Bursts

Additional Images can be obtained on the Caltech astronomy web site at http://astro.caltech.edu

PASADENA—A team of Caltech astronomers has pinpointed a gamma-ray burst several billion light-years away from the Milky Way. The team was following up on a discovery made by the Italian/Dutch satellite BeppoSAX.

The results demonstrate for the first time that at least some of the enigmatic gamma-ray bursts that have puzzled astronomers for decades are extragalactic in origin.

The team has announced the results in the International Astronomical Union Circular, which is the primary means by which astronomers alert their colleagues to transient phenomena. The results will be published in scientific journals at a later date.

Mark Metzger, a Caltech astronomy professor, said he was thrilled by the result. "When I finished analyzing the spectrum and saw features, I knew we had finally caught it. It was a stunning moment of revelation. Such events happen only a few times in the life of a scientist."

According to Dr. Shri Kulkarni, an astronomy professor at Caltech and another team member, gamma-ray bursts occur a couple of times a day. These brilliant flashes seem to appear from random directions in space and typically last a few seconds.

"After hunting clues to these bursts for so many years, we now know that the bursts are in fact incredibly energetic events," said Kulkarni.

Team member and astronomy professor George Djorgovski added, "Gamma-ray bursts are one of the great mysteries of science. It is wonderful to contribute to its unraveling."

The bursts of high-energy radiation were first discovered by military satellites almost 30 years ago, but their origin has remained a mystery ever since. New information came in recent years from NASA's Compton Gamma-Ray Observatory satellite, which has so far detected several thousand bursts. Nonetheless, the fundamental question of where the bursts come from has remained unanswered.

Competing theories on gamma-ray bursts generally fall into two types: those that suppose the bursts originate from some as-yet unknown population of objects within our own Milky Way galaxy, and those that propose the bursts originate in distant galaxies, several billion light-years away. If the latter is correct (as was indirectly supported by the Compton Observatory's observations), then the bursts are among the most violent and brilliant events in the universe.

Progress in understanding the nature of the bursts has long been hampered by the difficulty of identifying counterparts at other wavelengths. Because such counterparts fade quickly, the astronomers had to make an extra effort to identify this burst's optical counterpart quickly so that the Keck observations could be carried out while the object was still bright. The discovery is a major step toward helping scientists understand the origin of the bursts. We now know that for a few seconds the burst was over a million times brighter than an entire galaxy. No other phenomena are known that produce this much energy in such a short time. Thus, while the observations have settled the question of whether the bursts come from cosmological distances, their physical mechanism remains shrouded in mystery.

The Caltech team, in addition to Metzger, Kulkarni, and Djorgovski, consists of professor Charles Steidel, postdoctoral scholars Steven Odewahn and Debra Shepherd, and graduate students Kurt Adelberger, Roy Gal, and Michael Pahre. The team also includes Dr. Dale Frail of the National Radio Astronomy Observatory in Socorro, New Mexico.

Writer: 
Robert Tindol

Question of the Week: Does the earth keep a constant distance from the sun? If not, will the earth get closer to the sun and become more warm?

Submitted by Steven S. Showers, Newbury Park, California, and answered by Andrew Ingersoll, professor of planetary science, Caltech.

Earth has an eccentric orbit, which means that it moves in a path that is slightly oval in shape. Contrary to what you'd expect, Earth gets closest to the sun every December, and farthest from the sun every June. We in the northern hemisphere are therefore closest to the sun in the middle of winter. The shape of the orbit also changes slowly over time: the maximum eccentricity is about 5 percent and the minimum is near zero, when the orbit is nearly circular. This cycle can be calculated for millions of years, and we know that the glaciers also have cycles of about 100,000 years. The question is whether the glaciers are tied to changes in Earth's eccentricity.

So the bottom line is that Earth does get closer and farther, and that does affect the climate. But the mechanism is not all that clear. Averaged over a year, the distance from the Earth to the Sun changes very little, even over billions of years (the Earth is 4.5 billion years old).

What is more important to climatic changes over the eons is the fact that the sun is getting brighter. Earth now gets about 20 to 25 percent more sunlight than it did four billion years ago.

Writer: 
Robert Tindol

Caltech Astronomer Obtains Data That Could Resolve the "Age Problem"

PASADENA — A California Institute of Technology astronomer has obtained data that could resolve the "age problem" of the universe, in which certain stars appear to be older than the universe itself.

Dr. Neill Reid, using information collected by the European Space Agency's Hipparcos satellite, has determined that a key distance measure used to compute the age of certain Milky Way stars is off by 10 to 15 percent. The new data leads to the conclusion that the oldest stars are actually 11 to 13 billion years old, rather than 16 to 18 billion years old, as had been thought.

The new results will be of great interest to cosmologists, Reid says, because estimates of the age of the universe, based on tracking back the current rate of expansion, suggest that the Big Bang occurred no more than about 13 billion years ago. Therefore, astronomers will no longer be confronted with the nettling discrepancy between the ages of stars and the age of the universe.
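As a rough illustration of what "tracking back the current rate of expansion" means, the simplest such estimate is the Hubble time, 1/H0, the age the universe would have if it had always expanded at today's rate. The sketch below is illustrative only (it is not Reid's calculation), and the value of H0 used is an assumption for the example.

    # Hubble time 1/H0 as a first-order age estimate (illustrative only).
    # H0 = 75 km/s/Mpc is an assumed value, not a figure from the article.
    KM_PER_MPC = 3.0857e19       # kilometers in one megaparsec
    SECONDS_PER_GYR = 3.156e16   # seconds in one billion years

    def hubble_time_gyr(h0_km_s_mpc: float) -> float:
        """Return 1/H0 expressed in billions of years."""
        h0_per_second = h0_km_s_mpc / KM_PER_MPC   # convert H0 to 1/s
        return (1.0 / h0_per_second) / SECONDS_PER_GYR

    print(f"Hubble time for H0 = 75 km/s/Mpc: {hubble_time_gyr(75):.1f} Gyr")
    # Prints about 13.0 Gyr; more careful estimates are somewhat smaller
    # because the expansion was faster in the past.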

"This gives us an alternate way of estimating the age of the universe," says Reid. "The ideal situation would be to have the same answer, independently given by stellar modeling and cosmology."

Reid's method focuses on stars known as subdwarfs, which are found in globular clusters, spherical accumulations of hundreds of thousands of individual stars. Globular clusters have long been known to be among the earliest objects to form in the universe, since their stars are composed mainly of the primordial elements hydrogen and helium, and because the clusters themselves are distributed throughout a sphere 100,000 light-years in diameter, rather than confined, like the sun, within the flattened pancake of the galactic disk. Astronomers can determine quantitative ages for the clusters by measuring the luminosity (the intrinsic brightness) of the brightest sunlike stars in each cluster. Those measurements require that the distances to the clusters be known accurately.

Reid looked at some 30 stars within about 200 light-years of Earth. Using the Hipparcos satellite, he was able to obtain very accurate distances to these stars by the parallax method. Parallax is a common method for determining the distances to relatively nearby objects. Just as a tree 10 feet away will seem to shift its position against the distant background when an observer closes one eye and then the other, a nearby star will shift its position slightly if the observer waits six months for Earth to reach the opposite side of its orbit. And if the distance between the two observing sites (the baseline) is known very accurately, the observer can then compute the distance to the object by treating the object and the two observing sites as a giant triangle.
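In quantitative terms, a star whose apparent position shifts by a parallax angle of p arcseconds over that six-month baseline lies at a distance of 1/p parsecs (one parsec is about 3.26 light-years). A minimal sketch of the conversion, with an illustrative parallax value chosen to match the roughly 200-light-year distances of Reid's sample:

    LIGHT_YEARS_PER_PARSEC = 3.2616   # 1 parsec is about 3.26 light-years

    def distance_from_parallax(parallax_arcsec: float) -> float:
        """Distance in light-years from an annual parallax angle in arcseconds."""
        if parallax_arcsec <= 0:
            raise ValueError("parallax must be positive")
        distance_parsecs = 1.0 / parallax_arcsec   # definition of the parsec
        return distance_parsecs * LIGHT_YEARS_PER_PARSEC

    # A parallax of 0.016 arcseconds (an illustrative value) corresponds to
    # roughly 200 light-years -- a tiny angle, which is why a satellite like
    # Hipparcos is needed to measure it accurately.
    print(f"{distance_from_parallax(0.016):.0f} light-years")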

Reid chose the 30 stars for special study (out of the 100,000 for which Hipparcos obtained parallax data) because they, like the globular cluster stars, are composed primarily of hydrogen and helium. Thus, these stars also can be assumed to be very old, and may indeed themselves once have been members of globulars that were torn apart as they orbited the galaxy.

Once distances have been measured, these nearby stars act as standard candles whose brightness can be compared to similar stars in the globular clusters. While this is a well-known technique, older investigations were only able to use lower-accuracy, pre-Hipparcos parallaxes for 10 of the 30 stars.

Reid's conclusion is that the clusters are about 10 to 15 percent farther from Earth than previously thought. This, in turn, means that the stars in those clusters are intrinsically about 20 percent brighter than previously thought: because apparent brightness falls off with the square of the distance, a star that is farther away yet appears just as bright must be more luminous. Brighter stars burn through their fuel faster and have shorter lifetimes, so the clusters themselves must be younger than once assumed.
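The arithmetic behind that step is the inverse-square law; the short sketch below is a back-of-the-envelope illustration, not Reid's actual analysis.

    # Inverse-square law: at a fixed apparent brightness, intrinsic luminosity
    # scales with the square of the distance, so a modest distance revision
    # implies a larger luminosity revision.
    for distance_increase in (0.10, 0.15):          # 10% and 15% farther away
        luminosity_factor = (1.0 + distance_increase) ** 2
        print(f"{distance_increase:.0%} farther -> "
              f"{luminosity_factor - 1.0:.0%} more luminous")
    # 10% farther -> 21% more luminous; 15% farther -> 32% more luminous.
    # Even the lower end of the range accounts for the roughly 20 percent
    # brightening quoted for the cluster stars.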

British astronomers Michael Feast and Robin Catchpole recently arrived at very similar conclusions, also based on new data from Hipparcos, but using a different, and less direct, line of argument. They used new measurements of a type of variable star known as Cepheids to determine a revised distance to the Large Magellanic Cloud, a galaxy orbiting the Milky Way.

Feast and Catchpole used another type of variable star, the RR Lyrae variables, to bridge between the LMC and globular clusters. The fact that these two independent methods give the same answer makes that answer more believable, says Reid. "Most people previously believed that 14 billion years was the youngest age you could have for these stars," Reid says. "I think it's now accurate to say that the oldest you could make them is 14 billion years.

"No longer are we faced with the paradox of a universe younger than its stellar constituents," says Reid.

The work is set to appear in July in the Astrophysical Journal.

Writer: 
Robert Tindol

Caltech A Major Partner in National Program To Develop Advanced Computational Infrastructure

PASADENA, California — The California Institute of Technology (Caltech) will play three key roles in the National Partnership for Advanced Computational Infrastructure (NPACI) in the areas of management, resource deployment, and technology and application initiatives. The NPACI program is one of two partnerships, each awarded approximately $170 million over five years, in the National Science Foundation's Partnerships for Advanced Computational Infrastructure (PACI) program, slated to begin October 1, 1997.

The NPACI partnership, led by the University of California, San Diego (UCSD), includes Caltech and 36 other leaders in high-performance computing. In NPACI, leading-edge scientists and engineers from 18 states will develop algorithms, libraries, system software, and tools in order to create a national metacomputing infrastructure of the future--one that will provide both teraflops and petabyte capability. (A petabyte is equal to one billion megabytes; one teraflops is one trillion computations per second.) A fully supported, teraflops/petabyte-scale metacomputing environment will enable quantitative and qualitative advances in a wide range of scientific disciplines, including astronomy, biochemistry, biology, chemistry, engineering, fluid dynamics, materials science, neuroscience, social science and behavioral science.

Dr. Paul Messina, Caltech's assistant vice president for scientific computing and director of Caltech's Center for Advanced Computing Research (CACR), has been named chief architect of NPACI. He will be responsible for the overall architecture of the project, including interaction mechanisms among the partners; deployment of infrastructure; and balance among partnership hardware and software systems, thrust area projects, and other user needs.

"Caltech is pleased to be a part of this historic initiative," said Thomas Everhart, president of Caltech. "Paul Messina and the CACR have made it possible for the Institute to contribute to the further development of our national computing infrastructure."

Caltech will contribute to a variety of software development projects as a research partner. As a resource partner, the Institute will provide national access to some of the hardware used to develop the software infrastructure that links computers, data servers, and archival storage systems, making it easier to use their aggregate computing power. The NPACI award builds on the longstanding partnership between Caltech and NASA's Jet Propulsion Laboratory (JPL) in the area of high-performance computing. With both NSF and NASA support, Caltech will acquire a succession of parallel computers from Hewlett-Packard's Convex Division, including serial #1 of the HP/Convex SPP 3000, to be installed in 1999. Caltech's first machine, already housed on campus, is a 256-processor Convex SPP 2000 with a peak speed of 184 Gflops, 64 GBytes of memory, and 1 TByte of online disk. (A 60-terabyte HPSS-based archival storage system will also be available to NPACI.)

"NPACI provides the opportunity to build a computational infrastructure that will enable scientific breakthroughs and new modes of computing," stated Dr. Messina. "As chief architect for the partnership, I look forward to the synergistic coupling of so many excellent scientists dedicated to creating an infrastructure that will profoundly impact future scientific endeavor by providing unprecedented new computational capabilities."

The NPACI metacomputing environment will consist of geographically separated, heterogeneous, high-performance computers, data servers, archival storage, and visualization systems linked together by high-speed networks so that their aggregate power may be applied to research problems that cannot be studied any other way. This environment will be extended to support "data-intensive computing." To that end, infrastructure will be developed to enable--for the first time--the analysis of multiple terabyte-sized data collections.

Installation of the Caltech HP Exemplar was made possible by a collaborative agreement between Caltech and Hewlett-Packard that will result in new technology, tools, and libraries to support the type of multidisciplinary research and metacomputing environment exemplified by the new NPACI award.

By providing access to new and emerging technologies for computing, networking, data storage, data acquisition, and archival functions as part of NPACI, Caltech's CACR will continue to pursue its application-driven approach to using and integrating new hardware and software technologies. In addition to the Caltech resources, UC Berkeley and UCSD will also provide alternate architectures initially to NPACI. These alternate architectures will be used to explore different performance or price/performance regimes, facilitate porting of software, assure competitive pricing from vendors, and provide a ready migration path should one of the alternatives become preeminent.

As the infrastructure enhancements needed to create the computing environment of the future include far more than hardware and network connections, Caltech is also contributing to NPACI through its participation in "thrust area teams," which will focus on key technologies and applications required for metacomputing, such as data-intensive computing; adaptable, scalable tools/environments; and interaction environments. Many aspects of the NPACI technology thrusts build upon projects initiated and led by Caltech's CACR, including the CASA Gigabit Testbed and the Scalable I/O Initiative.

The Digital Sky Project, led by Caltech professor Thomas A. Prince, CACR associate director, will be a primary data-intensive computing effort in NPACI. This innovative project will make early use of the NPACI resources. Large-area digital sky surveys are a recent and exciting development in astronomy. The combination of the NPACI Tflops/Tbyte computational resources with the recent large-area sky surveys supported by NSF and NASA at optical, infrared, and radio wavelengths will provide unparalleled new capability for astronomical research. The Digital Sky Project, anticipated to be used by the entire astronomical community of over 5,000 scientists and students, will provide a qualitatively different computational infrastructure for astronomical research.

Dr. Carl Kukkonen, manager of supercomputing at JPL, reports that NASA is excited about the opportunity to work with NPACI at the forefront of computing technologies. "We will use these computing resources to tackle the most challenging issues in spacecraft design and space science data analysis," said Kukkonen.

Rounding out Caltech's participation in NPACI are efforts by Caltech faculty in three NPACI applications thrusts: engineering, neuroscience, and molecular science. More specifically, infrastructure development in the areas of materials science, brain/neuron modeling, and molecular science will be led by Caltech professors William Goddard, Ferkel Professor of Chemistry and Applied Physics; James Bower, Associate Professor of Biology; and Aron Kuppermann, Professor of Chemical Physics, respectively.

More details on NPACI can be obtained at http://cacse.ucsd.edu/npaci.html. See http://hpcc997.external.hp.com/pressrel/nov96/18nov96h.htm for a press release on the Caltech-HP/Convex collaboration.

Writer: 
Robert Tindol

Question of the Week: Who Invented the Equal Sign, and Why?

Submitted by Pat Orr, Altadena, California, and answered by Tom Apostol, Professor of Mathematics Emeritus, Caltech.

According to the VNR Concise Encyclopedia of Mathematics, the equal sign was invented by Robert Recorde, the Royal Court Physician for England's King Edward VI and Queen Mary. Recorde, who lived from 1510 to 1558, was the most influential English mathematician of his day and, among other things, introduced algebra to his countrymen. He died in prison, although the record does not state why he was incarcerated. Hopefully, it had nothing to do with his mathematical activities.

Recorde first proposed the equal sign in a 1557 book named The Whetstone of Witte, in which he says, "And to avoide the tediouse repetition of these woordes: is equalle to: I will sette as I doe often in woorke use, a paire of paralleles, of Gemowe (or twin) lines of one lengthe, thus: = = = = = =, bicause noe 2 thynges can be moare equalle."

In addition to The Whetstone of Witte, Recorde also wrote a book on arithmetic called The Grounde of Artes (c. 1542), a book on popular astronomy called The Castle of Knowledge (1551), and a book on Euclidean geometry called The Pathewaie to Knowledge (1551). The Encyclopædia Britannica notes that Whetstone was his most influential work.

As for the reason he invented the equal sign, I think his own words say it best, if you can decipher the archaic language and odd spellings. Equal signs let us avoid tedious repetition and, as such, allow a shorthand symbol to show how unknown quantities relate to known quantities.

Writer: 
Robert Tindol

Caltech Scientists Invent Polymer For Detecting Blood Glucose

PASADENA— Scientists have designed a polymer that could vastly improve the way diabetics measure their blood glucose levels. The polymer is described in the current issue of Nature Biotechnology.

According to Dr. Frances Arnold, a professor of chemical engineering at the California Institute of Technology, the polymer is superior to the current enzyme-based glucose detectors because it is not of biological origin. The polymer will be easier to make and thus lead to cheaper and more reliable glucose sensors.

"This has the potential to help a lot of people, and that's what I find exciting," says Arnold. "A 1993 clinical study showed that if you monitor glucose carefully, the serious complications of diabetes such as gangrene and retinal damage could be reduced by 65 percent."

Arnold believes that her invention will improve the monitoring of glucose, especially for patients in developing countries of the world. Depending on the mechanisms devised for patient use, the polymer will likely be easy and cheap to manufacture and use, which could simplify and widen the practice of frequent testing of blood glucose throughout the day—a practice that many experts say is important to minimizing the complications of diabetes.

Also, the polymer will be more chemically stable and possibly less immunogenic in the human body than the enzymes currently available for glucose monitoring. This could make it more reliable for use in biosensors that remain in the body for extended periods.

At the heart of the polymer is a copper metal complex. The metal is held by a chelating agent that occupies three out of five or six possible "slots" for binding. The other two or three slots, however, can be used to indirectly measure glucose by examining the manner in which hydroxyl groups from the glucose bind.

"The net reaction with glucose is the release of a proton," Arnold explains. Ultimately, the polymer works with a pH meter because hydrogen ions are released from the polymer complex. More hydrogen ions means a more acidic solution (a lower pH), and an acidic response corresponds to high glucose levels in the blood.

And because the substance is nonbiological, it can bypass the blood's normal buffering capacity in order to work at optimal pH levels. This would allow for a simple and straightforward interaction with the blood that, coupled with the inexpensiveness of the materials, would allow for significant reductions in cost to the patient.

The cost reduction would be especially important in the Third World, where diabetes is on the rise.

Also involved in the research are Guohua Chen, a postdoctoral fellow at Caltech, and Vidyasankar Sundaresan, a Caltech graduate student. Former postdoctoral researchers on the project are Zhibin Guan and Chao-Tsen Chen.

Writer: 
Robert Tindol

Caltech Scientists Find Evidence For Massive Ice Age When Earth Was 2.4 Billion Years Old

PASADENA— Those who think the winter of '97 was rough should be relieved that they weren't around 2.2 billion years ago. Scientists have discovered evidence for an ice age at that time severe enough to partially freeze over the equator. In today's issue of Nature, California Institute of Technology geologists Dave Evans and Joseph Kirschvink report evidence that glaciers came within a few degrees of latitude of the equator when the planet was about 2.4 billion years old. They base their conclusion on glacial deposits discovered in present-day South Africa, plus magnetic evidence showing where South Africa's crustal plate was located at that time.

Based on that evidence, the Caltech researchers think they have documented the extremely rare "Snowball Earth" phenomenon, in which virtually the entire planet may have been covered in ice and snow. According to Kirschvink, who originally proposed the Snowball Earth theory, there have probably been only two episodes in which glaciation of the planet reached such an extent — one less than a billion years ago during the Neoproterozoic Era, and the one that has now been discovered from the Paleoproterozoic Era 2.2 billion years ago.

"The young Earth didn't catch a cold very often," says Evans, a graduate student in Kirschvink's lab. "But when it did, it seems to have been pretty severe."

The researchers collected their data by drilling rock specimens in South Africa and carefully recording the magnetic directions of the samples. From this information, the researchers then computed the direction and distance to the ancient north and south poles.

The conclusion was that the place in which they were drilling was 11 degrees (plus or minus five degrees) from the equator when Earth was 2.4 billion years old. Plate tectonic motions since that time have caused South Africa to drift all over the planet, to its current position at about 30 degrees south latitude. Additional tests showed that the samples were from glacial deposits, and further, were characteristic of a widespread region.
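The standard way to convert such magnetic measurements into an ancient latitude, under the usual assumption that Earth's field averages to a geocentric axial dipole, is the relation tan(I) = 2 tan(latitude), where I is the magnetic inclination locked into the rock. The sketch below illustrates that general method with a made-up inclination value; it is not the authors' actual analysis.

    import math

    def paleolatitude_degrees(inclination_degrees: float) -> float:
        """Geocentric axial dipole relation: tan(I) = 2 * tan(latitude)."""
        inclination_rad = math.radians(inclination_degrees)
        return math.degrees(math.atan(math.tan(inclination_rad) / 2.0))

    # A shallow recorded inclination implies the rock formed near the equator.
    # An inclination of about 21 degrees (illustrative) corresponds to roughly
    # the 11-degree paleolatitude reported for the South African site.
    print(f"{paleolatitude_degrees(21.2):.1f} degrees from the equator")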

Kirschvink and Evans say that the preliminary implication is that Earth can somehow manage to pull itself out of a period of severe glaciation. Because ice and snow reflect sunlight much better than land and water do, Earth would normally be expected to have a hard time reheating itself enough to leave an ice age. Thus, one would expect a Snowball Earth to remain frozen forever.
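A simple global energy-balance estimate shows why a highly reflective, ice-covered planet tends to stay cold. The sketch below is illustrative only (a zero-dimensional balance with no greenhouse effect, using assumed albedo values); it is not the authors' climate modeling.

    # Zero-dimensional energy balance: equilibrium temperature for a given albedo.
    SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
    SOLAR_CONSTANT = 1361.0  # present-day solar flux at Earth, W m^-2

    def equilibrium_temp_k(albedo: float, solar_constant: float = SOLAR_CONSTANT) -> float:
        """Blackbody equilibrium temperature, ignoring the greenhouse effect."""
        absorbed = (solar_constant / 4.0) * (1.0 - albedo)
        return (absorbed / SIGMA) ** 0.25

    print(f"Albedo ~0.3 (modern Earth) : {equilibrium_temp_k(0.3):.0f} K")  # about 255 K
    print(f"Albedo ~0.6 (ice-covered)  : {equilibrium_temp_k(0.6):.0f} K")  # about 221 K
    # An ice-covered planet absorbs far less sunlight, so without an extra
    # greenhouse push (for example, volcanic carbon dioxide) it tends to stay frozen.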

Yet, the planet obviously recovered both times from the severe glaciation. "We think it is likely that the intricacies of global climate feedback are not yet completely understood, especially concerning major departures from today's climate," says Evans. "If the Snowball Earth model is correct, then our planet has a remarkable resilience to abrupt shifts in climate.

"Somehow, the planet recovered from these ice ages, probably as a result of increased carbon dioxide — the main greenhouse gas."

Evans says that an asteroid or comet impact could have caused carbon dioxide to pour into the atmosphere, allowing Earth to trap solar energy and reheat itself. But evidence of an impact during this age, such as a remote crater, is lacking.

Large volcanic outpourings could also have released a great deal of carbon dioxide, and other factors, such as sedimentary processes and biological activity, may have contributed as well.

At any rate, the evidence for the robustness of the planet and the life that inhabits it is encouraging, the researchers say. Not only did Earth pull itself out of both periods of severe glaciation, but many of the single-celled organisms that existed at the time managed to persevere.

Writer: 
Robert Tindol

State-of-the-Art Seismic Network Gets First Trial-by-Fire During This Morning's 5.4-magnitude Earthquake

PASADENA—Los Angeles reporters and camera crews responding to a 5.4-magnitude earthquake this morning got their first look at the new Caltech/USGS earthquake monitoring system.

The look was not only new but almost instantaneous. Within 15 minutes of the earthquake, Caltech seismologists had already printed out a full-color poster-sized map of the region to show on live TV, and had already posted the contour map on the Internet. Moreover, they were able to determine the magnitude of the event within five minutes — a tremendous improvement over the time it once took to confirm data.

"Today, we had a much better picture of how the ground responded to the earthquake than we've ever had in the past," said Dr. Lucile Jones, a U.S. Geological Survey seismologist who is stationed at Caltech. "This was the largest earthquake we've had since September of 1995, and was the first time we've been able to use the new instruments that we're still installing."

The new instruments are made possible by the TriNet Project, a $20.75-million initiative for providing a state-of-the-art monitoring network for Southern California. A scientific collaboration between Caltech, the USGS and the California Department of Conservation's Division of Mines and Geology, the project is designed to provide real-time earthquake monitoring and, ultimately, to lead to early-warning technology to save lives and mitigate urban damage after earthquakes occur.

"The idea of Trinet was to get quick locations and magnitudes out, to get quick estimates of the distribution of the ground shaking, and a prototype early-warning system," Caltech seismic analyst Egill Hauksson said an hour after this morning's earthquake. "The first two of those things are already in progress. We are in the midst of deploying hardware in the field and developing data-processing software." TriNet was announced earlier this year when funding was approved by the Federal Emergency Management Agency. The new system relies heavily on recent advances in computer communications technology and data processing.

The map printed out this morning (the ShakeMap) is just a preview of future TriNet products. Caltech seismologist Kate Hutton gave a number of TV interviews in front of the map this morning. The map was noteworthy not only for the speed with which it was produced, but also for the manner in which information about the earthquake was relayed.

Instead of charting magnitudes, the map was drawn so that the velocity at which the ground moved was shown with contour lines. The most rapid movement in this morning's 5.4 earthquake was about two inches per second at the epicenter, and this was clearly indicated by the innermost circle on the color map. Moving outward from the epicenter, the velocity of ground movement decreased, as indicated by the lower velocity numbers on the outer circles.

The maps can also be printed out to show ground accelerations, which are especially useful for ascertaining likely damage in an earthquake area, Hutton said.
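In principle, a map of this kind is produced by interpolating the individual station readings onto a grid and drawing contours through equal values. The sketch below illustrates only that general idea; the station coordinates and peak ground velocity values are invented, and this is not the actual TriNet/ShakeMap software.

    # Illustrative sketch: contour peak ground velocity (PGV) from a handful of
    # hypothetical station readings.  Real ShakeMap processing is far more involved.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.interpolate import griddata

    # (longitude, latitude, PGV in cm/s) -- invented values for illustration
    stations = np.array([
        (-118.20, 34.05, 5.0),   # near the epicenter (about 2 in/s = 5 cm/s)
        (-118.40, 34.20, 2.5),
        (-118.00, 33.90, 2.0),
        (-117.80, 34.30, 1.0),
        (-118.60, 33.80, 0.8),
    ])

    grid_lon, grid_lat = np.meshgrid(np.linspace(-118.7, -117.7, 200),
                                     np.linspace(33.7, 34.4, 200))
    pgv_grid = griddata(stations[:, :2], stations[:, 2],
                        (grid_lon, grid_lat), method="cubic")

    contours = plt.contour(grid_lon, grid_lat, pgv_grid, levels=[1, 2, 3, 4])
    plt.clabel(contours, fmt="%d cm/s")
    plt.title("Illustrative peak ground velocity contours")
    plt.savefig("shake_contours.png")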

Eventually, TriNet will provide prototype early warnings to locations in the Los Angeles area that potentially damaging ground shaking is on the way. After an earthquake occurs, its seismic waves travel at a few kilometers per second, while communications travel at the speed of light. Thus, Los Angeles could eventually receive warning of a major earthquake on the San Andreas fault some 30 to 60 seconds before the heavy shaking actually begins in the city.
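The arithmetic behind that 30-to-60-second figure is straightforward: the warning margin is roughly the distance to the rupture divided by the shear-wave speed, minus the time needed to detect the event and transmit the alert. The numbers below are assumptions for illustration, not TriNet's actual alerting parameters.

    def warning_time_seconds(distance_km: float,
                             shear_wave_speed_km_s: float = 3.5,
                             detection_and_transmit_s: float = 5.0) -> float:
        """Rough early-warning margin: shaking travel time minus system latency.
        The wave speed and latency defaults are assumed, illustrative values."""
        travel_time = distance_km / shear_wave_speed_km_s
        return max(0.0, travel_time - detection_and_transmit_s)

    # An assumed rupture on the San Andreas fault about 150 km from downtown
    # Los Angeles would give roughly half a minute of warning:
    print(f"{warning_time_seconds(150):.0f} seconds of warning")   # about 38 s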

The total cost of the project is $20.75 million. FEMA will provide $12.75 million, the USGS has provided $4.0 million, and the balance is to be matched by Caltech ($2.5 million) and the DOC ($1.75 million). Several private sector partners, including GTE and Pacific Bell, are assisting Caltech with matching funds for its portion of the TriNet balance.

The TriNet Project is being built upon existing networks and collaborations. Southern California's first digital network began with the installation of seismographs known as TERRAscope, and was made possible by a grant from the L.K. Whittier Foundation and the ARCO Foundation. Also, Pacific Bell through its CalREN Program has provided new frame-relay digital communications technology.

A major step in the modernization came in response to the Northridge earthquake, when the USGS received $4.0 million from funds appropriated by Congress to the National Earthquake Hazard Reduction Program. This money was the first step in the TriNet project and the USGS has been working with Caltech for the last 27 months to begin design and implementation. Significant progress has already been made and new instrumentation is now operational:

o Thirty state-of-the-art digital seismic stations are operating with continuous communication to Caltech/USGS

o Twenty strong-motion sites have been installed near critical structures

o Two high-rise buildings have been instrumented

o Alarming and processing software have been designed and implemented

o Automated maps of contoured ground shaking are available on the Web within a few minutes after felt and damaging earthquakes (http://www-socal.wr.usgs.gov).

DOC's strong motion network in Southern California is a key component of the TriNet Project, contributing 400 of the network's 650 sensing stations. DOC's network expansion and upgrade through the funding of this project will provide much better information about strong shaking than was possible for the Northridge earthquake. This data is the key to improving building codes for more earthquake-resistant structures.

Writer: 
Robert Tindol