Caltech A Major Partner in National Program To Develop Advanced Computational Infrastructure

PASADENA, California — The California Institute of Technology (Caltech) will play three key roles in the National Partnership for Advanced Computational Infrastructure (NPACI), in the areas of management, resource deployment, and technology and application initiatives. NPACI is one of two partnerships, each awarded approximately $170 million over five years, under the National Science Foundation's Partnerships for Advanced Computational Infrastructure (PACI) program, slated to begin October 1, 1997.

The NPACI partnership, led by the University of California, San Diego (UCSD), includes Caltech and 36 other leaders in high-performance computing. In NPACI, leading-edge scientists and engineers from 18 states will develop algorithms, libraries, system software, and tools to create a national metacomputing infrastructure of the future, one that will provide both teraflops and petabyte capability. (A petabyte is equal to one billion megabytes; one teraflops is one trillion computations per second.) A fully supported, teraflops/petabyte-scale metacomputing environment will enable quantitative and qualitative advances in a wide range of scientific disciplines, including astronomy, biochemistry, biology, chemistry, engineering, fluid dynamics, materials science, neuroscience, social science, and behavioral science.
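
In round numbers, those capacity targets translate as follows (standard unit definitions, not figures taken from the NPACI announcement itself):

```latex
1\ \mathrm{petabyte} = 10^{15}\ \mathrm{bytes} = 10^{9}\ \mathrm{megabytes},
\qquad 1\ \mathrm{teraflops} = 10^{12}\ \text{floating-point operations per second}
```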

Dr. Paul Messina, Caltech's assistant vice president for scientific computing and director of Caltech's Center for Advanced Computing Research (CACR), has been named chief architect of NPACI. He will be responsible for the overall architecture of the project, including interaction mechanisms among the partners; deployment of infrastructure; and balance among partnership hardware and software systems, thrust area projects, and other user needs.

"Caltech is pleased to be a part of this historic initiative," said Thomas Everhart, president of Caltech. "Paul Messina and the CACR have made it possible for the Institute to contribute to the further development of our national computing infrastructure."

As a research partner, Caltech will contribute to a variety of software development projects; as a resource partner, the Institute will provide national access to some of the hardware used to develop the software infrastructure that links computers, data servers, and archival storage systems and makes their aggregate computing power easier to use. The NPACI award builds on the longstanding partnership between Caltech and NASA's Jet Propulsion Laboratory (JPL) in the area of high-performance computing. With both NSF and NASA support, Caltech will acquire a succession of parallel computers from Hewlett-Packard's Convex Division, including serial #1 of the HP/Convex SPP 3000, to be installed in 1999. Caltech's first machine, already housed on campus, is a 256-processor Convex SPP 2000, with a peak speed of 184 Gflops, 64 GBytes of memory, and 1 TByte of online disk. (A 60-terabyte HPSS-based archival storage system will also be available to NPACI.)

"NPACI provides the opportunity to build a computational infrastructure that will enable scientific breakthroughs and new modes of computing," stated Dr. Messina. "As chief architect for the partnership, I look forward to the synergistic coupling of so many excellent scientists dedicated to creating an infrastructure that will profoundly impact future scientific endeavor by providing unprecedented new computational capabilities."

The NPACI metacomputing environment will consist of geographically separated, heterogeneous, high-performance computers, data servers, archival storage, and visualization systems linked together by high-speed networks so that their aggregate power may be applied to research problems that cannot be studied any other way. This environment will be extended to support "data-intensive computing." To that end, infrastructure will be developed to enable--for the first time--the analysis of multiple terabyte-sized data collections.

Installation of the Caltech HP Exemplar was made possible by a collaborative agreement between Caltech and Hewlett-Packard that will result in new technology, tools, and libraries to support the type of multidisciplinary research and metacomputing environment exemplified by the new NPACI award.

By providing access to new and emerging technologies for computing, networking, data storage, data acquisition, and archival functions as part of NPACI, Caltech's CACR will continue to pursue its application-driven approach to using and integrating new hardware and software technologies. In addition to the Caltech resources, UC Berkeley and UCSD will initially provide alternate architectures to NPACI. These alternate architectures will be used to explore different performance or price/performance regimes, facilitate the porting of software, assure competitive pricing from vendors, and provide a ready migration path should one of the alternatives become preeminent.

As the infrastructure enhancements needed to create the computing environment of the future include far more than hardware and network connections, Caltech is also contributing to NPACI through its participation in "thrust area teams," which will focus on key technologies and applications required for metacomputing, such as data-intensive computing; adaptable, scalable tools/environments; and interaction environments. Many aspects of the NPACI technology thrusts build upon projects initiated and led by Caltech's CACR, including the CASA Gigabit Testbed and the Scalable I/O Initiative.

The Digital Sky Project, led by Caltech professor Thomas A. Prince, CACR associate director, will be a primary data-intensive computing effort in NPACI. This innovative project will make early use of the NPACI resources. Large-area digital sky surveys are a recent and exciting development in astronomy. The combination of the NPACI Tflops/Tbyte computational resources with the recent large-area sky surveys supported by NSF and NASA at optical, infrared, and radio wavelengths will provide unparalleled new capability for astronomical research. The Digital Sky Project, anticipated to be used by the entire astronomical community of over 5,000 scientists and students, will provide a qualitatively different computational infrastructure for astronomical research.

Dr. Carl Kukkonen, manager of supercomputing at JPL, reports that NASA is excited about the opportunity to work with NPACI at the forefront of computing technologies. "We will use these computing resources to tackle the most challenging issues in spacecraft design and space science data analysis," said Kukkonen.

Rounding out Caltech participation in NPACI are efforts by Caltech faculty in three different NPACI applications thrusts: engineering, neuroscience, and molecular science. More specifically, infrastructure development in the areas of materials science, brain/neuron modeling, and molecular science will be led by Caltech professors William Goddard, Ferkel Professor of Chemistry and Applied Physics; James Bower, Associate Professor of Biology; and Aron Kuppermann, Professor of Chemical Physics, respectively.

More details on NPACI can be obtained at http://cacse.ucsd.edu/npaci.html. See http://hpcc997.external.hp.com/pressrel/nov96/18nov96h.htm for a press release on the Caltech-HP/Convex collaboration.

Writer: 
Robert Tindol

Question of the Week: Who Invented the Equal Sign, and Why?

Submitted by Pat Orr, Altadena, California, and answered by Tom Apostol, Professor of Mathematics Emeritus, Caltech.

According to the VNR Concise Encyclopedia of Mathematics, the equal sign was invented by Robert Recorde, the Royal Court Physician for England's King Edward VI and Queen Mary. Recorde, who lived from 1510 to 1558, was the most influential English mathematician of his day and, among other things, introduced algebra to his countrymen. He died in prison, although the record does not state why he was incarcerated. Hopefully, it had nothing to do with his mathematical activities.

Recorde first proposed the equal sign in a 1557 book named The Whetstone of Witte, in which he says, "And to avoide the tediouse repetition of these woordes: is equalle to: I will sette as I doe often in woorke use, a paire of paralleles, of Gemowe (or twin) lines of one lengthe, thus: = = = = = =, bicause noe 2 thynges can be moare equalle."

In addition to The Whetstone of Witte, Recorde also wrote a book on arithmetic called The Grounde of Artes (c. 1542), a book on popular astronomy called The Castle of Knowledge (1551), and a book on Euclidean geometry called The Pathewaie to Knowledge (1551). The Encyclopædia Britannica notes that Whetstone was his most influential work.

As for the reason he invented the equal sign, I think his own words say it best, if you can decipher the archaic language and odd spellings. The equal sign lets us avoid tedious repetition by providing a shorthand symbol that shows how unknown quantities relate to known quantities.

Writer: 
Robert Tindol

Caltech Scientists Invent Polymer For Detecting Blood Glucose

PASADENA— Scientists have designed a polymer that could vastly improve the way diabetics measure their blood glucose levels. The polymer is described in the current issue of Nature Biotechnology.

According to Dr. Frances Arnold, a professor of chemical engineering at the California Institute of Technology, the polymer is superior to the current enzyme-based glucose detectors because it is not of biological origin. The polymer will be easier to make and thus lead to cheaper and more reliable glucose sensors.

"This has the potential to help a lot of people, and that's what I find exciting," says Arnold. "A 1993 clinical study showed that if you monitor glucose carefully, the serious complications of diabetes such as gangrene and retinal damage could be reduced by 65 percent."

Arnold believes that her invention will improve the monitoring of glucose, especially for patients in developing countries of the world. Depending on the mechanisms devised for patient use, the polymer will likely be easy and cheap to manufacture and use, which could simplify and widen the practice of frequent testing of blood glucose throughout the day—a practice that many experts say is important to minimizing the complications of diabetes.

Also, the polymer will be more chemically stable and possibly less immunogenic in the human body than the enzymes currently available for glucose monitoring. This could make it more reliable for use in biosensors that remain in the body for extended periods.

At the heart of the polymer is a copper metal complex. The metal is held by a chelating agent that occupies three out of five or six possible "slots" for binding. The other two or three slots, however, can be used to indirectly measure glucose by examining the manner in which hydroxyl groups from the glucose bind.

"The net reaction with glucose is the release of a proton," Arnold explains. Ultimately, the polymer works with a pH meter because hydrogen ions are released from the polymer complex. More hydrogen ions means a more acidic solution (a lower pH), and an acidic response corresponds to high glucose levels in the blood.

And because the substance is nonbiological, it can bypass the blood's normal buffering capacity in order to work at optimal pH levels. This would allow for a simple and straightforward interaction with the blood that, coupled with the inexpensiveness of the materials, would allow for significant reductions in cost to the patient.

The cost reduction would be especially important in the Third World, where diabetes is on the rise.

Also involved in the research are Guohua Chen, a postdoctoral fellow at Caltech, and Vidyasankar Sundaresan, a Caltech graduate student. Former postdoctoral researchers on the project are Zhibin Guan and Chao-Tsen Chen.

Writer: 
Robert Tindol

Caltech Scientists Find Evidence For Massive Ice Age When Earth Was 2.4 Billion Years Old

PASADENA— Those who think the winter of '97 was rough should be relieved that they weren't around 2.2 billion years ago. Scientists have discovered evidence for an ice age at the time that was severe enough to partially freeze over the equator. In today's issue of Nature, California Institute of Technology geologists Dave Evans and Joseph Kirschvink report evidence that glaciers came within a few degrees of latitude of the equator when the planet was about 2.4 billion years old. They base their conclusion on glacial deposits discovered in present-day South Africa, plus magnetic evidence showing where South Africa's crustal plate was located at that time.

Based on that evidence, the Caltech researchers think they have documented the extremely rare "Snowball Earth" phenomenon, in which virtually the entire planet may have been covered in ice and snow. According to Kirschvink, who originally proposed the Snowball Earth theory, there have probably been only two episodes in which glaciation of the planet reached such an extent — one less than a billion years ago during the Neoproterozoic Era, and the one that has now been discovered from the Paleoproterozoic Era 2.2 billion years ago.

"The young Earth didn't catch a cold very often," says Evans, a graduate student in Kirschvink's lab. "But when it did, it seems to have been pretty severe."

The researchers collected their data by drilling rock specimens in South Africa and carefully recording the magnetic directions of the samples. From this information, the researchers then computed the direction and distance to the ancient north and south poles.

The conclusion was that the place in which they were drilling was 11 degrees (plus or minus five degrees) from the equator when Earth was 2.4 billion years old. Plate tectonic motions since that time have caused South Africa to drift all over the planet, to its current position at about 30 degrees south latitude. Additional tests showed that the samples were from glacial deposits, and further, were characteristic of a widespread region.
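
The paleolatitude itself is conventionally derived from the magnetic inclination recorded in the rock via the geocentric axial dipole relation, tan(I) = 2 tan(latitude). The short sketch below illustrates that standard calculation; the function name and the sample inclination are illustrative assumptions, not values taken from the Nature paper.

```python
import math

def paleolatitude_from_inclination(inclination_deg: float) -> float:
    """Geocentric axial dipole relation: tan(I) = 2 * tan(latitude)."""
    inclination_rad = math.radians(inclination_deg)
    return math.degrees(math.atan(math.tan(inclination_rad) / 2.0))

# Illustrative only: a recorded inclination near 22 degrees corresponds to a
# paleolatitude of roughly 11 degrees, the value reported for the South
# African glacial deposits.
print(round(paleolatitude_from_inclination(22.0), 1))  # -> 11.4
```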

Kirschvink and Evans say that the preliminary implications are that Earth can somehow manage to pull itself out of a period of severe glaciation. Because ice and snow tend to reflect sunlight much better than land and water, Earth would normally be expected to have a hard time reheating itself in order to leave an ice age. Thus, one would expect a Snowball Earth to remain frozen forever.

Yet, the planet obviously recovered both times from the severe glaciation. "We think it is likely that the intricacies of global climate feedback are not yet completely understood, especially concerning major departures from today's climate," says Evans. "If the Snowball Earth model is correct, then our planet has a remarkable resilience to abrupt shifts in climate.

"Somehow, the planet recovered from these ice ages, probably as a result of increased carbon dioxide — the main greenhouse gas."

Evans says that an asteroid or comet impact could have caused carbon dioxide to pour into the atmosphere, allowing Earth to trap solar energy and reheat itself. But evidence of an impact during this age, such as a remote crater, is lacking.

Large volcanic outpourings could also have released a lot of carbon dioxide, and other factors, such as sedimentary processes and biological activity, may have contributed as well.

At any rate, the evidence for the robustness of the planet and the life that inhabits it is encouraging, the researchers say. Not only did Earth pull itself out of both periods of severe glaciation, but many of the single-celled organisms that existed at the time managed to persevere.

Writer: 
Robert Tindol

State-of-the-Art Seismic Network Gets First Trial-by-Fire During This Morning's 5.4-magnitude Earthquake

PASADENA—Los Angeles reporters and camera crews responding to a 5.4-magnitude earthquake this morning got their first look at the new Caltech/USGS earthquake monitoring system.

The look was not only new but almost instantaneous. Within 15 minutes of the earthquake, Caltech seismologists had already printed out a full-color poster-sized map of the region to show on live TV, and had already posted the contour map on the Internet. Moreover, they were able to determine the magnitude of the event within five minutes — a tremendous improvement over the time it once took to confirm data.

"Today, we had a much better picture of how the ground responded to the earthquake than we've ever had in the past," said Dr. Lucile Jones, a U.S. Geological Survey seismologist who is stationed at Caltech. "This was the largest earthquake we've had since September of 1995, and was the first time we've been able to use the new instruments that we're still installing."

The new instruments are made possible by the TriNet Project, a $20.75-million initiative for providing a state-of-the-art monitoring network for Southern California. A scientific collaboration between Caltech, the USGS and the California Department of Conservation's Division of Mines and Geology, the project is designed to provide real-time earthquake monitoring and, ultimately, to lead to early-warning technology to save lives and mitigate urban damage after earthquakes occur.

"The idea of Trinet was to get quick locations and magnitudes out, to get quick estimates of the distribution of the ground shaking, and a prototype early-warning system," Caltech seismic analyst Egill Hauksson said an hour after this morning's earthquake. "The first two of those things are already in progress. We are in the midst of deploying hardware in the field and developing data-processing software." TriNet was announced earlier this year when funding was approved by the Federal Emergency Management Agency. The new system relies heavily on recent advances in computer communications technology and data processing.

The map printed out this morning (the ShakeMap) is just a preview of future TriNet products. Caltech seismologist Kate Hutton gave a number of TV interviews in front of the map this morning. The map was noteworthy not only for the speed with which it was produced, but also for the manner in which information about the earthquake was relayed.

Instead of charting magnitudes, the map used contour lines to show the velocity at which the ground moved. The most rapid movement in this morning's 5.4 earthquake was about two inches per second at the epicenter, and this was clearly indicated by the innermost circle on the color map. Moving outward from the epicenter, the velocity of ground movement decreased, as indicated by the lower velocity numbers on the outer circles.

The maps can also be printed out to show ground accelerations, which are especially useful for ascertaining likely damage in an earthquake area, Hutton said.

Later, TriNet will provide prototype early warnings to distant locations in the Los Angeles area that potentially damaging ground shaking is on the way. After an earthquake occurs, the seismic waves travel at a few kilometers per second, while communications can travel at the speed of light. Thus, Los Angeles could eventually receive warning of a major earthquake on the San Andreas fault some 30 to 60 seconds before the heavy shaking actually begins in the city.
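
The 30-to-60-second figure follows from simple arithmetic: warning time is roughly the distance to the rupture divided by the speed of the seismic waves, minus the few seconds needed to detect the event and issue the alert. The sketch below illustrates that reasoning; the wave speed, detection delay, and distances are representative assumptions, not numbers from the TriNet design documents.

```python
def warning_time_s(distance_km: float, wave_speed_km_s: float = 3.5,
                   detection_delay_s: float = 5.0) -> float:
    """Approximate seconds of warning before strong shaking arrives from a distant rupture."""
    travel_time = distance_km / wave_speed_km_s   # damaging waves travel a few km/s
    return travel_time - detection_delay_s        # the alert itself travels at nearly light speed

# Illustrative: a San Andreas rupture roughly 120 to 230 km from downtown Los Angeles
for distance in (120, 230):
    print(f"{distance} km -> about {warning_time_s(distance):.0f} s of warning")
```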

The total cost of the project is $20.75 million. FEMA will provide $12.75 million, the USGS has provided $4.0 million, and the balance is to be matched by Caltech ($2.5 million) and the DOC ($1.75 million). Several private sector partners, including GTE and Pacific Bell, are assisting Caltech with matching funds for its portion of the TriNet balance.

The TriNet Project is being built upon existing networks and collaborations. Southern California's first digital network began with the installation of the seismographs known as TERRAscope, and was made possible by a grant from the L.K. Whittier Foundation and the ARCO Foundation. Pacific Bell, through its CalREN Program, has also provided new frame-relay digital communications technology.

A major step in the modernization came in response to the Northridge earthquake, when the USGS received $4.0 million from funds appropriated by Congress to the National Earthquake Hazard Reduction Program. This money was the first step in the TriNet project and the USGS has been working with Caltech for the last 27 months to begin design and implementation. Significant progress has already been made and new instrumentation is now operational:

o Thirty state-of-the-art digital seismic stations are operating with continuous communication to Caltech/USGS

o Twenty strong-motion sites have been installed near critical structures

o Two high-rise buildings have been instrumented

o Alarming and processing software have been designed and implemented

o Automated maps of contoured ground shaking are available on the Web within a few minutes after felt and damaging earthquakes (http://www-socal.wr.usgs.gov).

DOC's strong-motion network in Southern California is a key component of the TriNet Project, contributing 400 of the network's 650 sensing stations. The expansion and upgrade of DOC's network through this project's funding will provide much better information about strong shaking than was available for the Northridge earthquake. This data is the key to improving building codes for more earthquake-resistant structures.

Writer: 
Robert Tindol

Caltech Question of the Week: Do Earth's Plates Move In a Certain Direction?

Submitted by Frank Cheng, Alhambra, California, and answered by Joann Stock, Associate Professor of Geology and Geophysics, Caltech.

Each plate is moving in a different direction, but the exact direction depends on the "reference frame," or viewpoint, in which you are looking at the motion. The background to this question is the fact that there are 14 major tectonic plates on Earth: the Pacific, North America, South America, Eurasia, India, Australia, Africa, Antarctica, Cocos, Nazca, Juan de Fuca, Caribbean, Philippine, and Arabia.

Each plate is considered to be "rigid," which means that the plate is moving as a single unit on the surface of Earth. We can describe the relative motion between any pair of plates. For example, the North America plate and the Eurasia plate are moving away from each other in the North Atlantic Ocean, resulting in seafloor spreading along the mid-Atlantic ridge, which is the boundary between these two plates. In this case, if you imagine Eurasia to be fixed, the North America plate would be moving west.

But it is equally valid to imagine that the North America plate is fixed, in which case the Eurasia plate would be moving east. If you think about the Pacific–North America plate boundary (along the San Andreas fault in Southern California), the motion of the North America plate is different; the North America plate is moving southeast relative to the Pacific plate.

This doesn't mean that the North America plate is moving in different directions at once. The difference is due to the change of reference frame, from the Eurasia plate to the Pacific plate.

Sometimes we describe plate motions in terms of other reference frames that are independent of the individual plates, such as some external (celestial) reference frame or more slowly moving regions of Earth's interior. In this case, each plate has a unique motion, which may change slowly over millions of years.

Technically, the plate motion in any reference frame is described by an angular velocity vector. This corresponds to the slow rotation of the plate about an axis that goes from Earth's center along an imaginary line to the "pole" of rotation somewhere on Earth's surface.
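
Written out, the standard kinematic statement (not anything specific to this answer) is that the local velocity of a point on a plate is the cross product of the plate's angular velocity vector with the point's position vector from Earth's center:

```latex
\vec{v} = \vec{\omega} \times \vec{r}
```

Points near the pole of rotation therefore move slowly, while points 90 degrees away from it move fastest.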

Writer: 
Robert Tindol

Researchers Establish Upper Limit of Temperature at the Core-mantle Boundary of Earth

PASADENA— Researchers at the California Institute of Technology have determined that Earth's mantle reaches a maximum temperature of 4,300 degrees Kelvin. The results are reported in the March 14, 1997, issue of the journal Science.

According to geophysics professor Tom Ahrens and graduate student Kathleen Holland, the results are important for setting very reliable bounds on the temperature of Earth's interior. Scientists need to know very precisely the temperature at various depths in order to better understand large-scale processes such as plate tectonics and volcanic activity, which involves movement of molten rock from the deep interior of the Earth to the surface.

"This nails down the maximum temperature of the lower-most mantle, a rocky layer extending from a depth of 10 to 30 kilometers to a depth of 2900 kilometers, where the molten iron core begins," Ahrens says. "We know from seismic data that the mantle is solid, so it has to be at a lower temperature than the melting temperature of the materials that make it up."

In effect, the research establishes the melting temperature of the high-pressure form of the crystal olivine. At normal pressures, olivine is known by the formula (Mg,Fe)2SiO4, and is a semiprecious translucent green gem. At very high pressures, olivine breaks down into magnesiowüstite and a mineral with the perovskite structure. Together these two minerals are thought to make up the bulk of the materials in the lower mantle.

The researchers achieved these ultra-high pressures in their samples by propagating a shock wave into them, using a high-powered cannon apparatus, called a light-gas gun. This gun launches projectiles at speeds of up to 7 km/sec. Upon impact with the sample, a strong shock wave causes ultra-high pressures to be achieved for only about one-half a millionth of a second. The researchers have established the melting temperature at a pressure of 1.3 million atmospheres. This is the pressure at the boundary of the solid lower mantle and liquid outer core.
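
For reference, 1.3 million atmospheres corresponds to roughly 130 gigapascals, the unit in which core-mantle boundary pressure is usually quoted today. This is a straightforward unit conversion, not a figure taken from the Science paper:

```latex
1.3\times10^{6}\ \mathrm{atm}\ \times\ 1.013\times10^{5}\ \mathrm{Pa/atm}\ \approx\ 1.3\times10^{11}\ \mathrm{Pa}\ \approx\ 130\ \mathrm{GPa}
```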

"We have replicated the melting which we think occurs in the deepest mantle of the Earth," says Holland, a doctoral candidate in geophysics at Caltech. "This study shows that material in the deep mantle can melt at a much lower temperature than had been previously estimated. It is exciting that we can measure phase transitions at these ultra-high pressures."

The researchers further note that the temperature of 4,300 degrees would allow partial melting in the lowest 40 kilometers or so of the lower mantle. This agrees well with seismic analysis of waveforms conducted in 1996 by Caltech Professor of Seismology Donald Helmberger and his former graduate student Edward Garnero. Their research suggests that at the very lowest reaches of the mantle there is a partially molten layer, called the ultra-low-velocity zone.

"We're getting into explaining how such a thin layer of molten rock could exist at great depth," says Ahrens. "This layer may be the origin layer that feeds mantle plumes, the volcanic edifices such as the Hawaiian island chain and Iceland. "We want to understand how Earth works."

Writer: 
Robert Tindol

Caltech Geologists Find New Evidence That Martian Meteorite Could Have Harbored Life

PASADENA—Geologists studying Martian meteorite ALH84001 have found new support for the possibility that the rock could once have harbored life.

Moreover, the conclusions of California Institute of Technology researchers Joseph L. Kirschvink and Altair T. Maine, and McGill University's Hojatollah Vali, also suggest that Mars had a substantial magnetic field early in its history.

Finally, the new results suggest that any life on the rock existing when it was ejected from Mars could have survived the trip to Earth.

In an article appearing in the March 13 issue of the journal Science, the researchers report that their findings have effectively resolved a controversy about the meteorite that has raged since evidence for Martian life was first presented in 1996. Even before this report, other scientists suggested that the carbonate globules containing the possible Martian fossils had formed at temperatures far too hot for life to survive. All objects found on the meteorite, then, would have to be inorganic.

However, based on magnetic evidence, Kirschvink and his colleagues say that the rock has certainly not been hotter than 350 degrees Celsius in the past four billion years—and probably has not been above the boiling point of water. At these low temperatures, bacterial organisms could conceivably survive.

"Our research doesn't directly address the presence of life," says Kirschvink. "But if our results had gone the other way, the high-temperature scenario would have been supported."

Kirschvink's team began their research on the meteorite by sawing a tiny sample in two and then determining the direction of the magnetic field held by each. This work required the use of an ultrasensitive superconducting magnetometer system, housed in a unique, nonmagnetic clean lab facility. The team's results showed that the sample in which the carbonate material was found had two magnetic directions—one on each side of the fractures.

The distinct magnetic directions are critical to the findings, because any weakly magnetized rock will reorient its magnetism to align with the local field direction after it has been heated to high temperatures and cooled. If two attached rock fragments initially hold separate magnetic directions but are then heated past a certain critical temperature and cooled, they will end up with a single, uniform direction.

The igneous rock (called pyroxenite) that makes up the bulk of the meteorite contains small inclusions of magnetic iron sulfide minerals that will entirely realign their field directions at about 350°C, and will partially align the field directions at much lower temperatures. Thus, the researchers have concluded that the rock has never been heated substantially since it last cooled some four billion years ago.

"We should have been able to detect even a brief heating event over 100 degrees Celsius," Kirschvink says. "And we didn't."

These results also imply that Mars must have had a magnetic field similar in strength to that of the present Earth when the rock last cooled. This is very important for the evolution of life, as the magnetic field will protect the early atmosphere of a planet from being sputtered away into space by the solar wind. Mars has since lost its strong magnetic field, and its atmosphere is nearly gone.

The fracture surfaces on the meteorite formed after it cooled, during an impact event on Mars that crushed the interior portion. The carbonate globules that contain putative evidence for life formed later on these fracture surfaces, and thus were never exposed to high temperatures, even during their ejection from the Martian surface nearly 15 million years ago, presumably from another large asteroid or comet impact.

A further conclusion one can reach from Kirschvink's work is that the inside of the meteorite never reached high temperatures when it entered Earth's atmosphere. This means, in effect, that any remaining life on the Martian meteorite could have survived the trip from Mars to Earth (which can take as little as a year, according to some dynamic studies), and could have ridden the meteorite down through the atmosphere by residing in the interior cracks of the rock and been deposited safely on Earth.

"An implication of our study is that you could get life from Mars to Earth periodically," Kirschvink says. "In fact, every major impact could do it." Kirschvink's suggested history of the rock is as follows:

The rock crystallized from an igneous melt some 4.5 billion years ago and spent about half a billion years on the primordial planet, being subjected to a series of impact-related metamorphic events, which included formation of the iron sulfide minerals.

After final cooling in the ancient Martian magnetic field about four billion years ago, the rock would have had a single magnetic field direction. Following this, another impact crushed parts of the meteorite without heating it, and caused some of the grains in the interior to rotate relative to each other. This led to a separation of their magnetic directions and produced a set of fracture cracks. Aqueous fluids later percolated through these cracks, perhaps providing a substrate for the growth of Martian bacteria. The rock then sat more or less undisturbed until a huge asteroid or comet smacked into Mars 15 million years ago. The rock wandered in space until about 13,000 years ago, when it fell on the Antarctic ice sheet.

Writer: 
Robert Tindol

Scientists Find "Good Intentions" in the Brain

PASADENA—Neurobiologists at the California Institute of Technology have succeeded in peeking into one of the many "black boxes" of the primate brain. A study appearing in the March 13 issue of the journal Nature describes an area of the brain where plans for actions are formed.

It has long been known that we gain information through our senses and then respond to our world with actions via body movements. Our brains are organized accordingly, with some sections processing incoming sensory signals such as sights and sounds, and other sections regulating motor outputs such as walking, talking, looking, and reaching. What has puzzled scientists, however, is where in the brain thought is put into action. Presumably there must be an area, between the incoming sensory areas and the outgoing motor areas, that decides what we will do next.

Richard Andersen, James G. Boswell Professor of Neuroscience at Caltech, along with Senior Research Fellow Larry Snyder and graduate student Aaron Batista, chose the posterior parietal cortex as the likely candidate to perform such decisions. This is a high-functioning cognitive area and is the endpoint of what scientists call the visual "where" pathway. Lesions to the parietal cortex of humans result in loss of the ability to appreciate spatial relationships and to navigate accurately.

As Michael Shadlen of the University of Washington says in the Nature "News and Views" commentary on the latest findings, "Nowhere in the brain is the connection between body and mind so conspicuous as in the parietal lobes—damage to the parietal cortex disrupts awareness of one's body and the space that it inhabits."

It is here, Andersen postulates, that incoming sensory signals overlap with outgoing movement commands, and it is here that decisions and planning occur. Numerous investigations had assumed that a sensory map of external space must exist within the parietal cortex, so that certain subsections would be responsible for certain spatial locations of objects, such as "up and to the left" or "down and to the right." Previous results from Andersen's own lab, however, had led him to question whether absolute space was the driving feature of the posterior parietal map or whether, instead, the intended movement plan was the determining factor in organizing the area.

In a series of experiments designed so that the scientists could "listen in" on the brain cells of monkeys at work, the animals were taught to watch a signal light and, depending on its color, to either reach to or look at the target. When the signal was green they were to reach and when it was red they were only to look at the target. An important additional twist to the study was that the monkeys had to withhold their responses for over a second.

The scientists measured neural activity during this delay when the monkeys had planned the movement but not yet made it. What they found was that different cells within different regions of the posterior parietal cortex became active, depending not so much on where the objects were but rather on which movements were required to obtain them. It seems then that the same visual input activates different subareas depending on how the animal plans to respond.

According to Andersen, this result shows that the pathway through the visual cortex that tells us where things are, ends in a map of intention rather than a map of sensory space as had been previously thought. According to Shadlen these results are intriguing because they indicate that "for the brain, spatial location is not a mathematical abstraction or property of a (sensory) map, but involves the issue of how the body navigates its hand or gaze." Andersen feels the study is important because it demonstrates that "our thoughts are more directly tied to our actions than we had previously imagined, and the posterior parietal cortex appears to be organized more around our intentions than our sensations."

Writer: 
Robert Tindol

Caltech Chemists Design Molecule To Repair a Type of DNA Damage

PASADENA—Chemists have found a way to repair DNA molecules that have been damaged by ultraviolet radiation. The research is reported in the March 7, 1997, issue of the journal Science.

In the cover article, California Institute of Technology Professor of Chemistry Jacqueline K. Barton and her coworkers Peter J. Dandliker, a postdoctoral associate, and R. Erik Holmlin, a graduate student, report that the new procedure reverses thymine dimers, a well-known type of DNA abnormality caused by exposure to ultraviolet light. By designing a synthetic molecule containing rhodium, the researchers have succeeded in repairing the damage and returning the DNA to its normal state.

The research is also significant in that the rhodium complex can be attached to the end of the DNA strand and repair the damaged site even when it is much farther up the helix.

"What I think is exciting is that we can use the DNA to carry out chemistry at a distance," says Barton. "What we're really doing is transferring information along the helix."

A healthy DNA molecule looks something like a twisted ladder. The two "rails" of the ladder, the DNA backbones, are connected by "rungs," the DNA bases adenine, thymine, cytosine, and guanine, which pair up in units called base pairs to form the helical stack.

Thymine dimers occur when two neighboring thymines on the same strand become linked together. The dimer, once formed, leads to mutations because of mispairings when new DNA is made. If the thymine dimers are not repaired, mutations and cancer can result.

The new method repairs the thymine dimers at the very first stage, before mutations can develop. The rhodium complex is exposed to normal visible light, which triggers an electron transfer reaction to repair the thymine dimer. The rhodium complex can either act locally on a thymine dimer lesion on the DNA strand, or can be tethered to the end of the DNA helix to work at a distance.

In the latter case, the electron works its way through the stack of base pairs. The repair efficiency doesn't decrease as the tether point is moved away from the site of damage, the researchers have found. However, the efficiency of the reaction is diminished when the base pair stack, the pathway for electron transfer, is disrupted.

"This argues that the radical, or electron hole, is migrating through the base pairs," Barton says. "Whether electron transfer reactions on DNA also occur in nature is something we need to find out. We have found that this feature of DNA allows one to carry out chemical reactions from a distance."

Barton cautions that the discovery does not represent a new form of chemotherapy. However, the research could point to new protocols for dealing with the molecular changes that precede mutations and cancer.

"This could give us a framework to consider new strategies," she says. This research was funded by the National Institutes of Health. Dandliker is a fellow of the Cancer Research Fund of the Damon Runyon-Walter Winchell Foundation, and Holmlin is a National Science Foundation predoctoral fellow.

Writer: 
Robert Tindol
