Caltech Named Recipient of Federal Computational Science and Simulation Contract

WASHINGTON—The California Institute of Technology has been awarded a multimillion-dollar contract as part of a major new Department of Energy (DOE) effort to advance computational modeling.

The five-year contract to Caltech is one of five announced at a press conference in Washington today by DOE Secretary Federico Peña as part of the new 10-year, $250 million Academic Strategic Alliances Program (ASAP) of the Accelerated Strategic Computing Initiative (ASCI). The goal of the ASCI research program is to ensure the safety and reliability of America's nuclear stockpile without actual nuclear testing.

Academic institutions chosen to participate in the ASAP program will not be involved in research related to nuclear weapons. Rather, each university will pursue the simulation of an overarching application and will collaborate with the national laboratories in developing the computational science and infrastructure required for "virtual testing." In the process, scientists say, the program will also pave the way for significant advances in a host of peacetime applications requiring high-performance computing.

"President Clinton has challenged us to find a way to keep our nuclear stockpile safe, reliable and secure without nuclear testing," said Secretary Peña. "We're going to meet his challenge through computer simulations that verify the safety, reliability and performance of our nuclear weapons stockpile. I believe these Alliances will produce a flood of new technologies and ideas that will improve the quality of our lives and boost our economy. In fact — with the Academic Strategic Alliance Program in place — Americans will begin to see the results, as the acronym suggests, ASAP."

Caltech's role in the ASCI-ASAP initiative will be to model the response of materials to intense shock waves caused by explosions or impact at high velocity. According to faculty participants, the research will be of great benefit to a number of civilian applications where the behavior of materials exposed to shock waves is important.

Professor Steven Koonin, Caltech's vice president and provost and a professor of theoretical physics, commented that "this grant will enable Caltech researchers to advance the frontiers of large-scale computer simulation, to develop the algorithms and software that can exploit the extraordinarily capable hardware available.

"It is also important that our ASCI effort will educate students in broadly applicable simulation technology," Koonin added. "And by strengthening Caltech's ties with the national laboratories, the Institute will be contributing to the major national goal of science-based stockpile stewardship."

Dan Meiron, professor of applied mathematics at Caltech and principal investigator of the project, said that "the ASAP research program is unique in that by posing the challenge of developing the large-scale modeling and simulation capability required to address our particular overarching application, ASCI pushes multidisciplinary research to a new level."

Dr. Paul Messina, Caltech's assistant vice president for scientific computing and director of the Center for Advanced Computing Research (CACR), said that the ASCI initiative is an important step toward computational fidelity. "The exciting thing for me is the tremendous progress we'll make in computational science and engineering.

"This major project is unique in that it requires the integration of software components developed by researchers from a number of disciplines." Messina added that the ASCI initiative will lead quickly to advances in both computer hardware and software.

"The proposed research will involve all three of the state-of-the-art ASCI-class machines," Messina said, adding that these three computers are located at the Livermore, Sandia, and Los Alamos national labs. The first of the machines that was completed, which is located at Sandia, recently became the first computer to complete a trillion numerical computations in a second.

"Such computational power is vital for success of the ASCI initiative, Messina said. "The data sets generated by these computations are very large. A big part of the program is how to manage those computational resources optimally when you have thousands of processors, and how to support one overarching application when you have a large variety of length and time scales."

Caltech's proposal to the DOE outlines the construction of "a virtual shock physics facility in which the full three-dimensional response of a variety of target materials can be computed from a wide range of compressive, tensional, and shear loadings, including those loadings produced by detonation of energetic materials." Goals of the research will include improving the ability to compute experiments employing shock and detonation waves, computing the dynamic response of materials to the waves, and validating the computations against real-world experimental data.

These shock waves will be simulated as they pass through various phases (i.e., gas, liquid, and solid). The work could have applications for the synthesis of new materials or the interactions of explosions with structures. The work will also provide lab scientists in the federal Science-Based Stockpile Stewardship (SBSS) program a tool to simulate high-explosive detonation and ignition.

The ASCI-ASAP program at Caltech will involve the research groups of 18 Caltech professors from across the campus, including Tom Ahrens, a geophysicist; Joe Shepherd, an aeronautics engineer; Oscar Bruno, an applied mathematician; William Goddard, a chemist; Tom Tombrello, a physicist; and Mani Chandy and Peter Schröder, computer scientists. James Pool, deputy director of the CACR, will serve as executive director for the project.

Caltech is the lead university of the ASCI-ASAP contract to simulate the dynamic response of materials. Also participating with Caltech in this project are the Carnegie Institution of Washington, Brown University, the University of Illinois, Indiana University, and the University of Tennessee.

The other schools to receive ASCI-ASAP contracts are the University of Chicago, the University of Utah, Stanford University, and the University of Illinois at Urbana-Champaign.

Caltech Question of the Week: How does molten lava in the center of Earth replenish itself?

Question of the Month Submitted by Greg McNeil, Monrovia, California.

Answered by Thomas Ahrens, professor of geophysics, Caltech.

The replenishment of lava (the molten rock that flows out onto the surface from Earth's rocky silicate mantle) back into the Earth is a key process that appears to be unique to our planet, at least relative to the other silicate-mantle planets with iron cores: Mars, Venus, and Mercury.

Basalt is the most prevalent lava type found on Earth. This is true also for the other above-mentioned planets. Basalt rock, which has a specific gravity of 2.7 to 3.1 (that is, it is 2.7 to 3.1 times as dense as water), consists of two key mineral groups: plagioclase, which contains mostly calcium, sodium, potassium, aluminum, silicon, and oxygen; and pyroxene, which contains mostly calcium, magnesium, iron, silicon, and oxygen. These essential minerals react at pressures in the range of 10 to 15 thousand atmospheres to form a denser garnet-bearing rock, called eclogite. Eclogite has a specific gravity in the range of 3.4 to 3.5.

Thus, as a large pile of basalt accumulates on the Earth's surface and the basalt-eclogite transition begins, a process called subduction, or sinking, occurs. Sections of Earth's crust containing basalt, or rock of similar composition, with thicknesses of 30 km or greater transform to eclogite. Because their specific gravity is then greater than that of the underlying mantle (about 3.3), these materials sink, or subduct, into the Earth. The basalt-eclogite reaction requires elevated temperatures and some moisture.
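The density comparison at the heart of that argument can be restated in a few lines of code. The following is an illustrative sketch only, using the specific-gravity figures quoted above; nothing in it comes from Ahrens beyond those numbers.

    # Illustrative sketch of the buoyancy argument above, using the specific-gravity
    # figures quoted in this answer: a rock can founder into the mantle only if it
    # is denser than the mantle material beneath it.
    MANTLE_SPECIFIC_GRAVITY = 3.3   # underlying mantle, as cited above

    rocks = {
        "basalt":   (2.7, 3.1),     # specific-gravity ranges quoted above
        "eclogite": (3.4, 3.5),
    }

    for name, (low, high) in rocks.items():
        sinks = low > MANTLE_SPECIFIC_GRAVITY
        verdict = ("denser than the mantle, so it subducts" if sinks
                   else "buoyant, so it stays near the surface")
        print(f"{name}: specific gravity {low}-{high} -> {verdict}")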

If the temperature becomes too high, however, greater than 1,300 degrees centigrade, the dense mineral garnet, the major dense constituent of eclogite, will not be stable and subduction does not occur. This seems to be the case for Venus, which both has a hotter interior and lacks even a fraction of the percentage of water required to assist the basalt-eclogite reaction. This is also true for the sun's nearest planet neighbor, Mercury.

Interestingly, Mars' interior appears to be too cold for eclogite formation. The subduction process is a key element in the process of recycling rock back into the Earth. Earth scientists began to understand this process in the early 1960s, and it is now recognized as a major feature of the theory of plate tectonics. Moreover, exploration of the properties of the other silicate mantle planets with iron cores suggests that only the Earth has active plate tectonics.


Caltech Installs New High-Performance HP Exemplar System at Center for Advanced Computing Research

PASADENA—At a dedication ceremony to be held Monday, June 9, the California Institute of Technology will showcase the most powerful technical computing system developed by the Hewlett-Packard Company, a 256-CPU Exemplar technical server. The Exemplar system, which features peak performance of 184 gigaflops, 64 gigabytes of memory, and one terabyte of attached disk capacity, will serve as the premier computing resource for Caltech's Center for Advanced Computing Research (CACR) and NASA's Jet Propulsion Laboratory.

Computational scientists and engineers at Caltech and JPL will use the system for a number of "grand challenge" research applications in science and engineering, including chemistry, biology, astrophysics, computational fluid dynamics, nuclear physics, geophysics, environmental science, space science, and scientific visualization. For example, the Exemplar will be used to model the world's atmosphere and oceans to better understand climatic shifts, study the dynamics of pollutants in the atmosphere, simulate the evolution of the universe, study the collisions between electrons and molecules that drive the chemistry in plasma reactors used in microelectronics manufacturing, and provide interactive access to large multispectral astronomy databases, creating a "digital sky."

The installation of a 256-processor Exemplar is the first phase of a three-phase collaborative project between HP and Caltech. The collaboration will include work on a range of systems software areas, such as implementing efficient support of shared-memory programming on a large number of processors. The second phase of the collaboration includes installation of an Exemplar system based on the Intel IA-64 processor. (IA-64 is the 64-bit Intel architecture jointly defined by HP and Intel.) The final project phase provides for expansion of that Exemplar system to provide peak performance of as much as one teraflop (one trillion computations per second) and one terabyte (1,024 gigabytes) of physical memory.

"The collaboration between Caltech and HP is a strategic scalable computing partnership that will provide a powerful research resource with a single programming model that can be applied to computational science and engineering applications using UNIX systems, even for programs so large that their execution requires the use of the entire system," said Paul Messina, CACR director and assistant vice president for scientific computing at Caltech. "The Exemplar server is a very powerful system with features that researchers tackling today's ever larger and more complex applications have been seeking for a long time."

"NASA scientists at the Jet Propulsion Laboratory will employ the Exemplar system to tackle the most challenging issues in spacecraft design and space science data analysis," said Carl Kukkonen, manager of supercomputing at JPL. According to Kukkonen, the Exemplar will be used to analyze and visualize data from Mars, calculate the precise gravitational fields of Mars and the moon, model the solar wind, conduct high-fidelity modeling of Earth's oceans, process synthetic aperture radar (SAR) images in near real time, and design and simulate new generations of spacecraft.

"We are very enthusiastic about collaborating with the innovative team at Caltech to develop the 'commodity teraflops' that the market is looking for," said Steve Wallach, chief technology officer at HP's Convex Division, part of the Enterprise Server Group. "Because of HP's long-term commitment to high-end technical computing, we understand the importance of providing the platforms that advanced researchers require. In technical computing, there is no doubt that the supercomputer performance of today will be the workstation performance of tomorrow and the desktop of the future."

Caltech's Center for Advanced Computing Research conducts multidisciplinary application-driven research in computational science and engineering and participates in a variety of high-performance computing and communications activities. To carry out its mission, the CACR focuses on providing a rich, creative intellectual environment that cultivates multidisciplinary collaborations and on harnessing new technologies to create innovative large-scale computing environments.

JPL's Supercomputing Project has partnered with Caltech for the last decade to provide state-of-the-art computing facilities to enable breakthrough science and engineering for JPL's NASA space missions.

Hewlett-Packard Company is a leading global provider of computing, Internet and Intranet solutions, services, communications products and measurement solutions, all of which are recognized for excellence in quality and support. It is the second-largest computer supplier in the United States, with computer-related revenue in excess of $31.4 billion in its 1996 fiscal year. HP has 114,600 employees and had revenue of $38.4 billion in its 1996 fiscal year.

Note to editors: Dedication ceremonies for the new computing system begin at 10 a.m. in Ramo Auditorium on the Caltech campus. Luncheon reception immediately following; tours of the CACR computing facilities 11:30 a.m. to 3:30 p.m. Information about HP and its products can be found on the World Wide Web at http://www.hp.com. More information on Caltech CACR's research efforts and Caltech scientific and engineering applications can be found on the World Wide Web at http://www.cacr.caltech.edu/research.

Caltech Question of the Week: What Would Be the Effect If All Plate Tectonics Movement Stopped Forever?

Submitted by Jack Collins, Duarte, California, and answered by Kerry Sieh, Professor of Geology, Caltech.

If all plate motion stopped, Earth would be a very different place. The agent responsible for most mountains as well as volcanoes is plate tectonics, so much of the activity that pushes up new mountain ranges and creates new land from volcanic eruptions would be no more. The volcanoes of the Pacific Ring of Fire, in South and North America, Japan, the Philippines, and New Zealand, for example, would shut off, and the steady southeastward migration of volcanic activity along the Hawaiian Islands would stop. Volcanism would just continue on the Big Island. There would also be far fewer earthquakes, since most are due to motion of the plates.

Erosion would continue to wear the mountains down, but with no tectonic activity to refresh them, over a few million years they would erode down to low rolling hills. So the whole planet would be flatter, and the topography would be a heck of a lot less exciting. You'd probably be less inclined to go trekking in Nepal.

One big problem with plate tectonics stopping is that plate motion is the mechanism by which Earth is cooling down and getting rid of its internal heat. If the plates stopped moving, the planet would have to find a new and efficient means to blow off this heat. It's not clear what that mechanism might be.

Caltech Question of the Week: Why Does There Need To Be Water On a Planet or Moon To Have Life?

Question of the Month Submitted by Traci Salazar, 13, Alhambra, California, and answered by Richard Terrile, scientist, Jet Propulsion Laboratory, Caltech.

Water is a tremendously important ingredient in that it's a very good solvent and a very good medium for chemical reactions. It's also very common.

Water is nearly everywhere in the solar system, it's easy to make from two ingredients (hydrogen and oxygen) that are both very common throughout the solar system and the universe, and it can exist in some truly harsh environments.

In fact, the harsh Earth environments in which liquid water is found give us good reason to think that water could be associated with extraterrestrial life. Anywhere you look on Earth, no matter how inhospitable the environment seems, liquid water apparently always harbors life of some sort. This is true for subterranean rocks as well as superheated ocean vents. So since you can find life in such Earth environments, it makes sense that life could also exist in harsh environments elsewhere.

Thus, water is a great substance for conducting chemistry. And life is very sophisticated chemical activity.

Caltech Question of the Week: How Can Different Kinds of Vegetables Contain Different Vitamins When Grown in the Same Soil?

Question of the Month Submitted by Doris Bower, Arcadia, Calif., and answered by Dr. Elliot Meyerowitz, Professor of Biology, Caltech.

Only some vitamins are found in plants—others we must obtain from other sources such as bacteria and yeast, or from animal products. Vitamin B12 is an example of one vitamin that is not found in higher plants. Some plants are good sources of certain vitamins, however, or to be precise, they are good sources either of vitamins (such as vitamin B1 in rice husks or vitamin C in citrus fruits) or of provitamins, which are converted to vitamins in our bodies. An example is beta-carotene, also known as provitamin A, which is converted in our livers to vitamin A. Beta-carotene is abundant in carrots.

There are two answers to your question. The first is that each species of plant has its own methods of regulating the biosynthetic pathways by which they make provitamins, vitamins, and other substances. The plants themselves don't get vitamins from the soil, but they do get the raw materials they need to manufacture these vitamins for their own needs. These raw materials are phosphorus, potassium, nitrates, and about a dozen other elements in lesser quantities, such as iron and magnesium.

Also, the plants take in carbon dioxide from the air, and energy from sunlight. Each species of plant synthesizes different amounts of vitamins and provitamins from the available nutrients, both because of species differences and because plants regulate their biosynthetic pathways in response to the environment. Thus, even in the same environment, different species or varieties of plants will make different amounts of vitamins, and the same variety of plant will make different amounts of vitamins in different environments.

The second answer is that we eat different parts of different plants, and different parts will have different vitamin concentrations. Depending on the vegetable in question, we may eat the leaves, roots, or fruits. So you may get a good dose of vitamin A if you eat the root of the carrot plant. But if you develop a taste for fruits such as squash, you probably won't get nearly as much.

In the case of carrots and provitamin A, there is more to the story—plants synthesize beta-carotene as an aid to photosynthesis, and also as a pigment. As an aid to photosynthesis it is required mainly in leaves, where it is usually found in high concentrations. But beta-carotene is also a pigment; it is the substance that gives most of the orange color to carrots. Since consumers prefer bright orange carrots, plant breeders have deliberately bred carrots that contain high levels of beta-carotene.

Writer: 
Robert Tindol

Caltech Astronomers Crack the Puzzle of Cosmic Gamma-Ray Bursts

Additional images can be obtained on the Caltech astronomy web site at http://astro.caltech.edu

PASADENA—A team of Caltech astronomers has pinpointed a gamma-ray burst several billion light-years away from the Milky Way. The team was following up on a discovery made by the Italian/Dutch satellite BeppoSAX.

The results demonstrate for the first time that at least some of the enigmatic gamma-ray bursts that have puzzled astronomers for decades are extragalactic in origin.

The team has announced the results in the International Astronomical Union Circular, which is the primary means by which astronomers alert their colleagues of transient phenomena. The results will be published in scientific journals at a later date.

Mark Metzger, a Caltech astronomy professor, said he was thrilled by the result. "When I finished analyzing the spectrum and saw features, I knew we had finally caught it. It was a stunning moment of revelation. Such events happen only a few times in the life of a scientist."

According to Dr. Shri Kulkarni, an astronomy professor at Caltech and another team member, gamma-ray bursts occur a couple of times a day. These brilliant flashes seem to appear from random directions in space and typically last a few seconds.

"After hunting clues to these bursts for so many years, we now know that the bursts are in fact incredibly energetic events," said Kulkarni.

For team member and astronomy professor George Djorgovski, "Gamma-ray bursts are one of the great mysteries of science. It is wonderful to contribute to its unraveling."

The bursts of high-energy radiation were first discovered by military satellites almost 30 years ago, but so far their origin has remained a mystery. New information came in recent years from NASA's Compton Gamma-Ray Observatory satellite, which has so far detected several thousand bursts. Nonetheless, the fundamental question of where the bursts came from remained unanswered.

Competing theories on gamma-ray bursts generally fall into two types: one, which supposes the bursts to originate from some as-yet unknown population of objects within our own Milky Way galaxy, and another, which proposes that the bursts originate in distant galaxies, several billion light-years away. If the latter is correct (as was indirectly supported by the Compton Observatory's observations), then the bursts are among the most violent and brilliant events in the universe.

Progress in understanding the nature of the bursts came when the team identified a fading visible-light counterpart to the burst located by BeppoSAX. The astronomers had to make an extra effort to identify this counterpart quickly so that the Keck observations could be carried out while the object was still bright. The discovery is a major step toward understanding the origin of the bursts. We now know that for a few seconds the burst was over a million times brighter than an entire galaxy. No other phenomena are known that produce this much energy in such a short time. Thus, while the observations have settled the question of whether the bursts come from cosmological distances, their physical mechanism remains shrouded in mystery.
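To put "over a million times brighter than an entire galaxy" for "a few seconds" in perspective, here is a rough back-of-the-envelope estimate. It is not from the article: the galaxy luminosity, burst duration, and solar luminosity used below are assumed ballpark figures.

    # Rough order-of-magnitude estimate (assumed ballpark values, not from the article).
    GALAXY_LUMINOSITY = 1e44    # erg/s, rough radiative output of a large galaxy (assumption)
    BURST_FACTOR = 1e6          # "over a million times brighter than an entire galaxy"
    DURATION_S = 3.0            # seconds, "for a few seconds"
    SUN_LUMINOSITY = 3.8e33     # erg/s
    SECONDS_PER_YEAR = 3.15e7

    energy_erg = GALAXY_LUMINOSITY * BURST_FACTOR * DURATION_S
    sun_equivalent_years = energy_erg / (SUN_LUMINOSITY * SECONDS_PER_YEAR)
    print(f"Implied energy release: about {energy_erg:.0e} erg in a few seconds")
    print(f"That is roughly what the sun radiates in {sun_equivalent_years:.0e} years")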

The Caltech team, in addition to Metzger, Kulkarni, and Djorgovski, consists of professor Charles Steidel, postdoctoral scholars Steven Odewahn and Debra Shepherd, and graduate students Kurt Adelberger, Roy Gal, and Michael Pahre. The team also includes Dr. Dale Frail of the National Radio Astronomy Observatory in Socorro, New Mexico.

Writer: 
Robert Tindol

Question of the Week: Does the earth keep a constant distance from the sun? If not, will the earth get closer to the sun and become more warm?


Submitted by Steven S. Showers, Newbury Park, California

Answered by Andrew Ingersoll, professor of planetary science, Caltech.

Earth has an eccentric orbit, which means that it moves in a path that is slightly oval in shape. Contrary to what you'd expect, Earth gets closest to the sun every December, and farthest from the sun every June, so we in the Northern Hemisphere have winter when Earth is closest to the sun. The eccentricity itself changes slowly, over a cycle of roughly 100,000 years: the maximum eccentricity is about 5 percent and the minimum is near zero, when the orbit is nearly circular. This cycle can be calculated for millions of years, and we know that the glaciers also have cycles of about 100,000 years. The question is whether the glaciers are tied to changes in Earth's eccentricity.
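As a rough illustration of what eccentricity means for sunlight, consider the sketch below. It is not part of Ingersoll's answer; the 5 percent maximum is quoted above, while the present-day value of roughly 1.7 percent is an assumed figure added for comparison.

    # Illustrative sketch: how orbital eccentricity changes the Earth-sun distance
    # and the sunlight received. The ~5 percent maximum is quoted above; the
    # present-day value of ~1.7 percent is an assumed figure for comparison.
    SEMIMAJOR_AXIS_AU = 1.0

    def distance_range(eccentricity):
        """Closest (perihelion) and farthest (aphelion) distances, in AU."""
        return (SEMIMAJOR_AXIS_AU * (1 - eccentricity),
                SEMIMAJOR_AXIS_AU * (1 + eccentricity))

    for e in (0.017, 0.05):
        near, far = distance_range(e)
        swing = (far / near) ** 2 - 1   # sunlight falls off with the square of distance
        print(f"eccentricity {e:.3f}: distance {near:.3f}-{far:.3f} AU, "
              f"about {swing:.0%} more sunlight at closest approach than at farthest")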

So the bottom line is that Earth does get closer and farther, and it does affect the climate. But the mechanism is not all that clear. Averaged over a year, the distance from the Earth to the Sun changes very little, even over billions of years (the Earth is 4.5 billion years old).

What is more important to climatic changes over the eons is the fact that the sun is getting brighter. Earth now gets about 20 to 25 percent more sunlight than it did four billion years ago.

Writer: 
Robert Tindol

Caltech Astronomer Obtains Data That Could Resolve the "Age Problem"

PASADENA — A California Institute of Technology astronomer has obtained data that could resolve the "age problem" of the universe, in which certain stars appear to be older than the universe itself.

Dr. Neill Reid, using information collected by the European Space Agency's Hipparcos satellite, has determined that a key distance measure used to compute the age of certain Milky Way stars is off by 10 to 15 percent. The new data leads to the conclusion that the oldest stars are actually 11 to 13 billion years old, rather than 16 to 18 billion years old, as had been thought.

The new results will be of great interest to cosmologists, Reid says, because estimates of the age of the universe, based on tracking back the current rate of expansion, suggest that the Big Bang occurred no more than about 13 billion years ago. Therefore, astronomers will no longer be confronted with the nettling discrepancy between the ages of stars and the age of the universe.

"This gives us an alternate way of estimating the age of the universe," says Reid. "The ideal situation would be to have the same answer, independently given by stellar modeling and cosmology."

Reid's method focuses on a type of star, known as a subdwarf, found in globular clusters, which are spherical accumulations of hundreds of thousands of individual stars. The clusters have long been known to be among the earliest objects to form in the universe, since their stars are composed mainly of the primordial elements hydrogen and helium, and because the clusters themselves are distributed throughout a sphere 100,000 light-years in diameter, rather than confined, like the sun, within the flattened pancake of the galactic disk. Astronomers can determine quantitative ages for the clusters by measuring the luminosity (the intrinsic brightness) of the brightest sunlike stars in each cluster. Those measurements require that the distances to the clusters be known accurately.

Reid looked at some 30 stars within about 200 light-years of Earth. Using the Hipparcos satellite, he was able to obtain very accurate distances to these stars by the parallax method. Parallax is a common method for determining the distances to relatively nearby objects. Just as a tree 10 feet away will seem to shift its position against the distant background when an observer closes one eye and then the other, a nearby star will shift its position slightly if the observer waits six months for Earth to reach the opposite side of its orbit. And if the distance between the two observing sites (the baseline) is known very accurately, the observer can then compute the distance to the object by treating the object and the two observing sites as a giant triangle.
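The triangle described above reduces to a simple relation that is standard in astronomy, though not spelled out in the article: a star's distance in parsecs is the reciprocal of its parallax angle in arcseconds. Below is a minimal sketch, with an illustrative parallax value.

    # Minimal sketch of the parallax relation behind the triangle described above.
    # By convention the parallax angle is half the total shift seen over six months,
    # so the baseline is one astronomical unit (the radius of Earth's orbit), and
    # distance in parsecs = 1 / (parallax angle in arcseconds).
    LIGHT_YEARS_PER_PARSEC = 3.26

    def distance_light_years(parallax_arcsec):
        """Distance implied by a measured parallax angle."""
        return (1.0 / parallax_arcsec) * LIGHT_YEARS_PER_PARSEC

    # Illustrative value: a star with a 0.05-arcsecond parallax lies about
    # 65 light-years away, well inside the ~200 light-year sample discussed above.
    print(f"{distance_light_years(0.05):.0f} light-years")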

Reid chose the 30 stars for special study (out of the 100,000 for which Hipparcos obtained parallax data) because they, like the globular cluster stars, are composed primarily of hydrogen and helium. Thus, these stars also can be assumed to be very old, and may indeed themselves once have been members of globulars that were torn apart as they orbited the galaxy.

Once distances have been measured, these nearby stars act as standard candles whose brightness can be compared to similar stars in the globular clusters. While this is a well-known technique, older investigations were only able to use lower-accuracy, pre-Hipparcos parallaxes for 10 of the 30 stars.

Reid's conclusion is that the clusters are about 10 to 15 percent farther from Earth than previously thought. This, in turn, means that the stars in those clusters are actually about 20 percent brighter than previously thought, because a star's apparent brightness falls off with the square of its distance, so a more distant star must be intrinsically more luminous to appear equally bright. Brighter stars burn through their fuel faster and have shorter lifetimes, so the clusters themselves must be younger than once assumed.
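The "about 20 percent" figure follows directly from the inverse-square law. Here is a minimal check, using the 10 to 15 percent distance revision quoted above.

    # Minimal check of the inverse-square argument above: if a cluster is a factor
    # f farther away than previously assumed, each star must be f**2 times as
    # luminous to produce the same apparent brightness that we observe.
    for percent_farther in (10, 15):
        f = 1 + percent_farther / 100
        brighter = f ** 2 - 1
        print(f"{percent_farther}% farther -> stars about {brighter:.0%} more luminous")
    # 10% gives ~21% and 15% gives ~32%, consistent with the "about 20 percent"
    # quoted above for the low end of the distance revision.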

British astronomers Michael Feast and Robin Catchpole recently arrived at very similar conclusions, also based on new data from Hipparcos, but using a different, and less direct, line of argument. They used new measurements of a type of variable star known as a Cepheid to determine a revised distance to the Large Magellanic Cloud, a galaxy orbiting the Milky Way.

Feast and Catchpole used another type of variable star, the RR Lyrae variables, to bridge between the LMC and globular clusters. The fact that these two independent methods give the same answer makes that answer more believable, says Reid. "Most people previously believed that 14 billion years was the youngest age you could have for these stars," Reid says. "I think it's now accurate to say that the oldest you could make them is 14 billion years.

"No longer are we faced with the paradox of a universe younger than its stellar constituents," says Reid.

The work is set to appear in July in the Astrophysical Journal.

Writer: 
Robert Tindol

Caltech A Major Partner in National Program To Develop Advanced Computational Infrastructure

PASADENA, California — The California Institute of Technology (Caltech) will play three key roles in the National Partnership for Advanced Computational Infrastructure (NPACI) in the areas of management, resource deployment, and technology and application initiatives. The NPACI program is one of two partnerships, each awarded approximately $170 million over five years, in the National Science Foundation's Partnerships for Advanced Computational Infrastructure (PACI) program, slated to begin October 1, 1997.

The NPACI partnership, led by the University of California, San Diego (UCSD), includes Caltech and 36 other leaders in high-performance computing. In NPACI, leading-edge scientists and engineers from 18 states will develop algorithms, libraries, system software, and tools in order to create a national metacomputing infrastructure of the future, one that will provide both teraflops and petabyte capability. (A petabyte is equal to one billion megabytes; one teraflops is one trillion computations per second.) A fully supported, teraflops/petabyte-scale metacomputing environment will enable quantitative and qualitative advances in a wide range of scientific disciplines, including astronomy, biochemistry, biology, chemistry, engineering, fluid dynamics, materials science, neuroscience, social science, and behavioral science.

Dr. Paul Messina, Caltech's assistant vice president for scientific computing and director of Caltech's Center for Advanced Computing Research (CACR), has been named chief architect of NPACI. He will be responsible for the overall architecture of the project, including interaction mechanisms among the partners; deployment of infrastructure; and balance among partnership hardware and software systems, thrust area projects, and other user needs.

"Caltech is pleased to be a part of this historic initiative," said Thomas Everhart, president of Caltech. "Paul Messina and the CACR have made it possible for the Institute to contribute to the further development of our national computing infrastructure."

Caltech will contribute to a variety of software development projects as a research partner. As a resource partner, the Institute will provide national access to some of the hardware used to develop the software infrastructure that links computers, data servers, and archival storage systems, enabling easier use of the aggregate computing power. The NPACI award builds on the longstanding partnership between Caltech and NASA's Jet Propulsion Laboratory (JPL) in the area of high-performance computing. With both NSF and NASA support, Caltech will acquire a succession of parallel computers from Hewlett-Packard's Convex Division, including serial #1 of the HP/Convex SPP 3000, to be installed in 1999. Caltech's first machine, already housed on campus, is a 256-processor Convex SPP 2000, with a peak speed of 184 Gflops, 64 GBytes of memory, and 1 TByte of online disk. (A 60-terabyte HPSS-based archival storage system will also be available to NPACI.)

"NPACI provides the opportunity to build a computational infrastructure that will enable scientific breakthroughs and new modes of computing," stated Dr. Messina. "As chief architect for the partnership, I look forward to the synergistic coupling of so many excellent scientists dedicated to creating an infrastructure that will profoundly impact future scientific endeavor by providing unprecedented new computational capabilities."

The NPACI metacomputing environment will consist of geographically separated, heterogeneous, high-performance computers, data servers, archival storage, and visualization systems linked together by high-speed networks so that their aggregate power may be applied to research problems that cannot be studied any other way. This environment will be extended to support "data-intensive computing." To that end, infrastructure will be developed to enable--for the first time--the analysis of multiple terabyte-sized data collections.

Installation of the Caltech HP Exemplar was made possible by a collaborative agreement between Caltech and Hewlett-Packard that will result in new technology, tools, and libraries to support the type of multidisciplinary research and metacomputing environment exemplified by the new NPACI award.

By providing access to new and emerging technologies for computing, networking, data storage, data acquisition, and archival functions as part of NPACI, Caltech's CACR will continue to pursue its application-driven approach to using and integrating new hardware and software technologies. In addition to the Caltech resources, UC Berkeley and UCSD will also initially provide alternate architectures to NPACI. These alternate architectures will be used to explore different performance or price/performance regimes, facilitate the porting of software, assure competitive pricing from vendors, and provide a ready migration path should one of the alternatives become preeminent.

As the infrastructure enhancements needed to create the computing environment of the future include far more than hardware and network connections, Caltech is also contributing to NPACI through its participation in "thrust area teams," which will focus on key technologies and applications required for metacomputing, such as data-intensive computing; adaptable, scalable tools/environments; and interaction environments. Many aspects of the NPACI technology thrusts build upon projects initiated and led by Caltech's CACR, including the CASA Gigabit Testbed and the Scalable I/O Initiative.

The Digital Sky Project, led by Caltech professor Thomas A. Prince, CACR associate director, will be a primary data-intensive computing effort in NPACI. This innovative project will make early use of the NPACI resources. Large-area digital sky surveys are a recent and exciting development in astronomy. The combination of the NPACI Tflops/Tbyte computational resources with the recent large-area sky surveys supported by NSF and NASA at optical, infrared, and radio wavelengths will provide unparalleled new capability for astronomical research. The Digital Sky Project, anticipated to be used by the entire astronomical community of over 5,000 scientists and students, will provide a qualitatively different computational infrastructure for astronomical research.

Dr. Carl Kukkonen, manager of supercomputing at JPL, reports that NASA is excited about the opportunity to work with NPACI at the forefront of computing technologies. "We will use these computing resources to tackle the most challenging issues in spacecraft design and space science data analysis," said Kukkonen.

Rounding out Caltech participation in NPACI are efforts by Caltech faculty in three different NPACI applications thrusts: engineering, neuroscience, and molecular science. More specifically, infrastructure development in the areas of materials science, brain/neuron modeling, and molecular science will be led by Caltech professors William Goddard, Ferkel Professor of Chemistry and Applied Physics; James Bower, Associate Professor of Biology; and Aron Kuppermann, Professor of Chemical Physics, respectively.

More details on NPACI can be obtained at http://cacse.ucsd.edu/npaci.html. See http://hpcc997.external.hp.com/pressrel/nov96/18nov96h.htm for a press release on the Caltech-HP/Convex collaboration.

Writer: 
Robert Tindol
