Physicists Achieve Quantum Entanglement Between Remote Ensembles of Atoms

PASADENA, Calif.—Physicists have managed to "entangle" the physical state of a group of atoms with that of another group of atoms across the room. This research represents an important advance relevant to the foundations of quantum mechanics and to quantum information science, including the possibility of scalable quantum networks (i.e., a quantum Internet) in the future.

Reporting in the December 8 issue of the journal Nature, California Institute of Technology physicist H. Jeff Kimble and his colleagues announce the first realization of entanglement for one "spin excitation" stored jointly between two samples of atoms. In the Caltech experiment, the atomic ensembles are located in a pair of apparatuses 2.8 meters apart, with each ensemble composed of about 100,000 individual atoms.

The entanglement generated by the Caltech researchers consisted of a quantum state for which, when one quantum spin (i.e., one quantum bit) flipped for the atoms at the site L of one ensemble, invariably none flipped at the site R of the other ensemble, and when one spin flipped at R, invariably none flipped at L. Yet, remarkably, because of the entanglement, both possibilities existed simultaneously.

According to Kimble, who is the Valentine Professor and professor of physics at Caltech, this research significantly extends laboratory capabilities for entanglement generation, with entangled "quantum bits" of matter now stored at a separation several thousand times greater than was previously possible.

Moreover, the experiment provides the first example of an entangled state stored in a quantum memory that can be transferred from the memory to another physical system (in this case, from matter to light).

Since the work of Schrödinger and Einstein in the 1930s, entanglement has remained one of the most profound aspects and persistent mysteries of quantum theory. Entanglement leads to strong correlations between the various components of a physical system, even if those components are very far apart. Such correlations cannot be explained by classical physics and have been the subject of active experimental investigation for more than 40 years, including pioneering demonstrations that used entangled states of photons, carried out by John Clauser (son of Caltech's Millikan Professor of Engineering, Emeritus, Francis Clauser).

In more recent times, entangled quantum states have emerged as a critical resource for enabling tasks in information science that are otherwise impossible in the classical realm of conventional information processing and distribution. Some tasks in quantum information science (for instance, the implementation of scalable quantum networks) require that entangled states be stored in massive particles, which was first accomplished for trapped ions separated by a few hundred micrometers in experiments at the National Institute of Standards and Technology in Boulder, Colorado, in 1998.

In the Caltech experiment, the entanglement involves "collective atomic spin excitations." To generate such excitations, an ensemble of cold atoms initially all in level "a" of two possible ground levels is addressed with a suitable "writing" laser pulse. For weak excitation with the write laser, one atom in the sample is sometimes transferred to ground level "b," thereby emitting a photon.

Because of the impossibility of determining which particular atom emitted the photon, detection of this emitted photon projects the ensemble of atoms into a state with a single collective spin excitation distributed over all the atoms. The presence (one atom in state b) or absence (all atoms in state a) of this symmetrized spin excitation behaves as a single quantum bit.
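
In the notation typically used for such ensemble experiments, the two logical states of this quantum bit can be written schematically (phases imprinted by the write beam are omitted here for simplicity) as

    |\bar{0}\rangle = |a\,a \cdots a\rangle, \qquad
    |\bar{1}\rangle = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} |a \cdots b_i \cdots a\rangle, \qquad N \approx 10^{5},

where |\bar{1}\rangle is the symmetric state in which exactly one of the N atoms, with equal amplitude for each, has been transferred to level b.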

To generate entanglement between spatially separated ensembles at sites L and R, the write fields emitted at both locations are combined in a fashion that erases any information about their origin. Under this condition, if a photon is detected, it is impossible in principle to determine which ensemble, L or R, it came from, so both possibilities must be included in the subsequent description of the quantum state of the ensembles.

The resulting quantum state is an entangled state with "1" stored in the L ensemble and "0" in the R ensemble, and vice versa. That is, there exist simultaneously the complementary possibilities for one spin excitation to be present in level b at site L ("1") and all atoms in the ground level a at site R ("0"), as well as for no spin excitation to be present in level b at site L ("0") and one excitation to be present at site R ("1").
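
In the same schematic notation, the ideal entangled state described above is

    |\Psi\rangle_{LR} \approx \frac{1}{\sqrt{2}} \left( |\bar{1}\rangle_L\,|\bar{0}\rangle_R + e^{i\eta}\, |\bar{0}\rangle_L\,|\bar{1}\rangle_R \right),

where \eta is a relative phase set by the optical paths. The state actually generated in the laboratory also contains vacuum and higher-order contributions, so this expression should be read as an idealization rather than the exact state reported in the paper.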

This entangled state can be stored in the atoms for a programmable time, and then transferred into propagating light fields, which had not been possible before now. The Caltech researchers devised a method to determine unambiguously the presence of entanglement for the propagating light fields, and hence for the atomic ensembles.

The Caltech experiment confirms for the first time experimentally that entanglement between two independent, remote, massive quantum objects can be created by quantum interference in the detection of a photon emitted by one of the objects.

In addition to Kimble, the other authors are Chin-Wen Chou, a graduate student in physics; Hugues de Riedmatten, Daniel Felinto, and Sergey Polyakov, all postdoctoral scholars in Kimble's group; and Steven J. van Enk of Bell Labs, Lucent Technologies.

Writer: 
Robert Tindol

World Network Speed Record Shattered for Third Consecutive Year

Caltech, SLAC, Fermilab, CERN, Michigan, Florida, Brookhaven, Vanderbilt and Partners in the UK, Brazil, Korea and Japan Set 131.6 Gigabit Per Second Mark During the SuperComputing 2005 Bandwidth Challenge

SEATTLE, Wash.—For the third consecutive year, an international team of scientists and engineers has smashed the network speed record, moving data at an average rate of 100 gigabits per second (Gbps) for several hours at a time. A rate of 100 Gbps is sufficient to transmit five feature-length DVD movies across the Internet from one location to another in a single second.
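
A rough arithmetic check of that comparison, assuming a compressed feature film of about 2.5 gigabytes (an illustrative figure, not one quoted in the release):

    100\ \mathrm{Gb/s} = \frac{100 \times 10^{9}}{8}\ \mathrm{B/s} = 12.5\ \mathrm{GB/s}, \qquad
    \frac{12.5\ \mathrm{GB/s}}{2.5\ \mathrm{GB\ per\ movie}} = 5\ \mathrm{movies\ per\ second}.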

The winning "High-Energy Physics" team is made up of physicists, computer scientists, and network engineers led by the California Institute of Technology, the Stanford Linear Accelerator Center (SLAC), Fermilab, CERN, and the University of Michigan, together with partners at the University of Florida, Vanderbilt, and the Brookhaven National Lab, as well as international participants from the UK (University of Manchester and UKLight), Brazil (Rio de Janeiro State University, UERJ, and the State Universities of São Paulo, USP and UNESP), Korea (Kyungpook National University, KISTI), and Japan (the KEK Laboratory in Tsukuba), who joined forces to set a new world record for data transfer, capturing first prize at the Supercomputing 2005 (SC|05) Bandwidth Challenge (BWC).

The HEP team's demonstration of "Distributed TeraByte Particle Physics Data Sample Analysis" achieved a peak throughput of 151 Gbps and an official mark of 131.6 Gbps measured by the BWC judges on 17 of the 22 optical fiber links used by the team, beating their previous mark for peak throughput of 101 Gbps by 50 percent. In addition to the impressive transfer rate for DVD movies, the new record data transfer speed is also equivalent to serving 10,000 MPEG2 HDTV movies simultaneously in real time, or transmitting all of the printed content of the Library of Congress in 10 minutes.

The team sustained average data rates above the 100 Gbps level for several hours for the first time, and transferred a total of 475 terabytes of physics data among the team's sites throughout the U.S. and overseas within 24 hours. The extraordinary data transport rates were made possible in part through the use of the FAST TCP protocol developed by Associate Professor of Computer Science and Electrical Engineering Steven Low and his Caltech Netlab team, as well as new data transport applications developed at SLAC and Fermilab and an optimized Linux kernel developed at Michigan.
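
For readers curious how a delay-based protocol differs from conventional loss-based TCP, the sketch below shows, in Python, the kind of periodic congestion-window update described in published accounts of FAST TCP: the window grows while the measured round-trip time stays close to the propagation delay and backs off smoothly as queueing delay builds. The constants and the round-trip-time trace are illustrative assumptions, not parameters from the SC|05 runs.

    # Minimal sketch of a FAST-TCP-style delay-based window update (illustrative only).
    def fast_tcp_window(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
        """One periodic update of the congestion window, in packets."""
        target = (base_rtt / rtt) * w + alpha   # equilibrium keeps roughly alpha packets queued
        return min(2.0 * w, (1.0 - gamma) * w + gamma * target)

    # Toy usage: as queueing delay grows, the window settles instead of oscillating.
    w, base_rtt = 100.0, 0.10                   # packets, seconds
    for rtt in (0.10, 0.11, 0.12, 0.14, 0.14, 0.14):
        w = fast_tcp_window(w, base_rtt, rtt)
        print(f"rtt={rtt:.2f}s  window={w:.1f} packets")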

Professor of Physics Harvey Newman of Caltech, head of the HEP team and US CMS Collaboration Board Chair, who originated the LHC Data Grid Hierarchy concept, said, "This demonstration allowed us to preview the globally distributed Grid system of more than 100 laboratory and university-based computing facilities that is now being developed in the U.S., Latin America, and Europe in preparation for the next generation of high-energy physics experiments at CERN's Large Hadron Collider (LHC) that will begin operation in 2007.

"We used a realistic mixture of streams, including the organized transfer of multiterabyte datasets among the laboratory centers at CERN, Fermilab, SLAC, and KEK, plus numerous other flows of physics data to and from university-based centers represented by Caltech, Michigan, Florida, Rio de Janeiro and São Paulo in Brazil, and Korea, to effectively use the remainder of the network capacity.

"The analysis of this data will allow physicists at CERN to search for the Higgs particles thought to be responsible for mass in the universe, supersymmetry, and other fundamentally new phenomena bearing on the nature of matter and space-time, in an energy range made accessible by the LHC for the first time."

The largest physics collaborations at the LHC, CMS and ATLAS, each encompass more than 2,000 physicists and engineers from 160 universities and laboratories. In order to fully exploit the potential for scientific discoveries, the many petabytes of data produced by the experiments will be processed, distributed, and analyzed using a global Grid. The key to discovery is the analysis phase, where individual physicists and small groups repeatedly access, and sometimes extract and transport, terabyte-scale data samples on demand, in order to optimally select the rare "signals" of new physics from potentially overwhelming "backgrounds" of already-understood particle interactions. This data will amount to many tens of petabytes in the early years of LHC operation, rising to the exabyte range within the coming decade.

Matt Crawford, head of the Fermilab network team at SC|05, said, "The realism of this year's demonstration represents a major step in our ability to show that the unprecedented systems required to support the next round of high-energy physics discoveries are indeed practical. Our data sources in the bandwidth challenge were some of our mainstream production storage systems and file servers, which are now helping to drive the searches for new physics at the high-energy frontier at Fermilab's Tevatron, as well as the explorations of the far reaches of the universe by the Sloan Digital Sky Survey."

Les Cottrell, leader of the SLAC team and assistant director of scientific computing and computing services, said, "Some of the pleasant surprises at this year's challenge were the advances in throughput we achieved using real applications to transport physics data, including bbcp and xrootd developed at SLAC. The performance of bbcp used together with Caltech's FAST protocol and an optimized Linux kernel developed at Michigan, as well as our xrootd system, were particularly striking. We were able to match the performance of the artificial data transfer tools we used to reach the peak rates in past years."

Future optical networks incorporating multiple 10 Gbps links are the foundation of the Grid system that will drive the scientific discoveries. A "hybrid" network integrating both traditional switching and routing of packets and dynamically constructed optical paths to support the largest data flows is a central part of the near-term future vision that the scientific community has adopted to meet the challenges of data-intensive science in many fields. By demonstrating that many 10 Gbps wavelengths can be used efficiently over continental and transoceanic distances (often in both directions simultaneously), the high-energy physics team showed that this vision of a worldwide dynamic Grid supporting many terabyte and larger data transactions is practical.

Shawn McKee, associate research scientist in the University of Michigan Department of Physics and leader of the UltraLight Network technical group, said, "This achievement is an impressive example of what a focused network effort can accomplish. It is an important step towards the goal of delivering a highly capable end-to-end network-aware system and architecture that meet the needs of next-generation e-science."

The team hopes this new demonstration will encourage scientists and engineers in many sectors of society to develop and plan to deploy a new generation of revolutionary Internet applications. Multigigabit end-to-end network performance will empower scientists to form "virtual organizations" on a planetary scale, sharing their collective computing and data resources in a flexible way. In particular, this is vital for projects on the frontiers of science and engineering in "data intensive" fields such as particle physics, astronomy, bioinformatics, global climate modeling, geosciences, fusion, and neutron science.

The new bandwidth record was achieved through extensive use of the SCInet network infrastructure at SC|05. The team used fifteen 10 Gbps links to Cisco Systems Catalyst 6500 Series Switches at the Caltech Center for Advanced Computing Research (CACR) booth, and seven 10 Gbps links to a Catalyst 6500 Series Switch at the SLAC/Fermilab booth, together with computing clusters provided by Hewlett Packard, Sun Microsystems, and IBM, and a large number of 10 gigabit Ethernet server interfaces: more than 80 provided by Neterion, and 14 by Chelsio.

The external network connections to Los Angeles, Sunnyvale, the Starlight facility in Chicago, and Florida included the Cisco Research, Internet2/HOPI, UltraScience Net and ESnet wavelengths carried by National LambdaRail (NLR); Internet2's Abilene backbone; the three wavelengths of TeraGrid; an ESnet link provided by Qwest; the Pacific Wave link; and Canada's CANARIE network. International connections included the US LHCNet links (provisioned by Global Crossing and Colt) between Chicago, New York, and CERN, the CHEPREO/WHREN link (provisioned by LANautilus) between Miami and São Paulo, the UKLight link, the Gloriad link to Korea, and the JGN2 link to Japan.

Regional connections included six 10 Gbps wavelengths provided with the help of CIENA to Fermilab; two 10 Gbps wavelengths to the Caltech campus provided by Cisco Systems' research waves across NLR and California's CENIC network; two 10 Gbps wavelengths to SLAC provided by ESnet and UltraScienceNet; three wavelengths between Starlight and the University of Michigan over Michigan Lambda Rail (MiLR); and wavelengths to Jacksonville and Miami across Florida Lambda Rail (FLR). During the test, several of the network links were shown to operate at full capacity for sustained periods.

While the SC|05 demonstration required a major effort by the teams involved and their sponsors, in partnership with major research and education network organizations in the U.S., Europe, Latin America, and Pacific Asia, it is expected that networking on this scale in support of the largest science projects (such as the LHC) will be commonplace within the next three to five years. The demonstration also appeared to stress the network and server systems used, so the team is continuing its test program to put the technologies and methods used at SC|05 into production use, with the goal of attaining the necessary level of reliability in time for the start of the LHC research program.

As part of the SC|05 demonstrations, a distributed analysis of simulated LHC physics data was done using the Grid-enabled Analysis Environment (GAE) developed at Caltech for the LHC and many other major particle physics experiments, as part of the Particle Physics Data Grid (PPDG), GriPhyN/iVDGL, Open Science Grid, and DISUN projects. This involved transferring data to CERN, Florida, Fermilab, Caltech, and Brazil for processing by clusters of computers, and finally aggregating the results back to the show floor to create a dynamic visual display of quantities of interest to the physicists. In another part of the demonstration, file servers at the SLAC/FNAL booth and in Manchester also were used for disk-to-disk transfers between Seattle and the UK.

The team used Caltech's MonALISA (MONitoring Agents using a Large Integrated Services Architecture) system to monitor and display the real-time data for all the network links used in the demonstration. It simultaneously monitored more than 14,000 grid nodes in 200 computing clusters. MonALISA (http://monalisa.caltech.edu) is a highly scalable set of autonomous self-describing agent-based subsystems that are able to collaborate and cooperate in performing a wide range of monitoring tasks for networks and Grid systems, as well as the scientific applications themselves.

The network has been deployed through exceptional support by Cisco Systems, Hewlett Packard, Neterion, Chelsio, Sun Microsystems, IBM, and Boston Ltd., as well as the network engineering staffs of National LambdaRail, Internet2's Abilene Network, ESnet, TeraGrid, CENIC, MiLR, FLR, Pacific Wave, AMPATH, RNP and ANSP/FAPESP in Brazil, KISTI in Korea, UKLight in the UK, JGN2 in Japan, and the Starlight international peering point in Chicago.

The demonstration and the developments leading up to it were made possible through the strong support of the U.S. Department of Energy Office of Science and the National Science Foundation, in cooperation with the funding agencies of the international partners.

Further information about the demonstration may be found at:
http://ultralight.caltech.edu/web-site/sc05
http://www-iepm.slac.stanford.edu/monitoring/bulk/sc2005/hiperf.html
http://supercomputing.fnal.gov/
http://monalisa.caltech.edu:8080/Slides/SC2005BWC/SC2005_BWCTalk11705.ppt
http://scinet.supercomp.org/2005/bwc/results/summary.html

About Caltech: With an outstanding faculty, including five Nobel laureates, and such off-campus facilities as the Jet Propulsion Laboratory, Palomar Observatory, and the W. M. Keck Observatory, the California Institute of Technology is one of the world's major research centers. The Institute also conducts instruction in science and engineering for a student body of approximately 900 undergraduates and 1,000 graduate students who maintain a high level of scholarship and intellectual achievement. Caltech's 124-acre campus is situated in Pasadena, California, a city of 135,000 at the foot of the San Gabriel Mountains, approximately 30 miles inland from the Pacific Ocean and 10 miles northeast of the Los Angeles Civic Center. Caltech is an independent, privately supported university, and is not affiliated with either the University of California system or the California State Polytechnic universities. http://www.caltech.edu

About SLAC: The Stanford Linear Accelerator Center (SLAC) is one of the world's leading research laboratories. Its mission is to design, construct, and operate state-of-the-art electron accelerators and related experimental facilities for use in high-energy physics and synchrotron radiation research. In the course of doing so, it has established the largest known database in the world, which grows at 1 terabyte per day. That, and its central role in the world of high-energy physics collaboration, places SLAC at the forefront of the international drive to optimize the worldwide, high-speed transfer of bulk data. http://www.slac.stanford.edu/

About CACR: Caltech's Center for Advanced Computing Research (CACR) performs research and development on leading edge networking and computing systems, and methods for computational science and engineering. Some current efforts at CACR include the National Virtual Observatory, ASC Center for Simulation of Dynamic Response of Materials, Particle Physics Data Grid, GriPhyN, Computational Infrastructure for Geophysics, Cascade High Productivity Computing System, and the TeraGrid. http://www.cacr.caltech.edu/

About Netlab: Netlab is the Networking Laboratory at Caltech led by Steven Low, where FAST TCP has been developed. The group does research in the control and optimization of protocols and networks, and designs, analyzes, implements, and experiments with new algorithms and systems. http://netlab.caltech.edu/FAST/

About the University of Michigan: The University of Michigan, with its size, complexity, and academic strength, the breadth of its scholarly resources, and the quality of its faculty and students, is one of America's great public universities and one of the world's premier research institutions. The university was founded in 1817 and has a total enrollment of 54,300 on all campuses. The main campus is in Ann Arbor, Michigan, and has 39,533 students (fall 2004). With over 600 degree programs and $739M in FY05 research funding, the university is one of the leaders in innovation and research. For more information, see http://www.umich.edu.

About the University of Florida: The University of Florida (UF), located in Gainesville, is a major public, comprehensive, land-grant, research university. The state's oldest, largest, and most comprehensive university, UF is among the nation's most academically diverse public universities. It has a long history of established programs in international education, research, and service and has a student population of approximately 49,000. UF is the lead institution for the GriPhyN and iVDGL projects and is a Tier-2 facility for the CMS experiment. For more information, see http://www.ufl.edu.

About Fermilab: Fermi National Accelerator Laboratory (Fermilab) is a national laboratory funded by the Office of Science of the U.S. Department of Energy, operated by Universities Research Association, Inc. Experiments at Fermilab's Tevatron, the world's highest-energy particle accelerator, generate petabyte-scale data per year, and involve large, international collaborations with requirements for high-volume data movement to their home institutions. The laboratory actively works to remain on the leading edge of advanced wide-area network technology in support of its collaborations.

About CERN: CERN, the European Organization for Nuclear Research, has its headquarters in Geneva. At present, its member states are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland, and the United Kingdom. Israel, Japan, the Russian Federation, the United States of America, Turkey, the European Commission, and UNESCO have observer status. For more information, see http://www.cern.ch.

About StarLight: StarLight is an advanced optical infrastructure and proving ground for network services optimized for high-performance applications. Operational since summer 2001, StarLight is a 1 GE and 10 GE switch/router facility for high-performance access to participating networks, and also offers true optical switching for wavelengths. StarLight is being developed by the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC), the International Center for Advanced Internet Research (iCAIR) at Northwestern University, and the Mathematics and Computer Science Division at Argonne National Laboratory, in partnership with Canada's CANARIE and the Netherlands' SURFnet. STAR TAP and StarLight are made possible by major funding from the U.S. National Science Foundation to UIC. StarLight is a service mark of the Board of Trustees of the University of Illinois. See www.startap.net/starlight.

About the University of Manchester: The University of Manchester has been created by combining the strengths of UMIST (founded in 1824) and the Victoria University of Manchester (founded in 1851) to form the largest single-site university in the UK, with 34,000 students. On Friday, October 22, 2004, it received its Royal Charter from Queen Elizabeth II, with an unprecedented £300M capital investment program. Twenty-three Nobel Prize winners have studied at Manchester, continuing a proud tradition of innovation and excellence. Rutherford conducted the research that led to the splitting of the atom there, and the world's first stored-program electronic digital computer successfully executed its first program there in June 1948. The Schools of Physics, Computational Science, Computer Science and the Network Group, together with the E-Science North West Centre research facility, are very active in developing a wide range of e-science projects and Grid technologies. See www.manchester.ac.uk.

About UERJ (Rio de Janeiro): Founded in 1950, the Rio de Janeiro State University (UERJ; http://www.uerj.br) ranks among the ten largest universities in Brazil, with more than 23,000 students. UERJ's five campuses are home to 22 libraries, 412 classrooms, 50 lecture halls and auditoriums, and 205 laboratories. UERJ is responsible for important public welfare and health projects through its centers of medical excellence, the Pedro Ernesto University Hospital (HUPE) and the Piquet Carneiro Day-care Policlinic Centre, and it is committed to the preservation of the environment. The UERJ High Energy Physics group includes 15 faculty, postdoctoral, and visiting PhD physicists, and 12 PhD and master's students, working on experiments at Fermilab (D0) and CERN (CMS). The group has constructed a Tier2 center to enable it to take part in the Grid-based data analysis planned for the LHC, and has originated the concept of a Brazilian "HEP Grid," working in cooperation with USP and several other universities in Rio and São Paulo.

About UNESP (São Paulo): Created in 1976 with the administrative union of several isolated institutes of higher education in the state of São Paulo, the São Paulo State University, UNESP, has campuses in 24 different cities in the State of São Paulo. The university has 25,000 undergraduate students and almost 10,000 graduate students. Since 1999 the university has had a group participating in the DZero Collaboration of Fermilab, which is operating the São Paulo Regional Analysis Center (SPRACE). See http://www.unesp.br.

About USP (São Paulo): The University of São Paulo, USP, is the largest institution of higher education and research in Brazil, and the third largest in Latin America. The university has most of its 35 units located on the campus in the state capital. It has around 40,000 undergraduate students and around 25,000 graduate students. It is responsible for almost 25 percent of all Brazilian papers and publications indexed by the Institute for Scientific Information (ISI). The SPRACE cluster is located at the Physics Institute. See http://www.usp.br.

About Kyungpook National University (Daegu): Kyungpook National University is one of the leading universities in Korea, especially in physics and information science. The university has 13 colleges and nine graduate schools with 24,000 students. It houses the Center for High-Energy Physics (CHEP) in which most Korean high-energy physicists participate. CHEP (chep.knu.ac.kr) was approved as one of the designated Excellent Research Centers supported by the Korean Ministry of Science.

About Vanderbilt: One of America's top 20 universities, Vanderbilt University is a private research university of 6,319 undergraduates and 4,566 graduate and professional students. The university comprises 10 schools, a public policy institute, a distinguished medical center, and the Freedom Forum First Amendment Center. Located a mile and a half southwest of downtown Nashville, the campus is in a park-like setting. Buildings on the original campus date to its founding in 1873, and the Peabody section of campus has been registered as a National Historic Landmark since 1966. Vanderbilt ranks 24th in the value of federal research grants awarded to faculty members, according to the National Science Foundation.

About the Particle Physics Data Grid (PPDG): The Particle Physics Data Grid (PPDG; see www.ppdg.net) is developing and deploying production Grid systems that vertically integrate experiment-specific applications, Grid technologies, Grid and facility computation, and storage resources to form effective end-to-end capabilities. PPDG is a collaboration of computer scientists with a strong record in Grid technology and physicists with leading roles in the software and network infrastructures for major high-energy and nuclear experiments. PPDG's goals and plans are guided by the immediate and medium-term needs of the physics experiments and by the research and development agenda of the computer science groups.

About GriPhyN and iVDGL: GriPhyN (www.griphyn.org) and iVDGL (www.ivdgl.org) are developing and deploying Grid infrastructure for several frontier experiments in physics and astronomy. These experiments together will utilize petaflops of CPU power and generate hundreds of petabytes of data that must be archived, processed, and analyzed by thousands of researchers at laboratories, universities, and small colleges and institutes spread around the world. The scale and complexity of this "petascale" science drive GriPhyN's research program to develop Grid-based architectures, using "virtual data" as a unifying concept. iVDGL is deploying a Grid laboratory where these technologies can be tested at large scale and where advanced technologies can be implemented for extended studies by a variety of disciplines.

About CHEPREO: Florida International University (FIU), in collaboration with partners at Florida State University, the University of Florida, and the California Institute of Technology, has been awarded an NSF grant to create and operate an interregional Grid-enabled Center for High-Energy Physics Research and Educational Outreach (CHEPREO; www.chepreo.org) at FIU. CHEPREO encompasses an integrated program of collaborative physics research on CMS, network infrastructure development, and educational outreach at one of the largest minority universities in the US. The center is funded by four NSF directorates: Mathematical and Physical Sciences, Scientific Computing Infrastructure, Elementary, Secondary and Informal Education, and International Programs.

About the Open Science Grid: The OSG makes innovative science possible by bringing multidisciplinary collaborations together with the latest advances in distributed computing technologies. This shared cyberinfrastructure, built by research groups from U.S. universities and national laboratories, receives support from the National Science Foundation and the U.S. Department of Energy's Office of Science. For more information about the OSG, visit www.opensciencegrid.org.

About Internet2®: Led by more than 200 U.S. universities working with industry and government, Internet2 develops and deploys advanced network applications and technologies for research and higher education, accelerating the creation of tomorrow's Internet. Internet2 recreates the partnerships among academia, industry, and government that helped foster today's Internet in its infancy. For more information, visit: www.internet2.edu.

About the Abilene Network: Abilene, developed in partnership with Qwest Communications, Juniper Networks, Nortel Networks and Indiana University, provides nationwide high-performance networking capabilities for more than 225 universities and research facilities in all 50 states, the District of Columbia, and Puerto Rico. For more information on Abilene, see http://abilene.internet2.edu/.

About the TeraGrid: The TeraGrid, funded by the National Science Foundation, is a multiyear effort to build a distributed national cyberinfrastructure. TeraGrid entered full production mode in October 2004, providing a coordinated set of services for the nation's science and engineering community. TeraGrid's unified user support infrastructure and software environment allow users to access storage and information resources as well as over a dozen major computing systems at nine partner sites via a single allocation, either as stand-alone resources or as components of a distributed application using Grid software capabilities. Over 40 teraflops of computing power, 1.5 petabytes of online storage, and multiple visualization, data collection, and instrument resources are integrated at the nine TeraGrid partner sites. Coordinated by the University of Chicago and Argonne National Laboratory, the TeraGrid partners include the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC), San Diego Supercomputer Center (SDSC) at the University of California, San Diego (UCSD), the Center for Advanced Computing Research (CACR) at the California Institute of Technology (Caltech), the Pittsburgh Supercomputing Center (PSC), Oak Ridge National Laboratory, Indiana University, Purdue University, and the Texas Advanced Computing Center (TACC) at the University of Texas-Austin.

About National LambdaRail: National LambdaRail (NLR) is a major initiative of U.S. research universities and private sector technology companies to provide a national-scale infrastructure for research and experimentation in networking technologies and applications. NLR puts the control, the power, and the promise of experimental network infrastructure in the hands of the nation's scientists and researchers. Visit http://www.nlr.net for more information.

About CENIC: CENIC (www.cenic.org) is a not-for-profit corporation serving the California Institute of Technology, California State University, Stanford University, University of California, University of Southern California, California Community Colleges, and the statewide K-12 school system. CENIC's mission is to facilitate and coordinate the development, deployment, and operation of a set of robust multi-tiered advanced network services for this research and education community.

About ESnet: The Energy Sciences Network (ESnet; www.es.net) is a high-speed network serving thousands of Department of Energy scientists and collaborators worldwide. A pioneer in providing high-bandwidth, reliable connections, ESnet enables researchers at national laboratories, universities, and other institutions to communicate with each other using the collaborative capabilities needed to address some of the world's most important scientific challenges. Managed and operated by the ESnet staff at Lawrence Berkeley National Laboratory, ESnet provides direct high-bandwidth connections to all major DOE sites, multiple cross connections with Internet2/Abilene, connections to Europe via GEANT and to Japan via SuperSINET, and fast interconnections to more than 100 other networks. Funded principally by DOE's Office of Science, ESnet services allow scientists to make effective use of unique DOE research facilities and computing resources, independent of time and geographic location.

About Qwest: Qwest Communications International Inc. (NYSE: Q) is a leading provider of voice, video, and data services. With more than 40,000 employees, Qwest is committed to the "Spirit of Service" and to providing world-class services that exceed customers' expectations for quality, value, and reliability. For more information, please visit the Qwest Web site at www.qwest.com.

About UKLight: The UKLight facility (www.uklight.ac.uk) was set up in 2003 with a grant of £6.5M from HEFCE (the Higher Education Funding Council for England) to provide an international experimental testbed for optical networking and support projects working on developments towards optical networks and the applications that will use them. UKLight will bring together leading-edge applications, Internet engineering for the future, and optical communications engineering, and enable UK researchers to join the growing international consortium that currently spans Europe and North America. A "Point of Access" (PoA) in London provides international connectivity with 10 Gbit network connections to peer facilities in Chicago (StarLight) and Amsterdam (NetherLight). UK research groups gain access to the facility via extensions to the 10Gbit SuperJANET development network, and a national dark fiber facility is under development for use by the photonics research community. Management of the UKLight facility is being undertaken by UKERNA on behalf of the Joint Information Systems Committee (JISC).

About AMPATH: Florida International University's Center for Internet Augmented Research and Assessment (CIARA) has developed an international, high-performance research connection point in Miami, Florida, called AMPATH (AMericasPATH; www.ampath.fiu.edu). AMPATH's goal is to enable wide-bandwidth digital communications between U.S. and international research and education networks, as well as between a variety of U.S. research programs in the region. AMPATH in Miami acts as a major international exchange point (IXP) for the research and education networks in South America, Central America, Mexico, and the Caribbean. The AMPATH IXP is home for the WHREN-LILA high-performance network link connecting Latin America to the U.S., funded by the NSF (award #0441095) and the Academic Network of São Paulo (award #2003/13708-0).

About the Academic Network of São Paulo (ANSP): ANSP unites São Paulo's University networks with Scientific and Technological Research Centers in São Paulo, and is managed by the State of São Paulo Research Foundation (FAPESP). The ANSP Network is another example of international collaboration and exploration. Through its connection to WHREN-LILA, all of the institutions connected to ANSP will be involved in research with U.S. universities and research centers, offering significant contributions and the potential to develop new applications and services. This connectivity with WHREN-LILA and ANSP will allow researchers to enhance the quality of current data, inevitably increasing the quality of new scientific development. See http://www.ansp.br.

About RNP: RNP, the National Education and Research Network of Brazil, is a not-for-profit company that promotes the innovative use of advanced networking with the joint support of the Ministry of Science and Technology and the Ministry of Education. In the early 1990s, RNP was responsible for the introduction and adoption of Internet technology in Brazil. Today, RNP operates a nationally deployed multigigabit network used for collaboration and communication in research and education throughout the country, reaching all 26 states and the Federal District, and provides both commodity and advanced research Internet connectivity to more than 300 universities, research centers, and technical schools. See http://www.rnp.br.

About KISTI: KISTI (Korea Institute of Science and Technology Information), which was assigned to play the pivotal role in establishing the national science and technology knowledge information infrastructure, was founded through the merger of the Korea Institute of Industry and Technology Information (KINITI) and the Korea Research and Development Information Center (KORDIC) in January, 2001. KISTI is under the supervision of the Office of the Prime Minister and will play a leading role in building the nationwide infrastructure for knowledge and information by linking the high-performance research network with its supercomputers.

About Hewlett Packard: HP is a technology solutions provider to consumers, businesses, and institutions globally. The company's offerings span IT infrastructure, global services, business and home computing, and imaging and printing. More information about HP (NYSE, Nasdaq: HPQ) is available at www.hp.com.

About Sun Microsystems: Since its inception in 1982, a singular vision, "The Network Is The Computer(TM)," has propelled Sun Microsystems, Inc. (Nasdaq: SUNW) to its position as a leading provider of industrial-strength hardware, software, and services that make the Net work. Sun can be found in more than 100 countries and on the World Wide Web at http://sun.com.

About IBM: IBM is the world's largest information technology company, with 80 years of leadership in helping businesses innovate. Drawing on resources from across IBM and key business partners, IBM offers a wide range of services, solutions, and technologies that enable customers, large and small, to take full advantage of the new era of e-business. For more information about IBM, visit www.ibm.com.

About Boston Limited: With over 12 years of experience, Boston Limited (www.boston.co.uk) is a UK-based specialist in high-end workstation, server, and storage hardware. Boston's solutions bring the latest innovations to market, such as PCI-Express, DDR II, and Infiniband technologies. As the pan-European distributor for Supermicro, Boston Limited works very closely with key manufacturing partners, as well as strategic clients within the academic and commercial sectors, to provide cost-effective solutions with exceptional performance.

About Neterion, Inc.: Founded in 2001, Neterion Inc. has locations in Cupertino, California, and Ottawa, Canada. Neterion delivers 10 Gigabit Ethernet hardware and software solutions that solve customers' high-end networking problems. The Xframe(r) line of products is based on Neterion-developed technologies that deliver new levels of performance, availability and reliability in the datacenter. Xframe, Xframe II, and Xframe E include full IPv4 and IPv6 support and comprehensive stateless offloads that preserve the integrity of current TCP/IP implementations without "breaking the stack." Xframe drivers are available for all major operating systems, including Microsoft Windows, Linux, Hewlett-Packard's HP-UX, IBM's AIX, Sun's Solaris and SGI's Irix. Neterion has raised over $42M in funding, with its latest C round taking place in June 2004. Formerly known as S2io, the company changed its name to Neterion in January 2005. Further information on the company can be found at http://www.neterion.com/

About Chelsio Communications: Chelsio Communications is leading the convergence of networking, storage, and clustering interconnects with its robust, high-performance, and proven protocol acceleration technology. Featuring a highly scalable and programmable architecture, Chelsio is shipping 10-Gigabit Ethernet adapter cards with protocol offload, delivering the low latency and superior throughput required for high-performance computing applications. For more information, visit the company online at www.chelsio.com.

About the National Science Foundation: The NSF is an independent federal agency created by Congress in 1950 "to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense...." With an annual budget of about $5.5 billion, it is the funding source for approximately 20 percent of all federally supported basic research conducted by America's colleges and universities. In many fields such as mathematics, computer science, and the social sciences, NSF is the major source of federal backing.

About the DOE Office of Science: DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the nation, and ensures U.S. world leadership across a broad range of scientific disciplines. The Office of Science also manages 10 world-class national laboratories with unmatched capabilities for solving complex interdisciplinary problems, and it builds and operates some of the nation's most advanced R&D user facilities, located at national laboratories and universities. These facilities are used by more than 19,000 researchers from universities, other government agencies, and private industry each year.


Writer: 
Robert Tindol

Caltech Researchers Achieve First Electrowetting of Carbon Nanotubes

PASADENA, Calif.—If you can imagine the straw in your soda can being a million times smaller and made of carbon, you pretty much have a mental picture of a carbon nanotube. Scientists have been making them at will for years but have never gotten the nanotubes to suck up liquid metal to form tiny wires. In fact, conventional wisdom and hundreds of refereed papers say it is not even possible.

Now, with the aid of an 1875 study of mercury's electrical properties, researchers from the California Institute of Technology have succeeded in forcing liquid mercury into carbon nanotubes. Their technique could have important applications, including nanolithography, the production of nanowires with unique quantum properties, nano-sized plumbing for the transport of extremely small fluid quantities, and electronic circuitry many times smaller than the smallest in existence today.

Reporting in the December 2 issue of the journal Science, Caltech assistant professor of chemistry Patrick Collier and associate professor of chemical engineering Konstantinos Giapis describe their success in electrowetting carbon nanotubes. By "electrowetting" they mean that the voltage applied to a nanotube immersed in mercury causes the liquid metal to rise into the nanotube by capillary action and cling to the surface of its inner wall.

Besides its potential for fundamental research and commercial applications, Giapis says that the result is an opportunity to set the record straight. "We have found that when measuring the properties of carbon nanotubes in contact with liquid metals, researchers need to take into account that the application of a voltage can result in electrically activated wetting of the nanotube.

"Ever since carbon nanotubes were discovered in 1991, people have envisioned using them as molds to make nanowires or as nanochannels for flowing liquids. The hope was to have the nanotubes act like molecular straws," says Giapis.

However, researchers never got liquid metal to flow into the straws, and eventually dismissed the possibility that metal could do so at all because of surface tension. Mercury was considered totally unpromising because, as anyone who has played with liquid mercury in chemistry class knows, a glob will roll around a desktop without wetting anything it touches.

"The consensus was that the surface tension of metals was just too high to wet the walls of the nanotubes," adds Collier, the co-lead author of the paper. This is not to say that researchers have never been able to force anything into a nanotube: in fact, they have, albeit by using more complex and less controllable ways that have always led to the formation of discontinuous wires.

Collier and Giapis enter the picture because they had been experimenting with coating nanotubes with an insulator in order to create tiny probes for future medical and industrial applications. In attaching nanotubes to gold-coated atomic force microscope tips to form nanoprobes, they discovered that the setup provided a novel way of making liquid mercury rise in the tubes by capillary action.

Looking far beyond the nanotube research papers of the last decade, the researchers found an 1875 study by Nobel Prize-winning physicist Gabriel Lippmann that described in detail how the surface tension of mercury is altered by the application of an electrical potential. Lippmann's paper provided the starting point for Collier and Giapis to begin their electrowetting experiments.
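
In its standard textbook form (given here for context rather than quoted from the Science paper), Lippmann's result states that the effective surface tension of a mercury interface falls quadratically with applied voltage:

    \gamma(V) = \gamma_{0} - \tfrac{1}{2}\, c\, V^{2},

where \gamma_{0} is the surface tension at zero applied potential, c is the capacitance per unit area of the electrical double layer at the interface, and V is the applied voltage. Lowering the surface tension in this way is what allows a normally nonwetting liquid to be drawn into a narrow channel.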

After mercury entered the nanotubes with the application of a voltage, the researchers further discovered that the mercury rapidly escaped from the nanotubes immediately after the voltage was turned off. "This effect made it very difficult to provide hard proof that electrowetting occurred," Collier said. In the end, persistence and hard work paid off as the results in the Science paper demonstrate.

Giapis and Collier think that they will be able to drive various other metals into the nanotubes by employing the process at higher temperature. They hope to be able to freeze the metal nanowires in the nanotubes so that they remain intact when the voltage is turned off.

"We can pump mercury at this point, but it's possible that you could also pump nonmetallic liquids," Giapis says. "So we now have a way of pumping fluids controllably that could lead to nanofluidic devices. We envision making nano-inkjet printers that will use metal ink to print text and circuitry with nanometer precision. These devices could be scaled up to operate in a massively parallel manner. "

The paper is titled "Electrowetting in Carbon Nanotubes." In addition to Collier and Giapis, the other authors are Jinyu Chen, a postdoctoral scholar in chemistry, and Aleksandr Kutana, a postdoctoral scholar in chemical engineering.

Writer: 
Robert Tindol

Deciphering the Mystery of Bee Flight

PASADENA, Calif.- One of the most elusive questions in science has finally been answered: How do bees fly?

Although the issue is not as profound as how the universe began or what kick-started life on earth, the physics of bee flight has perplexed scientists for more than 70 years. In 1934, in fact, French entomologist August Magnan and his assistant André Sainte-Lague calculated that bee flight was aerodynamically impossible. The haphazard flapping of their wings simply shouldn't keep the hefty bugs aloft.

And yet, bees most certainly fly, and the dichotomy between prediction and reality has been used for decades to needle scientists and engineers about their inability to explain complex biological processes.

Now, Michael H. Dickinson, the Esther M. and Abe M. Zarem Professor of Bioengineering, postdoctoral scholar Douglas L. Altshuler, and their colleagues at Caltech and the University of Nevada at Las Vegas have figured out honeybee flight using a combination of high-speed digital photography, to snap freeze-frame images of bees in motion, and a giant robotic mock-up of a bee wing. The results of their analysis appear in the November 28 issue of the Proceedings of the National Academy of Sciences.

"We're no longer allowed to use this story about not understanding bee flight as an example of where science has failed, because it is just not true," Dickinson says.

The secret of honeybee flight, the researchers say, is the unconventional combination of short, choppy wing strokes, a rapid rotation of the wing as it flops over and reverses direction, and a very fast wing-beat frequency.

"These animals are exploiting some of the most exotic flight mechanisms that are available to insects," says Dickinson.

Their furious flapping speed is surprising, Dickinson says, because "generally the smaller the insect the faster it flaps. This is because aerodynamic performance decreases with size, and so to compensate small animals have to flap their wings faster. Mosquitoes flap at a frequency of over 400 beats per second. Birds are more of a whump, because they beat their wings so slowly."

Being relatively large insects, bees would be expected to beat their wings rather slowly, and to sweep them across the same wide arc as other flying bugs (whose wings cover nearly half a circle). They do neither. Their wings beat over a short arc of about 90 degrees, but ridiculously fast, at around 230 beats per second. Fruit flies, in comparison, are 80 times smaller than honeybees, but flap their wings only 200 times a second.

When bees want to generate more power--for example, when they are carting around a load of nectar or pollen--they increase the arc of their wing strokes, but keep flapping at the same rate. That is also odd, Dickinson says, because "it would be much more aerodynamically efficient if they regulated not how far they flap their wings but how fast."

Honeybees' peculiar strategy may have to do with the design of their flight muscles.

"Bees have evolved flight muscles that are physiologically very different from those of other insects. One consequence is that the wings have to operate fast and at a constant frequency or the muscle doesn't generate enough power," Dickinson says.

"This is one of those cases where you can make a mistake by looking at an animal and assuming that it is perfectly adapted. An alternate hypothesis is that bee ancestors inherited this kind of muscle and now present-day bees must live with its peculiarities," Dickinson says.

How honeybees make the best of it may help engineers in the design of flying insect-sized robots: "You can't shrink a 747 wing down to this size and expect it to work, because the aerodynamics are different," he says. "But the way in which bee wings generate forces is directly applicable to these devices."

###

Contact: Kathy Svitil (626) 395-8022 ksvitil@caltech.edu

Visit the Caltech Media Relations Web site at: http://pr.caltech.edu/media

Writer: 
KS

Powerful New Supercomputer Analyzes Earthquakes

PASADENA, Calif.- One of the most powerful computer clusters in the academic world has been created at the California Institute of Technology in order to unlock the mysteries of earthquakes.

The Division of Geological and Planetary Sciences' new Geosciences Computational Facility will feature a 2,048-processor supercomputer, housed in the basement of the Seeley G. Mudd Building of Geophysics and Planetary Science on campus.

Computer hardware fills long rows of black racks in the facility, each of which contains about 35 compute nodes. Massive air-conditioning units line an entire wall of the 20-by-80-foot room to recirculate and chill the air. Miles of optical-fiber cables tie the processors together into a working cluster that went online in September.

The $5.8 million parallel computing project was made possible by gifts from Dell, Myricom, Intel, and the National Science Foundation.

"The other crucial ingredient was Caltech's investment in the infrastructure necessary to house the new machine," says Jeroen Tromp, McMillan Professor of Geophysics and director of the Institute's Seismology Lab, who spearheaded the project. Some 500 kilowatts of power and 90 tons of air conditioning are needed to operate and cool the hardware.

David Kewley, the project's systems administrator, explained that 500 kilowatts is enough to power 350 average households.

Tromp's research group will share use of the cluster with other division professors and their research groups, while a job-scheduling system will make sure the facility runs at maximum possible capacity.

Tromp, who came to Caltech in 2000 from Harvard, is known as one of the world's leading theoretical seismologists. Until now, he and his Institute colleagues have used a smaller version of the machine, popularly known as a Beowulf cluster. Helping revolutionize the field of earthquake study, Tromp has created 3-D simulations of seismic events. He and former Caltech postdoctoral scholar Dimitri Komatitsch designed a computer model that divides the earth into millions of elements. Each element can be divided into slices that represent the earth's geological features.

In simulations involving tens of millions of operations per second, the seismic waves are propagated from one slice to the next, as they speed up, slow down, and change direction according to the earth's characteristics. The model is analogous to a CAT scan of the earth, allowing scientists to track seismic wave paths. "Much like a medical doctor uses a CAT scan to make an image of the brain, seismologists use earthquake-generated waves to image the earth's interior," Tromp says, adding that the earthquake's location, origin time, and characteristics must also be determined.
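
As a loose illustration of that slice-to-slice wave propagation, the toy one-dimensional finite-difference model below (written in Python with made-up grid sizes and wave speeds, and not the spectral-element code used by Tromp's group) marches a displacement pulse across a chain of cells whose wave speed changes halfway along, mimicking a change in geology:

    import numpy as np

    # Toy 1-D wave propagation: an impulse travels cell to cell, changing speed
    # where the material properties change (all numbers are illustrative).
    nx, nt = 400, 800            # grid cells, time steps
    dx, dt = 50.0, 0.004         # cell size (m), time step (s)
    c = np.full(nx, 3000.0)      # wave speed per cell (m/s)
    c[nx // 2:] = 4500.0         # faster material in the second half of the model

    u_prev = np.zeros(nx)        # displacement at the previous time step
    u_curr = np.zeros(nx)        # displacement at the current time step
    u_curr[20] = 1.0             # initial impulse standing in for an earthquake source

    for _ in range(nt):
        lap = np.zeros(nx)       # discrete second spatial derivative (fixed ends)
        lap[1:-1] = u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]
        u_next = 2.0 * u_curr - u_prev + (c * dt / dx) ** 2 * lap
        u_prev, u_curr = u_curr, u_next

    print("peak displacement after", nt, "steps:", float(np.abs(u_curr).max()))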

Tromp will now be able to deliver better, more accurate models in less time. "We hope to use the new machine to do much more detailed mapping. In addition to improving the resolution of our images of the earth's interior, we will also quantitatively assess the devastating effects associated with earthquakes based upon numerical simulations of strong ground motion generated by hypothetical earthquakes."

"One novel way in which we are planning to use the new machine is for near real-time seismology," Tromp adds. "Every time an earthquake over magnitude 3.5 occurs anywhere in California we will routinely simulate the motions associated with the event. Scientific products that result from these simulations are 'synthetic' seismograms that can be compared to actual seismograms."

The "real" seismograms are recorded by the Southern California Seismic Network (SCSN), operated by the Seismo Lab in conjunction with the U.S. Geological Survey. Tromp expects that the collaboration will also produce products of interest to the general public: synthetic ShakeMovies of recent quakes and synthetic ShakeMaps that can be compared to real ShakeMaps derived from the data. "These products should be available within an hour after the earthquake," he says. The Seismology Lab Media Center will be renovated with a large video wall on which scientists can show the results of simulations and analysis.

The new generation of seismic knowledge may also help scientists, engineers, and others lessen the potentially catastrophic effects of earthquakes.

"Intel is proud to be a sponsor of this premier system for seismic research which will be used by researchers and scientists," said Les Karr, Intel Corporate Business Development Manager. "The project reflects Caltech's growing commitment, in both research and teaching, to a broadening range of problems in computational geoscience. It is also a reflection of the growing use of commercial, commodity computing systems to solve some of the world's toughest problems."

The Dell equipment consists of 1,024 dual-processor Dell PowerEdge 1850 servers that were pre-assembled to simplify deployment. Dell Services representatives came to campus to complete the installation.

"CITerra, as this new research tool is known on the TOP500 Supercomputer list, is a proud accomplishment both for Caltech and for Myricom," said Charles Seitz, founder and CEO of Myricom, and a former professor of computer science at Caltech. "The talented technical team of Myricom about half of whom are Caltech alumni/ae, are eager for people to know that the architecture, programming methods, and technology of cluster computing was pioneered at Caltech 20 years ago. Those of us at Myricom who have drawn so much inspiration from our Caltech years are delighted to give some of the results of our efforts back to Caltech."

About Myricom: Founded in 1994, Myricom, Inc. created Myrinet, the high-performance computing (HPC) interconnect technology used in thousands of computing clusters in more than 50 countries worldwide. With its next-generation Myri-10G solutions, Myricom is bridging the gap between the rigorous demands of traditional HPC applications and the growing need for affordable computing speed in mainstream enterprises. Privately held, Myricom achieved and has sustained profitability since 1995 with 42 consecutive profitable quarters through September 2005. Based in Arcadia, California, Myricom solutions are sold direct and through channels. Myrinet clusters are supplied by OEM computer companies including IBM, HP, Dell, and Sun, and by other leading cluster integrators worldwide.

About Intel: Intel, the world's largest chipmaker, is also a leading manufacturer of computer, networking, and communications products. Intel processors, platform architectures, interconnects, networking technology, software tools, and services power some of the fastest computers in the world at price points that have expanded high performance computing beyond the confines of elite supercomputer centers and into the broad community of customers in mainstream industries. Those industries span automotive, aerospace, electronics manufacturing, energy and oil and gas in addition to scientific, research and academic organizations.

About the National Science Foundation: The NSF is an independent federal agency created by Congress in 1950 "to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense..." With an annual budget of about $5.5 billion, it is the funding source for approximately 20 percent of all federally supported basic research conducted by America's colleges and universities. In many fields such as mathematics, computer science, and the social sciences, NSF is the major source of federal backing.

###

Contact: Jill Perry (626) 395-3226 jperry@caltech.edu

Visit the Caltech Media Relations Web site at: http://pr.caltech.edu/media

Writer: JP

Modified Mice Test Alzheimer's Disease Drugs

PASADENA, Calif.—Alzheimer's disease is a progressive brain disorder that afflicts an estimated 4.5 million Americans and that is characterized by the presence of dense clumps of a small peptide called amyloid-beta in the spaces between neurons.

Developing therapeutic drugs to stop the formation of the lesions, called amyloid plaques, and to remove them from the brain has become the focus of intense research efforts by pharmaceutical companies. Unfortunately, methods for testing the efficacy of these drugs are limited, as is outside researchers' access to the test results.

Now neuroscientist Joanna L. Jankowsky, a senior research fellow in the laboratory of Henry A. Lester, Bren Professor of Biology at the California Institute of Technology, in collaboration with David R. Borchelt at the University of Florida, Gainesville, and colleagues at Johns Hopkins School of Medicine, Mayo Clinic Jacksonville, and the National Cancer Institute, has created a strain of genetically engineered mice that offers an unprecedented opportunity to test these new drugs and provides striking insight into possible future treatment for the disease.

A paper about the mouse model was published November 15 in the international open-access medical journal PLoS Medicine (www.plosmedicine.org).

The amyloid-beta peptide is something of an enigma. It is known to be produced normally in the brain and to be churned out in excess in Alzheimer's disease. But researchers don't know exactly what purpose the molecule usually serves--or, indeed, what happens to dramatically raise its concentration in the Alzheimer's brain.

The peptide is created when a molecule called amyloid precursor protein (APP) is snipped in two places, at the front end by an enzyme called beta-APP cleaving enzyme, and at the back end by an enzyme called gamma-secretase. If either of those two cuts is blocked, the amyloid-beta protein won't be released--and plaque won't build up in the brain.

To prevent plaques from accumulating, drug companies have been experimenting with compounds that inhibit one or the other of the enzymes, thereby blocking the release of amyloid-beta. Jankowsky and her colleagues decided to test how well this approach to treating Alzheimer's disease would work. Because they lacked access to the drugs themselves, they instead engineered a laboratory mouse with two added genes that mimic the effect of secretase-inhibitor treatment. One gene triggered the continuous production of APP in the brain (and thus also of the amyloid-beta peptide), leading to substantial plaque deposits in mice as young as six months old. The second gene served as an off-switch for amyloid-beta production. The researchers could flip the switch at will by adding the antibiotic tetracycline to the mice's food--and when they did so, plaque formation halted.

"The key point here is that we've completely arrested the progression of the pathology," says Jankowsky.

Plaque deposits that had already formed, however, weren't cleared out.
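That behavior can be pictured with a toy bookkeeping model in which plaque accumulates while the transgene is active and, as the study found, persists once deposited; the rate and time span below are arbitrary illustrative values, not measurements from the paper.

    # Toy model of plaque load in the engineered mice: deposits grow while
    # amyloid-beta is being produced and, per the study, persist once formed.
    # The rate and timing below are arbitrary illustrative values.
    deposition_per_month = 1.0
    switch_off_month = 6            # tetracycline added to the food at month 6

    plaque_load = 0.0
    for month in range(1, 13):
        producing = month <= switch_off_month
        if producing:
            plaque_load += deposition_per_month   # growth only while production is on
        # no clearance term: existing plaques remain stable after the switch
        state = "production on" if producing else "production off"
        print(f"month {month:2d}: plaque load = {plaque_load:.1f} ({state})")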

"We can stop the disease from getting worse in these mice, but we can't reverse it," says study co-author David Borchelt, Jankowsky's former postdoctoral research advisor at Johns Hopkins University. "Although it is possible that human brains repair damage better than mouse brains, the study suggests that it may be difficult to repair lesions once they have formed."

One implication of the research is that it suggests that treatment with drugs to stop plaque formation should begin as soon as possible after the disease is diagnosed. "It looks like early intervention would be the most effective way of treating disease," Jankowsky says.

"It was surprising to many people that the plaques didn't go away, but they are really very stable structures," says Jankowsky. It is also possible, some researchers believe, that the plaques themselves aren't damaging. Rather, they may be a sign of the overproduction of amyloid-beta and of the small, free-floating clumps of the peptide that actually cause cognitive problems. "The plaques may simply act as trash cans for what has already been produced," she says. If that is indeed the case, Jankowsky says, then "shutting down the production of amyloid-beta itself would be adequate to reverse cognitive decline."

On the other hand, removal of the plaques could improve cognitive function by allowing neurons that had been displaced by the protein deposits to re-form and make new neural connections. That is why, the researchers say, an ideal therapy would be one that both prevented the overproduction of new amyloid-beta and cleared out existing deposits.

Drug companies are currently investigating treatment protocols for Alzheimer's disease in which antibodies against the amyloid-beta peptide are directly injected into the body. The antibodies latch onto the molecule and quickly clear it from the brain, along with any plaque deposits that have already formed. However, Jankowsky says, these drug therapies may not be appropriate for long-term use because of possible side effects. One clinical trial of the antibodies had to be stopped because some patients developed a serious brain inflammation known as encephalitis.

"The upshot of this research is that a combination of approaches may be the best way to tackle Alzheimer's disease," Jankowsky says. "The idea would be to use immunotherapy to acutely reverse the damage, followed by chronic secretase inhibition to prevent it from ever recurring."

For a copy of the paper, go to http://medicine.plosjournals.org/perlserv/?request=get-document&doi=10.1...

###

Contact: Dr. Joanna L. Jankowsky (626) 395-6884 jlj2@caltech.edu

Kathy Svitil (626) 395-8022 ksvitil@caltech.edu

Visit the Caltech Media Relations Web site at: http://pr.caltech.edu/media

Writer: KS

Researchers Uncover New Details About How Signals Are Transmitted in the Brain

PASADENA, Calif.—An international team of scientists has announced a breakthrough in understanding the molecular details of how signals move around in the human brain. The work is basic research, but it could help pharmacologists design new drugs for treating a host of neurological disorders, as well as drugs for reducing alcohol and nicotine craving.

Reporting in the November 11 issue of the journal Nature, researchers from the California Institute of Technology and the University of Cambridge explain how they have learned to force a protein known as the 5-HT3 receptor to change its function by chemically changing the shape of one of the amino acids from which it is built. Using a technique developed at Caltech known as "unnatural amino acid mutagenesis," the researchers altered a proline amino acid in the 5-HT3 protein in order to modulate the receptor's ion channel. This gave the researchers control of the "switch" that is involved in neuron signaling.

According to Dennis Dougherty, lead author of the paper and the Hoag Professor of Chemistry at Caltech, the new research solves a 50-year-old mystery of how a neuroreceptor is changed by a chemical signal. Scientists have long known that signaling in the brain is a chemical process, in which a chemical substance known as a neurotransmitter is released into the synapse of a nerve and binds to a neuroreceptor, which is a protein that is found in the surface membranes of neurons. The action of the neurotransmitter changes the neuroreceptor in such a way that a signal is transmitted, but the precise nature of the structural change was unknown until now.

"The key is that we've identified the switch that has to get thrown when the neuroreceptor sends a signal," Dougherty says. "This switch is a proline."

The 5-HT3 receptor belongs to a group of molecular structures in brain cells known as Cys-loop receptors, which are associated with Parkinson's disease, schizophrenia, and learning and attention deficit disorders, as well as alcoholism and nicotine addiction. For treatments of some of these conditions, pharmacologists already custom-design drugs that have a general effect on the Cys-loop receptors. But the hope is that better design at the molecular level will lead to much better treatments that address more precisely the underlying signaling problems.

Dougherty says the work required the collaboration of organic chemists, molecular biologists, electrophysiologists, and computer modelers. His Caltech group worked closely with the research group of Caltech biologist Henry Lester, and with the group at Cambridge headed by Sarah Lummis, to establish how proline changes its structure to open an ion channel and launch a neuron signal.

"This is the most precise model of receptor signaling yet developed, and it provides valuable insights into the nature of neuroreceptors and the drugs that modulate them," Dougherty says.

"The promise for pharmacology is that precise control of the signaling could lead to new ways of dealing with receptors that are malfunctioning," says Lester, Caltech's Bren Professor of Biology. "The fundamental understanding of how this all works is of value to people who want to manipulate the signaling."

The 5-HT3 receptor is also involved in the enjoyment people derive from drinking alcohol. If the 5-HT3 receptors are blocked, then alcoholics no longer get as much pleasure from drinking. Therefore, better control of the signaling mechanism could lead to more potent drug interventions for alcoholics. The nicotine receptors are also related, so progress could also lead to better ways of reducing the craving for nicotine.

In addition to Dougherty, Lester, and Lummis, the other authors of the paper are Caltech graduate students Darren Beene (now graduated) and Lori Lee, and Cambridge researcher William Broadhurst.

The research is supported by the National Institute of Neurological Disorders and Stroke.

Writer: Robert Tindol

North Atlantic Corals Could Lead to Better Understanding of the Nature of Climate Change

PASADENA, Calif.—The deep-sea corals of the North Atlantic are now recognized as "archives" of Earth's climatic past. Not only are they sensitive to changes in the mineral content of the water during their 100-year lifetimes, but they can also be dated very accurately.

In a new paper appearing in Science Express, the online publication of the American Association for the Advancement of Science (AAAS), environmental scientists describe their recent advances in "reading" the climatic history of the planet by looking at the radiocarbon of deep-sea corals known as Desmophyllum dianthus.

According to lead author Laura Robinson, a postdoctoral scholar at the California Institute of Technology, the work shows in principle that coral analysis could help solve some outstanding puzzles about the climate. In particular, environmental scientists would like to know why Earth's climate has held so steady for the last 10,000 years or so after having previously been so variable.

"These corals are a new archive of climate, just like ice cores and tree rings are archives of climate," says Robinson, who works in the Caltech lab of Jess Adkins, assistant professor of geochemistry and global environmental science, and also an author of the paper.

"One of the significant things about this study is the sheer number of corals we now have to work with," says Adkins, "We've now collected 3,700 corals in the North Atlantic, and have been able to study about 150 so far in detail. Of these, about 25 samples were used in the present study.

"To put this in perspective, I wrote my doctoral dissertation with two dozen corals available," Adkins adds.

The corals that are needed to tell Earth's climatic story are typically found at depths of a few hundred to several thousand meters. Scuba divers, by contrast, can go only about 50 to 75 meters below the surface. Moreover, the water is bitterly cold and the seas are choppy. To complicate matters further, the corals can be hard to find.

The solution has been for the researchers to harvest the corals with a research submersible. The star of the ventures so far has been the deep-submergence vehicle Alvin, famed for its dives to the wreck of the Titanic some years back. In a 2003 expedition several hundred miles off the coast of New England, Alvin brought back the aforementioned 3,700 corals from the New England Seamounts.

D. dianthus is especially useful because it lives a long time, can be dated very precisely by uranium dating, and records the variations in carbon-14 (or radiocarbon) caused by changing ocean currents. The carbon-14 all originally came from the atmosphere and decays at a precisely known rate, whether it is found in the water itself or in the skeleton of a coral. The less carbon-14 found, the "older" the water, and the carbon-14 age of the coral is correspondingly "older" than its uranium age. The larger the age difference, the older the water that bathed the coral in the past.
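A minimal sketch of that two-clock comparison, ignoring changes in atmospheric carbon-14 over time and using illustrative numbers rather than measurements from these corals, looks like this:

    import math

    # Two-clock sketch: the radiocarbon "age" implied by the C-14 remaining,
    # minus the uranium-series calendar age, gives the apparent age of the
    # water that bathed the coral. Values are illustrative, not measured.
    HALF_LIFE_C14 = 5730.0                       # years, approximate
    decay_constant = math.log(2) / HALF_LIFE_C14

    def radiocarbon_age(fraction_remaining):
        """Apparent C-14 age from the fraction of carbon-14 left."""
        return -math.log(fraction_remaining) / decay_constant

    calendar_age = 15_000.0      # hypothetical uranium-series date of the skeleton
    fraction_remaining = 0.135   # hypothetical measured C-14 fraction

    c14_age = radiocarbon_age(fraction_remaining)
    water_age = c14_age - calendar_age
    print(f"C-14 age ~{c14_age:,.0f} yr; the water was ~{water_age:,.0f} yr 'old'")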

In a perfectly tame and orderly environment, the deepest water would be the most depleted of carbon-14, because water at that depth has been out of contact with the atmosphere the longest, giving the isotope the most time to decay. A sampling of carbon-14 content at various depths would therefore allow a graph to be constructed in which the maximum carbon-14 content is found at the surface.

In the real world, however, the oceans circulate. As a result, an "older" mass of water can actually sit on top of a "younger" mass. What's more, the way ocean waters circulate is tied to climatic variations. A more realistic graph plotting carbon-14 content against depth would thus be rather wavy, with steeper curves meaning a faster rate of new water flushing in and flatter curves corresponding to relatively unperturbed water.

The researchers can get this information by cutting up the individual corals and measuring their carbon-14 content. During the animals' 100-year life spans, they take in minerals from the water and use the minerals to build their skeletons. The calcium carbonate fossil we see, then, is a skeleton of an animal that may have just died or may have lived thousands of years ago. But in any case, the skeleton is a 100-year record of how much carbon-14 was washing over the creature's body during its lifetime.

An individual coral can tell a story of the water it lived in because the amount of variation in different parts of the growing skeleton is an indication of the kind of water that was present. If a coral sample shows a big increase in carbon-14 about midway through life, then one can assume that a mass of younger water suddenly bathed the coral. On the other hand, if a huge decrease of carbon-14 is observed, then an older water mass must have suddenly moved in.

A coral with no change in the amount of carbon-14 observed in its skeleton means that things were pretty steady during its 100-year lifetime, but the story may be different for a coral at a different depth, or one that lived at a different time.
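In code form, the reading rule for a single skeleton amounts to comparing the carbon-14 levels in successive growth bands; the numbers below are invented for illustration.

    # Hypothetical C-14 fractions measured in successive growth bands of one
    # coral skeleton, oldest band first. The values are invented.
    bands = [0.140, 0.139, 0.141, 0.128, 0.127, 0.126]

    change = bands[-1] - bands[0]
    if change > 0.005:
        print("C-14 rose: a 'younger', recently ventilated water mass moved in")
    elif change < -0.005:
        print("C-14 fell: an 'older', C-14-depleted water mass moved in")
    else:
        print("little change: the water bathing the coral stayed much the same")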

In sum, the corals tell how the waters were circulating, which in turn is profoundly linked to climatic change, Adkins explains.

"The last 10,000 years have been relatively warm and stable-perhaps because of the overturning of the deep ocean," he says. "The deep ocean has nearly all the carbon, nearly all the heat, and nearly all the mass of the climate system, so how these giant masses of water have sloshed back and forth is thought to be tied to the period of the glacial cycles."

Details of glaciation can be studied in other ways, but getting a history of water currents is a lot trickier, Adkins adds. And if the ocean currents themselves are implicated in climatic change, then knowing precisely how the rules work would be a great advance in our knowledge of the planet.

"These guys provide us with a powerful new way of looking into Earth's climate," he says. "They give us a new way to investigate how the rate of ocean overturning has changed in the past."

Robinson says that the current collection of corals comes entirely from the North Atlantic. Future plans call for an expedition to the area southeast of the southern tip of South America to collect more corals. Adding a second collection would give a more comprehensive picture of the global history of ocean overturning, she says.

In addition to Robinson and Adkins, the other authors of the paper are Lloyd Keigwin of the Woods Hole Oceanographic Institution; John Southon of the University of California at Irvine; Diego Fernandez and Shin-Ling Wang of Caltech; and Dan Scheirer of the U.S. Geological Survey office at Menlo Park.

The Science Express article will be published in a future issue of the journal Science.

Writer: Robert Tindol

Geologists Uncover New Evidence About the Rise of Oxygen

PASADENA, Calif.—Scientists believe that oxygen first showed up in the atmosphere about 2.7 billion years ago. They think it was put there by one-celled organisms called cyanobacteria, which had recently become the first living things on Earth to make oxygen from water and sunlight.

The rock record provides a good bit of evidence that this is so. But one of these rocks has just gotten a great deal more slippery, so to speak.

In an article appearing in the Geological Society of America's journal Geology, investigators from the California Institute of Technology, the University of Tübingen in Germany, and the University of Alberta describe their new findings about the origin of the mineral deposits known as banded-iron formations, or "BIFs." A rather attractive rock that is often cut and polished for paperweights and other decorative items, a BIF typically has alternating bands of iron oxide and silica. How the iron got into the BIFs to begin with is thought to be a key to knowing when molecular oxygen was first produced on Earth.

The researchers show that purple bacteria—primitive organisms that have thrived on Earth without producing oxygen since before cyanobacteria first evolved—could also have laid down the iron oxide deposits that make up BIFs. Further, the research shows that the newer cyanobacteria, which suddenly evolved the ability to make oxygen through photosynthesis, could have even been floating around when the purple bacteria were making the iron oxides in the BIFs.

"The question is what made the BIFs," says Dianne Newman, who is associate professor of geobiology and environmental science and engineering at Caltech and an investigator with the Howard Hughes Medical Institute. "BIFs are thought to record the history of the rise of oxygen on Earth, but this may not be true for all of them."

The classical view of how the BIFs were made is that cyanobacteria began putting oxygen in the atmosphere about 2.7 billion years ago. At the same time, hydrothermal sources beneath the ocean floors caused ferrous iron (that is, "nonrusted" iron) to rise in the water. This iron then reacted with the new oxygen in the atmosphere, which caused the iron to change into ferric iron. In other words, the iron literally "rusted" at the surface of the ocean waters, and then ultimately settled on the ocean floor as sediments of hematite (Fe2O3) and magnetite (Fe3O4).

The problem with this scenario is that, about 10 years ago, scientists in Germany discovered a way the more ancient purple bacteria could oxidize iron without any oxygen at all: these anaerobic bacteria can use a photosynthetic process in which light and carbon dioxide, rather than oxygen, turn ferrous iron into ferric iron. That discovery threw the mechanism of BIF formation into question.
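In balanced form (these are standard textbook stoichiometries, not equations taken from the Geology paper), the oxygen-dependent route to "rust" can be written as

\[ 4\,\mathrm{Fe}^{2+} + \mathrm{O}_2 + 10\,\mathrm{H_2O} \longrightarrow 4\,\mathrm{Fe(OH)}_3 + 8\,\mathrm{H}^+ , \]

while the purple-bacteria route, driven by light ($h\nu$) with no oxygen at all, is

\[ 4\,\mathrm{Fe}^{2+} + \mathrm{CO}_2 + 11\,\mathrm{H_2O} \xrightarrow{h\nu} \mathrm{CH_2O} + 4\,\mathrm{Fe(OH)}_3 + 8\,\mathrm{H}^+ , \]

with the ferric hydroxide later dehydrating into iron-oxide minerals such as hematite.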

Newman's postdoctoral researcher Andreas Kappler (now an assistant professor at the University of Tübingen) expanded on this discovery by doing some lab experiments to measure the rate at which purple bacteria could form ferric iron under light conditions relevant for different depths within the ocean.

Kappler's results showed that iron could indeed have been oxidized by these bacteria, in amounts matching what would have been necessary to form one of the Precambrian iron deposits in Australia.

Another of the paper's Caltech authors, Claudia Pasquero, determined the thickness of the purple bacterial layer that would have been needed for complete iron oxidation. Her results showed that a layer on the order of 17 meters thick, lying below the wave base, would have sufficed, which compares favorably with what is seen today in stratified water bodies such as the Black Sea.
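The style of that estimate can be sketched as a one-line dimensional argument: the layer must be thick enough that its volumetric oxidation rate, integrated over depth, matches the iron flux being deposited. The two numbers below are hypothetical placeholders chosen only to show the arithmetic, not the rates or fluxes used in the actual study.

    # Dimensional sketch of the layer-thickness estimate. Both inputs are
    # hypothetical placeholders, not the values used in the actual study.
    required_fe_flux = 0.5     # mol of Fe(II) to oxidize per m^2 of seafloor per day
    oxidation_rate = 0.03      # mol of Fe(II) a 1-m^3 slab of the bacterial layer
                               # can oxidize per day at the available light level

    layer_thickness_m = required_fe_flux / oxidation_rate
    print(f"required bacterial layer thickness ~{layer_thickness_m:.0f} m")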

Also, the results show that, in principle, the purple bacteria could have oxidized all the iron seen in the BIFs, even if the cyanobacteria had been present in overlying waters.

However, Newman says that the rock record contains various other kinds of evidence that oxygen was indeed absent in the atmosphere earlier than 2.7 billion years ago. Therefore, the goal of better understanding the history of the rise of oxygen could come down to finding out if there are subtle differences between BIFs that could have been produced by cyanobacteria and/or purple bacteria. And to do this, it's best to look at the biology of the organisms.

"The hope is that we'll be able to find out whether some organic compound is absolutely necessary for anaerobic anoxygenic photosynthesis to occur," Newman says. "If we can know how they work in detail, then maybe we'll be fortunate enough to find one molecule really necessary."

A good candidate is an organic molecule with high geological preservation potential that would have existed in the purple bacteria three billion years ago and still exists today. If the Newman team could find such a molecule that is definitely involved in the changing of iron to iron oxide, and is not present in cyanobacteria, then some of the enigmas of oxygen on the ancient earth would be solved.

"The goals are to get at the types of biomolecules essential for different types of photosynthesis-hopefully, one that is preservable," Newman says.

"I guess one interesting thing from our findings is that you can get rust without oxygen, but this is also about the history of metabolic evolution, and the ability to use ancient rock to investigate the history of life."

A better understanding of microbial metabolism could also be of use in NASA's ambitious goal of looking for life on other worlds. The question of which organisms made the BIFs on Earth, therefore, could be useful for astrobiologists who may someday find similar evidence in rock records elsewhere.

Writer: Robert Tindol

Cracks or Cryovolcanoes? Surface Geology Creates Clouds on Titan

PASADENA, Calif.—Like the little engine that could, geologic activity on the surface of Saturn's moon Titan (perhaps outgassing cracks, perhaps icy cryovolcanoes) is belching puffs of methane gas into the moon's atmosphere, creating clouds.

This is the conclusion of planetary astronomer Henry G. Roe, a postdoctoral researcher, and Michael E. Brown, professor of planetary astronomy at the California Institute of Technology. Roe, Brown, and their colleagues at Caltech and the Gemini Observatory in Hawaii based their analysis on new images of distinctive clouds that sporadically appear in the middle latitudes of the moon's southern hemisphere. The research will appear in the October 21 issue of the journal Science.

The clouds provide the first explanation for a long-standing Titan mystery: From where does the atmosphere's copious methane gas keep coming? That methane is continuously destroyed by the sun's ultraviolet rays, in a process called photolysis. This photolysis forms the thick blanket of haze enveloping the moon, and should have removed all of Titan's atmospheric methane billions of years ago.

Clearly, something is replenishing the gas-and that something, say Roe and his colleagues, is geologic activity on the surface. "This is the first strong evidence for currently active methane release from the surface," Roe says.

Adds Brown: "For a long time we've wondered why there is methane in the atmosphere of Titan at all, and the answer is that it spews out of the surface. And what is tremendously exciting is that we can see it, from Earth; we see these big clouds coming from above these methane vents, or methane volcanoes. Everyone had thought that must have been the answer, but until now, no one had found the spewing gun."

Roe, Brown, and their colleagues made the discovery using images obtained during the past two years by adaptive optics systems on the 10-meter telescope at the W. M. Keck Observatory on Mauna Kea in Hawaii and the neighboring 8-meter telescope at the Gemini North Observatory. Adaptive optics is a technique that removes the blurring of atmospheric turbulence, creating images as sharp as would be obtained from space-based telescopes.

"These results came about from a collaborative effort between two very large telescopes with adaptive optics capability, Gemini and Keck," says astronomer Chadwick A. Trujillo of the Gemini Observatory, a co-author of the paper. "At both telescopes, the science data were collected from only about a half an hour of images taken over many nights. Only this unusual 'quick look' scheduling could have produced these unique results. At most telescopes, the whole night is given to a single observer, which could not have produced this science."

The two telescopes observed Titan on 82 nights. On 15 nights, the images revealed distinctive bright clouds-two dozen in all-at midlatitudes in the southern hemisphere. The clouds usually popped up quickly, and generally had disappeared by the next day. "We have several observations where on one night, we don't see a cloud, the next night we do, and the following night it is gone," Roe says.

Some of the clouds stretched as much as 2,000 km across the 5,150-km-diameter moon. "An equivalent cloud on Earth would cover from the east coast to the west coast of the United States," Roe says. Although the precise altitude of the clouds is not known, they lie somewhere between 10 and 35 km above the surface, within Titan's troposphere (most cloud activity on Earth also occurs within the troposphere).

Notably, all of the clouds were located within a relatively narrow band at around 40 degrees south latitude, and most were clustered tightly near 350 degrees west longitude. Both their sporadic appearance and their specific geographic location led the researchers to conclude that the clouds were not arising from the regular convective overturn of the atmosphere due to its heating by the sun (which produces the cloud cover across the moon's southern pole) but, rather, that some process on the surface was creating the clouds.

"If these clouds were due only to the global wind pattern, what we call general circulation, there's no reason the clouds should be linked to a single longitude. They'd be found in a band around the entire moon," Roe says.

Another possible explanation for the clouds' patchy formation is variation in the albedo, or brightness, of the surface. Darker surfaces absorb more sunlight than lighter ones. The air above those warmer spots would be heated, then rise and form convective clouds, much like thunderstorms on a summer's day on Earth. Roe and his colleagues, however, found no differences in the brightness of the surface at 40 degrees south latitude. Clouds can also form over mountains when prevailing winds force air upward, but in that case the clouds should always appear in the identical locations. "We see the clouds regularly appear in the same geographic region, but not always in the exact same location," says Roe.

The other way to make a cloud on Titan is to raise the humidity by directly injecting methane into the atmosphere, and that, the scientists say, is the most likely explanation here.

Exactly how the methane is being injected is still unknown. It may seep out of transient cracks on the surface, or bubble out during the eruption of icy cryovolcanoes.

Although no such features have yet been observed on the moon, Roe and his colleagues believe they may be common. "We think there are numerous sources all over the surface, of varying size, but most below the size that we could see with our instruments," he says.

One large feature near 350 degrees west longitude is probably creating the clump of clouds that forms in that region, while also humidifying the band at 40 degrees latitude, Roe says, "so you end up creating areas where the humidity is elevated by injected methane, making it easier for another, smaller source to also generate clouds. They are like weather fronts that move through. So we are seeing weather, on another planet, with something other than water. With methane. That's cool. It's better than science fiction."

Images are available upon request. For advance copies of the embargoed paper, contact the AAAS Office of Public Programs, (202) 326-6440 or scipak@aaas.org.

###

Contact: Dr. Henry G. Roe (626) 395-8708 hroe@gps.caltech.edu

Kathy Svitil, Caltech Media Relations (626) 395-8022 ksvitil@caltech.edu

Visit the Caltech Media Relations Web site at: http://pr.caltech.edu/media

Writer: KS
