Protoplanetary Disk Found Encircling Mira B

SEATTLE—Astronomers generally assume that the dusty disks where planets form are found around young stars in stellar nurseries. Now, for the first time, a protoplanetary disk has been found in the environment of a dying star.

A team of astronomers is reporting today at the winter meeting of the American Astronomical Society that material from the dying star Mira A is being captured into a disk around Mira B, its companion. Michael Ireland of the California Institute of Technology and his coauthors, John Monnier from the University of Michigan, Peter Tuthill from the University of Sydney, and Richard Cohen from the W. M. Keck Observatory, say that the finding implies that there should be many similar undiscovered systems in the solar neighborhood, providing a myriad of new places to look for young extrasolar planets.

Located 350 light-years away in the constellation of Cetus, Mira (christened the "miracle star") first shook the foundations of the astronomy world 400 years ago with its changing brightness. Visible to the naked eye for about one month at a time, it becomes 1,000 times fainter and disappears from view, only to reappear on an 11-month cycle.

"When looking at one of the most celebrated and well-studied stars in the galaxy, I was amazed to find something new and unexpected," says Ireland. "The discovery not only changes the way we think about a star that's important historically, but also how we'll look at similar stars in the future."

Although Mira was once a star very similar to the sun, it is now in its death throes as it loses its dusty outer layers at a rate of one Earth-mass every seven years. If Mira were a single star, all this material would travel into outer space. However, like two out of every three star systems, Mira has a companion star that orbits around it, in this case with a period of about 1,000 years. This companion, Mira B, has a gravitational field that catches nearly one percent of the material lost from Mira A.
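Taking the article's figures at face value (one Earth-mass shed every seven years, roughly one percent captured by Mira B, and about a million years until Mira A becomes a white dwarf), a quick back-of-the-envelope sketch shows how much raw material the disk could accumulate. The numbers are all from the text above; the exactness of the one-percent capture fraction is an assumption:

```python
# Rough estimate of how fast Mira B accumulates disk material,
# using only the figures quoted in the article.

mass_loss_per_year = 1 / 7   # Earth masses shed by Mira A per year
capture_fraction = 0.01      # ~1% caught by Mira B's gravity (article's figure)

captured_per_year = mass_loss_per_year * capture_fraction  # Earth masses/yr

# Over the ~1 million years before Mira A becomes a white dwarf:
total_captured = captured_per_year * 1e6   # Earth masses

print(f"Captured per year:  {captured_per_year:.5f} Earth masses")
print(f"Captured over 1 Myr: {total_captured:.0f} Earth masses")
```

By this crude estimate the disk could gather on the order of a thousand Earth masses of raw material over Mira A's remaining lifetime, though the article itself makes no claim about the disk's final mass.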

By using specialized high-contrast techniques at the 10-meter Keck I telescope in Hawaii and the 8-meter Gemini South telescope in Chile, Ireland's team discovered heat radiation coming not only from Mira B itself, but also from a location offset from Mira B by a distance equivalent to Saturn's orbit.

"Observing Mira in the infrared is like staring straight down the barrel of one of the brightest searchlights in the galaxy. It came as a real revelation to see this faint mote of dust, harboring all the possibilities of new worlds in formation, against the hostile environment of the Red Giant," says Tuthill.

Monnier agrees, saying "Our new imaging method at Keck is revealing new details that were thought to be impossible to detect due to the blurring by atmospheric turbulence. In this case, the 'detail' we discovered is potentially a whole new class of planetary system in formation."

The intense radiation from Mira A, 5,000 times brighter than the sun, heats the edge of the disk to about Earth's temperature and causes it to glow in the infrared. The researchers were able to show that the material was indeed the edge of a disk and not just a "clump" in the wind from Mira A. By modeling the way that this system captures the outflow from Mira A, the researchers were also able to confirm that Mira B is simply an ordinary star like the sun, although about half as massive.

The key part of this result is what will happen when Mira A finishes its death throes and becomes a white dwarf in about one million years. The disk-creating process will have finished and the disk itself will be capable of forming new planets.

"This discovery opens up a new way to search for young planets, by searching in double star systems that contain white dwarfs," Ireland says. "The expected abundance of these systems means that we can find planets that we know are young around stars like our sun."

Astronomers associate the death of a star with the death of its planetary system. Here, the opposite is happening. Ireland adds, "An aging star is laying the foundation for a new generation of planets."

Similar systems could be discovered and studied by future instruments such as the Thirty-Meter Telescope (TMT).

The work was supported by the Australian Research Council and the NASA Navigator Program.

Robert Tindol

New 3-D Map of Dark Matter Reveals Cosmic Scaffolding

SEATTLE—An international team of astronomers has created a comprehensive three-dimensional map that offers a first look at the weblike large-scale distribution of dark matter in the universe. Dark matter is an invisible form of matter that accounts for most of the universe's mass, but that so far has eluded direct detection, or even a definitive explanation for its makeup.

The map is being unveiled today at the 209th meeting of the American Astronomical Society, and the results are being published simultaneously online by the journal Nature.

According to Richard Massey, an astronomer at the California Institute of Technology who led the map's creation, the map provides the best evidence yet that normal matter, largely in the form of galaxies, forms along the densest concentrations of dark matter. The map reveals a loose network of filaments that grew over time and that intersect in massive structures at the locations of clusters of galaxies.

Massey calls dark matter "the scaffolding inside of which stars and galaxies have been assembled over billions of years."

Because the galaxies depicted stretch halfway back to the beginning of the universe, the research also shows how dark matter has grown increasingly clumpy as it continues collapsing under gravity. The new maps of dark matter and galaxies will provide critical observational underpinnings for future theories of how structure formed in the evolving universe under the relentless pull of gravity.

Mapping dark matter's distribution in space and time is fundamental to understanding how galaxies grew and clustered over billions of years, as predicted by cosmological models. Tracing the growth of clustering in the dark matter may eventually also shed light on dark energy, a repelling form of gravity that influences how dark matter clumps.

The map was derived from the Hubble Space Telescope's widest survey of the universe, called COSMOS (for Cosmic Evolution Survey), led by Nick Scoville, the Moseley Professor of Astronomy at Caltech. In making the COSMOS survey, the Hubble imaged 575 slightly overlapping views of the universe using the onboard Advanced Camera for Surveys (ACS). The survey took nearly 1,000 hours of observations and is the largest project ever conducted with the Hubble.

The three-dimensional map was developed by measuring the shapes of as many as half a million faraway galaxies. These shapes are distorted by the bending of light paths by concentrations of dark-matter mass in the foreground along the line of sight. The subtle distortions observed in the galaxies' shapes were then used to reconstruct the distribution of intervening mass projected along the Hubble's line of sight.
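The shape-measurement step can be illustrated with a toy calculation: intrinsic galaxy orientations are random, so averaging the ellipticities of many galaxies in a patch of sky cancels the intrinsic shapes and leaves only the coherent lensing distortion (the "shear"). The sketch below uses synthetic galaxies and an assumed few-percent shear; it illustrates the principle only and is not the COSMOS pipeline:

```python
# Toy illustration of weak gravitational lensing: a coherent shear
# imprinted on randomly oriented galaxies is recovered by averaging
# their ellipticity components. (Synthetic data, not the COSMOS pipeline.)
import math
import random

random.seed(42)

true_shear = (0.03, -0.01)   # (g1, g2): a few percent, typical of lensing

# Each galaxy: a random intrinsic ellipticity plus the lensing shear.
galaxies = []
for _ in range(200_000):
    e_mag = random.uniform(0.0, 0.3)       # intrinsic ellipticity modulus
    phi = random.uniform(0.0, math.pi)     # random position angle
    e1 = e_mag * math.cos(2 * phi) + true_shear[0]
    e2 = e_mag * math.sin(2 * phi) + true_shear[1]
    galaxies.append((e1, e2))

# Intrinsic shapes average away; the coherent shear survives.
g1_est = sum(e1 for e1, _ in galaxies) / len(galaxies)
g2_est = sum(e2 for _, e2 in galaxies) / len(galaxies)

print(f"estimated shear: ({g1_est:+.3f}, {g2_est:+.3f})")
```

With a few hundred thousand galaxies the recovered shear lands within a fraction of a percent of the input value, which is why surveys of this kind need such large galaxy samples.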

Richard Ellis, Steele Professor of Astronomy at Caltech, explains that the analysis utilized the remarkable phenomenon of gravitational lensing, first predicted by Einstein and now a major tool of cosmological research.

"The depth of the COSMOS image and the superior resolution of Hubble Space Telescope are the key ingredients enabling this detailed map," adds Ellis. "The COSMOS field also covers a wide enough area for the large-scale filamentary structure to be clearly evident."

"The unique advance in our work is that we have made a three-dimensional map," adds Jason Rhodes of JPL, a coauthor of the study. "Because the distances to the faint background galaxies are known in the COSMOS field, we can examine the distortion as a function of the background distance."

The results also show that several of the early universe's cosmic structures inside the dark matter "scaffolding" are clusters of galaxies in the process of assembly, says Scoville. These structures can be traced over more than 80 million light-years in the COSMOS survey, approximately five times the extent of the nearby Virgo galaxy cluster.

The researchers further found that galaxies in the densest early universe structures have older stellar populations, implying that these galaxies formed first and accumulated the greatest masses in a bottom-up assembly of galaxies. The COSMOS survey shows that galaxies with ongoing star formation, even at the present epoch, dwell in less populated cosmic filaments and voids.

"Both the maturity of the stellar populations and the 'downsizing' of star formation in galaxies vary strongly with the epoch when the galaxies were born as well as their dark-matter environment," says Scoville. His team's paper is to appear in the Astrophysical Journal at a later date.

Extremely deep color images of the two-degree COSMOS field were obtained in 30 nights of observing with the 8.2-meter Subaru telescope in Hawaii. Thousands of galaxies' spectra were obtained by using the European Southern Observatory's Very Large Telescope and the Magellan telescope in Chile. The distances to the galaxies were accurately determined from their redshifts, which were derived from galaxy colors and spectra. The distribution of the normal matter was partly determined with the European Space Agency's XMM-Newton telescope, looking at the hot gases emitting X-rays in the densest clusters.

Robert Tindol

New Type of Black-Hole Explosion Has Astrophysicists Wondering About Its Origin

PASADENA, Calif.—Scientists are announcing this week their detection of a June 14 gamma-ray burst that probably signals a hitherto undetected type of cosmic explosion. The hybrid gamma-ray burst probably created a new black hole, but the details of how the explosion occurred are unclear.

In several companion articles appearing in the December 21 issue of the journal Nature, the researchers present observations of the burst leading them to suggest that the event was a new type of cosmic explosion.

"We're still trying to figure out precisely what caused this event to come about, but its very mystery shows how much we still have to learn about the universe," says Avishay Gal-Yam, an astronomer at the California Institute of Technology, who is lead author of the paper on the Hubble Space Telescope's observations of the event. "The detection certainly speaks well of NASA's commitment to putting up satellites that can study such cataclysmic events as this one in detail."

The burst was discovered by NASA's Swift satellite and has since been studied with over a dozen telescopes, including the Hubble Space Telescope and telescopes at various ground-based observatories.

As with other gamma-ray bursts, this hybrid burst is likely signaling the birth of a new black hole. It is unclear, however, what kind of object or objects exploded or merged to create the black hole or, perhaps, something even more bizarre. The hybrid burst exhibits properties of the two known classes of gamma-ray bursts yet possesses features that cannot be explained.

"We have lots of data on this, dedicated lots of observation time, and we just can't figure out what exploded," said Neil Gehrels of NASA Goddard Space Flight Center in Greenbelt, Maryland, lead author of another of the Nature reports. "All the data seem to point to a new, but perhaps not so uncommon, kind of cosmic explosion."

Gamma-ray bursts represent the most powerful known explosions in the universe. Yet they are random and fleeting, never to appear twice, and only in recent years has their nature been revealed.

Gamma-ray bursts fall into two categories, long and short. The long bursts are longer than two seconds and appear to be from the core collapse of massive stars forming a black hole. Most of these bursts come from the edge of the visible universe. The short bursts, under two seconds and often lasting just a few milliseconds, appear to be the merger of two neutron stars or a neutron star with a black hole, which subsequently create a new or bigger black hole.
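The two-second divide described above amounts to a one-line rule, which makes GRB 060614's contradiction easy to state: its duration puts it on one side of the line while its other properties point to the other. A minimal sketch (the function name is ours, not the astronomers'):

```python
# Toy classifier for the two GRB duration classes described above.
# Two seconds is the conventional divide between "short" (merger)
# and "long" (core-collapse) bursts.

def classify_grb(duration_s: float) -> str:
    """Classify a gamma-ray burst by its duration in seconds."""
    return "long" if duration_s > 2.0 else "short"

# GRB 060614 lasted 102 seconds: solidly "long" by duration,
# yet it showed merger-like properties and no supernova.
print(classify_grb(102))    # duration places it in long-burst territory
print(classify_grb(0.05))   # a millisecond-scale short burst
```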

The hybrid burst, called GRB 060614 after the date it was detected, was discovered in the constellation Indus. The burst lasted for 102 seconds, placing it soundly in long-burst territory. But the burst lacked the hallmark of a supernova, or star explosion, commonly seen shortly after long bursts. Also, the burst's host galaxy has a low star-formation rate with few massive stars that could produce supernovae and long gamma-ray bursts.

"This was close enough to detect a supernova if it existed," added Gal-Yam. "Even Hubble didn't see anything."

Certain properties of the burst concerning its brightness and the arrival times of photons of various energies, known as the lag-luminosity relationship, suggest that the burst behaved more like a short burst (from a merger) than a long burst. Yet no theoretical model of mergers can support a sustained release of gamma-ray energy for 102 seconds.

"This is brand new territory; we have no theories to guide us," said Gehrels.

Scientists remain divided on whether this was a long short burst from a merger or a long burst from a star explosion with no supernova for whatever reason. Most conclude, however, that some new process must be at play: either the model of mergers creating second-long bursts needs a major overhaul, or the progenitor star from an explosion is intrinsically different from the kind that make supernovae.

"While we don't yet know what GRB 060614 was, we have learned that the simple picture we had before, with long GRBs coming from supernova explosions and short GRBs from mergers of neutron stars, cannot be the whole story. This hybrid burst is telling us that we have at least one more mystery to solve," concludes Gal-Yam.

Robert Tindol

Physicists Set New Record for Network Data Transfer

TAMPA, Florida—An international team of physicists, computer scientists, and network engineers led by the California Institute of Technology, CERN, and the University of Michigan, with partners at the University of Florida and Vanderbilt and participants from Brazil (Rio de Janeiro State University, UERJ, and the State Universities of São Paulo, USP and UNESP) and Korea (Kyungpook National University, KISTI), joined forces to set new records for sustained data transfer between storage systems during the SuperComputing 2006 (SC06) Bandwidth Challenge (BWC).

The high-energy physics team's demonstration of "High Speed Data Gathering, Distribution and Analysis for Physics Discoveries at the Large Hadron Collider" achieved a peak throughput of 17.77 gigabits per second (Gbps) between clusters of servers on the show floor and at Caltech. Following the rules set for the SC06 Bandwidth Challenge, the team used a single 10-Gbps link provided by National Lambda Rail that carried data in both directions. Sustained throughput throughout the night prior to the bandwidth challenge exceeded 16 Gbps (or two gigabytes per second), using just 10 pairs of small servers sending data at nine Gbps from Tampa to Caltech, and eight pairs of servers sending seven Gbps of data in the reverse direction.

One of the key advances in this demonstration was Fast Data Transport (FDT), a Java application developed by Iosif Legrand of Caltech that runs on all major platforms and uses the NIO libraries to achieve stable disk reads and writes coordinated with smooth data flow across the long-range network. FDT streams a large set of files across an open TCP socket, so that a large data set composed of thousands of files, as is typical in high-energy physics applications, can be sent or received at full speed, without the network transfer restarting between files. By combining FDT with FAST TCP, developed by Steven Low of Caltech's computer science department, together with an optimized Linux kernel provided by Shawn McKee of Michigan known as the "UltraLight kernel," the team reached unprecedented throughput levels, limited only by the speeds of the disks, corresponding to nine GBytes/sec reading from, or five GBytes/sec writing to, a single rack of 40 low-cost servers.
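FDT itself is a Java/NIO application, but the core idea (streaming many files through one persistent TCP connection, so the transfer never restarts between files) can be sketched in a few lines. The length-prefixed framing below is an illustrative stand-in, not FDT's actual wire protocol:

```python
# Sketch of the FDT idea: stream many files over ONE open TCP socket,
# with simple length-prefixed framing so the receiver can split them
# back apart. (Illustrative only; not FDT's real wire protocol.)
import os
import socket
import struct

def send_files(sock: socket.socket, paths: list[str]) -> None:
    """Send each file as [8-byte length][bytes], back to back on one socket."""
    for path in paths:
        size = os.path.getsize(path)
        sock.sendall(struct.pack("!Q", size))     # 8-byte big-endian length
        with open(path, "rb") as f:
            while chunk := f.read(1 << 20):       # 1 MiB reads
                sock.sendall(chunk)

def recv_files(sock: socket.socket, count: int) -> list[bytes]:
    """Receive `count` length-prefixed files from the socket."""
    def read_exact(n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            part = sock.recv(n - len(buf))
            if not part:
                raise ConnectionError("socket closed mid-transfer")
            buf += part
        return buf

    files = []
    for _ in range(count):
        (size,) = struct.unpack("!Q", read_exact(8))
        files.append(read_exact(size))
    return files
```

Because the connection stays open across the whole file set, TCP never drops back to slow-start between files, which is what allows thousands of small files to move at the same sustained rate as one large one.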

Overall, this year's demonstration, following the team's record memory-to-memory transfer rate of 151 Gbps using 22 10-Gbps links last year at SuperComputing 2005, represents a major milestone in providing practical, widely deployable applications. These applications exploit advances in state-of-the-art TCP-based data transport, servers (Intel Woodcrest-based systems) and the Linux kernel over the last 12 months. FDT also represents a clear advance in basic data transport capability over wide-area networks compared to last year, in that 20 Gbps could be sustained in a few streams memory-to-memory over long distances very stably for many hours, using a single 10-Gigabit Ethernet link very close to full capacity in both directions.

The two largest physics collaborations at the LHC, CMS and ATLAS, each encompass more than 2,000 physicists and engineers from 170 universities and laboratories. In order to fully exploit the potential for scientific discoveries, the many Petabytes of data produced by the experiments will be processed, distributed, and analyzed using a global Grid. The key to discovery is the analysis phase, where individual physicists and small groups repeatedly access, and sometimes extract and transport, Terabyte-scale data samples on demand, in order to optimally select the rare "signals" of new physics from potentially overwhelming "backgrounds" from already-understood particle interactions. This data will amount to many tens of Petabytes in the early years of LHC operation, rising to the Exabyte range within the coming decade.

The high-energy physics team also carried out several other demonstrations, making good use of the ten wide-area network links connected to the Caltech/CERN booth:

- Vanderbilt demonstrated the capabilities of L-Store, an integrated system that provides a single file-system image across many storage "depots" consisting of compact data servers distributed across wide-area networks. Reading and writing between sets of servers at the Vanderbilt booth at SC06 and Caltech, the team achieved a throughput of more than one GByte/sec.

- By using five of the 10 10-Gbps links coming into SC06, the team reached an aggregate throughput of more than 75 Gbps, combining disk-to-disk and memory-to-memory transfers. During this part of the demonstrations, the links between Tampa and Jacksonville, the National Lambda Rail FrameNet links, and the newly commissioned Atlantic Wave link were often loaded to full capacity at 10 Gbps in both directions, as shown on the SCInet network "weathermap."

- Of particular note was the use of FDT between Tampa and Daegu in South Korea, allowing the group from Kyungpook National University and KISTI to achieve 8.6 Gbps disk-to-disk over a single network path, using NLR's shared PacketNet via Atlanta and the GLORIAD link between Seattle and Daejeon that was inaugurated in September 2005, shortly before SC05.

Professor Harvey Newman of Caltech, head of the HEP team and US CMS Collaboration Board Chair, who originated the LHC Data Grid Hierarchy concept, said, "These demonstrations allowed us to thoroughly field-test a new class of data-transport applications, together with the real-time analysis of some of the data using 'ROOTlets,' a distributed form of the ROOT system that is an essential element of high-energy physicists' arsenal of tools for large-scale data analysis.

"These demonstrations provided a new, more agile and flexible view of the globally distributed Grid system of more than 100 laboratory- and university-based computing facilities that is now being commissioned in the U.S., Europe, Asia, and Latin America in preparation for the next generation of high-energy physics experiments at CERN's Large Hadron Collider (LHC) that will begin operation in November 2007, along with several hundred computing clusters serving individual groups of physicists. By substantially reducing the difficulty of transporting Terabyte- and larger scale data sets among the sites, we are enabling physicists throughout the world to have a much greater role in the next round of physics discoveries expected soon after the LHC starts."

David Foster, head of Communications and Networking at CERN said, "The efficient use of high-speed networks to transfer large data sets is an essential component of CERN's LCG plans to deploy computing infrastructure that will enable the LHC experiments to carry out their scientific missions. This demonstration of the high-speed transfer of physics event samples and their analysis made use of equipment at Tampa, Caltech, CERN, and elsewhere, interconnected by the same network infrastructure CERN plans to use in production, and was an important milestone on the road to ensuring full capability when the LHC starts operations in 2007."

Iosif Legrand, senior software and distributed system engineer at Caltech and the technical coordinator for the MonALISA and FDT projects, said, "We demonstrated a realistic, worldwide deployment of distributed, data-intensive applications capable of effectively using and coordinating network resources. A distributed agent-based system was used for dynamic discovery of resources and to monitor, configure, control, and orchestrate efficient data transfer between several hundred computers using hybrid networks."

Richard Cavanaugh of the University of Florida, technical coordinator of the UltraLight project that is developing the next generation of network-integrated grids aimed at LHC data analysis, said, "Future optical networks incorporating multiple 10-Gbps links are the foundation of the Grid system that will drive scientific discoveries at the LHC. A 'hybrid' network integrating both traditional switching and routing of packets and dynamically constructed optical paths to support the largest data flows is a central part of the near-term future vision that the scientific community has adopted to meet the challenges of data-intensive science in many fields.

"By demonstrating that many 10-Gbps wavelengths can be used efficiently over continental and transoceanic distances (often in both directions simultaneously), the high-energy physics team showed that this vision of a worldwide dynamic Grid supporting many Terabyte and larger data transactions is practical."

Shawn McKee, associate research scientist in the University of Michigan department of physics and leader of the UltraLight network technical group, said, "This achievement is an impressive example of what a focused network effort can accomplish. It is an important step towards the goal of delivering a highly capable end-to-end network-aware system and architecture that meet the needs of next-generation e-Science."

Paul Sheldon of Vanderbilt University, who leads the NSF-funded Research and Education Data Depot Network (REDDnet) project that will deploy a distributed storage infrastructure of about 700TB over the next two years, commented on the innovative network storage technology that helped the group achieve such high performance in wide-area, disk-to-disk transfers.

"With IBP and the logistical network technology that Micah Beck and his group at Tennessee have developed, we were able to build middleware, L-Store, that can exploit a tremendous amount of parallelism, both in data transfers across the network and in reading and writing to disk," said Sheldon. "And since L-Store can also do efficient erasure coding in software with minimal data movement, we can build high-quality storage clusters out of commodity parts and push depot costs down to a thousand dollars a TB.

"When you combine this network-storage technology, including its cost profile, with the remarkable tools that Harvey Newman's networking team has produced, I think we are well positioned to address the incredible infrastructure demands that the LHC experiments are going to make on our community worldwide."

The team hopes this new demonstration will encourage scientists and engineers in many sectors of society to develop and plan to deploy a new generation of revolutionary Internet applications. Multigigabit/s end-to-end network performance will empower scientists to form "virtual organizations" on a planetary scale, sharing their collective computing and data resources in a flexible way. In particular, this is vital for projects on the frontiers of science and engineering, in "data-intensive" fields such as particle physics, astronomy, bioinformatics, global climate modeling, geosciences, fusion, and neutron science.

The new bandwidth record was achieved through extensive use of the SCInet network infrastructure at SC06. The team used all 10 of the 10-Gbps links coming into the showfloor, connected to two Cisco Systems Catalyst 6500 Series Switches at the Caltech/CERN booth, together with computing clusters provided by Hewlett Packard and a large number of 10-gigabit Ethernet server interfaces provided by Neterion and Myricom.

The 10 10-Gbps network connections included two National Lambda Rail FrameNet links, one to Los Angeles (the official BWC wavelength) and one to StarLight; two NLR PacketNet links used from Korea over GLORIAD and from the University of Michigan over MiLR; two links provided by Internet2's Abilene network used to carry traffic from Caltech and UMICH; one link from ESnet used from Brookhaven National Laboratory; and one link from FLRNET used to carry traffic from Brazil over the CHEPREO/WREN-LILA link. In addition, one link provided by AtlanticWave from NYC to DC and Miami carried traffic from CERN coming over the USLHCNet NYC-Geneva circuit, and one link was provided by UltraLight/FLR from Jacksonville.

The UltraLight/FLR circuit was the only direct WAN circuit available at SC06 and terminated directly on the Caltech equipment on the showfloor. All other circuits were connected through the SCInet infrastructure. During the test, several of the network links were shown to operate at full capacity for sustained periods. The network has been deployed through exceptional support by Cisco Systems and Nortel, as well as the network engineering staffs of National LambdaRail, Florida Lambda Rail, Internet2, ESnet, TeraGrid, CENIC, MiLR, Atlantic Wave, AMPATH, RNP and ANSP/FAPESP in Brazil, KISTI in Korea, the Starlight international peering point in Chicago, and MANLAN in New York.

As part of the SC06 demonstration, a distributed analysis of simulated LHC physics data was carried out using the Grid-enabled Analysis Environment (GAE) developed at Caltech for the LHC. This demonstration involved the use of the Clarens Web Services portal developed at Caltech, the use of ROOT-based analysis software, and numerous architectural components developed in the framework of Caltech's "Grid Analysis Environment." The analysis made use of a new component in the Grid system: "ROOTlets" hosted by Clarens servers. Each ROOTlet is a full instantiation of CERN's ROOT tool, created on demand by the distributed clients in the Grid.

The design and deployment of the ROOTlets/Clarens system was carried out under the auspices of an STTR grant for collaboration between Deep Web Technologies of New Mexico, Caltech, and Indiana University. In addition to the ROOTlets/Clarens demonstration, an innovative literature and database aggregation search tool, designed specifically for scientists working in the field of particle physics and developed by Deep Web Technologies, was shown. This aggregation tool allowed simultaneous queries to be made on several of the most popular document databases, with the results aggregated and presented to the user in a homogeneous fashion. Deep Web's aggregation system also powers the website.

The team used Caltech's MonALISA (MONitoring Agents using a Large Integrated Services Architecture) system to monitor and display the real-time data for all the network links used in the demonstration. MonALISA is a dynamic, distributed service system capable of collecting any type of information from different systems, analyzing it in near-real time, and providing support for automated control decisions and global optimization of workflows in complex grid systems. It is currently used to monitor more than 300 sites, more than 50,000 computing nodes, and tens of thousands of concurrent jobs running for different grid systems and scientific communities.

MonALISA is a highly scalable set of autonomous, self-describing, agent-based subsystems that collaborate and cooperate in performing a wide range of monitoring tasks for networks and Grid systems, as well as for the scientific applications themselves.

Vanderbilt demonstrated the capabilities of its L-Store middleware, a scalable, open-source, and wide-area-capable form of storage virtualization that builds on the Internet Backplane Protocol (IBP) and logistical networking technology developed at the University of Tennessee. Offering both scalable metadata management and software-based fault tolerance, L-Store creates an integrated system that provides a single file-system image across many IBP storage "depots" distributed across wide-area and/or local-area networks. Reading and writing between sets of depots at the Vanderbilt booth at SC06 and Caltech in California, the team achieved a network throughput, disk to disk, of more than one GByte/sec. On the show floor, the team was able to sustain throughputs of 3.5 GByte/sec between a rack of client computers and a rack of storage depots; these two racks communicated across SCinet via four 10-GigE connections.

The demonstration and the developments leading up to it were made possible through the strong support of the U.S. Department of Energy Office of Science and the National Science Foundation, in cooperation with the funding agencies of the international partners.


About Caltech: With an outstanding faculty, including five Nobel laureates, and such off-campus facilities as the Jet Propulsion Laboratory, Palomar Observatory, and the W. M. Keck Observatory, the California Institute of Technology is one of the world's major research centers. The Institute also conducts instruction in science and engineering for a student body of approximately 900 undergraduates and 1,300 graduate students who maintain a high level of scholarship and intellectual achievement. Caltech's 124-acre campus is situated in Pasadena, California, a city of 135,000 at the foot of the San Gabriel Mountains, approximately 30 miles inland from the Pacific Ocean and 10 miles northeast of the Los Angeles Civic Center. Caltech is an independent, privately supported university, and is not affiliated with either the University of California system or the California State Polytechnic universities.

About CACR: Caltech's Center for Advanced Computing Research (CACR) performs research and development on leading-edge networking and computing systems, and methods for computational science and engineering. Some current efforts at CACR include the National Virtual Observatory, ASC Center for Simulation of Dynamic Response of Materials, Computational Infrastructure for Geophysics, Cascade High Productivity Computing System, and the TeraGrid.

About CERN: CERN, the European Organization for Nuclear Research, has its headquarters in Geneva. At present, its member states are Austria, Belgium, Bulgaria, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, Slovakia, Spain, Sweden, Switzerland, and the United Kingdom. Israel, Japan, the Russian Federation, the United States of America, Turkey, the European Commission, and UNESCO have observer status.

About Netlab: Caltech's Networking Laboratory, led by Professor Steven Low, develops FAST TCP. The group does research in the control and optimization of protocols and networks, and designs, analyzes, implements, and experiments with new algorithms and systems.

About the University of Michigan: The University of Michigan, with its size, complexity, and academic strength, the breadth of its scholarly resources, and the quality of its faculty and students, is one of America's great public universities and one of the world's premier research institutions. The university was founded in 1817 and has a total enrollment of 54,300 on all campuses. The main campus is in Ann Arbor, Michigan, and has 39,533 students (fall 2004). With over 600 degree programs and $739M in FY05 research funding, the university is one of the leaders in innovation and research.

About the University of Florida: The University of Florida (UF), located in Gainesville, is a major public, comprehensive, land-grant, research university. The state's oldest, largest, and most comprehensive university, UF is among the nation's most academically diverse public universities. It has a long history of established programs in international education, research, and service and has a student population of approximately 49,000. UF is the lead institution for the GriPhyN and iVDGL projects and is a Tier-2 facility for the CMS experiment.

About StarLight: StarLight is an advanced optical infrastructure and proving ground for network services optimized for high-performance applications. Operational since summer 2001, StarLight is a 1 GE and 10 GE switch/router facility for high-performance access to participating networks and also offers true optical switching for wavelengths. StarLight is being developed by the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago (UIC), the International Center for Advanced Internet Research (iCAIR) at Northwestern University, and the Mathematics and Computer Science Division at Argonne National Laboratory, in partnership with Canada's CANARIE and the Netherlands' SURFnet. STAR TAP and StarLight are made possible by major funding from the U.S. National Science Foundation to UIC. StarLight is a service mark of the Board of Trustees of the University of Illinois.

About UERJ (Rio de Janeiro): Founded in 1950, the Rio de Janeiro State University (UERJ) ranks among the ten largest universities in Brazil, with more than 23,000 students. UERJ's five campuses are home to 22 libraries, 412 classrooms, 50 lecture halls and auditoriums, and 205 laboratories. UERJ is responsible for important public welfare and health projects through its centers of medical excellence, the Pedro Ernesto University Hospital (HUPE) and the Piquet Carneiro Day-care Policlinic Centre, and it is committed to the preservation of the environment. The UERJ High Energy Physics group includes 15 faculty, postdoctoral, and visiting Ph.D. physicists and 12 Ph.D. and master's students, working on experiments at Fermilab (D0) and CERN (CMS). The group has constructed a Tier-2 center to enable it to take part in the Grid-based data analysis planned for the LHC, and has originated the concept of a Brazilian "HEP Grid," working in cooperation with USP and several other universities in Rio and São Paulo.

About UNESP (São Paulo): Created in 1976 through the administrative union of several isolated institutes of higher education in the State of São Paulo, the São Paulo State University, UNESP, has 39 institutes in 23 different cities in the State of São Paulo. The university has 31,000 undergraduate students in 168 different courses and almost 12,000 graduate students. Since 1999 the university has had a group participating in the DZero Collaboration at Fermilab; this group operates the São Paulo Regional Analysis Center (SPRACE) and is now a member of the CMS Collaboration at CERN.

About USP (São Paulo): The University of São Paulo, USP, is the largest institution of higher education and research in Brazil, and the third largest in Latin America. Most of its 35 units are located on the main campus in the state capital. It has around 40,000 undergraduate students and around 25,000 graduate students. It is responsible for almost 25 percent of all Brazilian papers and publications indexed by the Institute for Scientific Information (ISI). The SPRACE cluster is located at the Physics Institute.

About Kyungpook National University (Daegu): Kyungpook National University is one of the leading universities in Korea, especially in physics and information science. The university has 13 colleges and 9 graduate schools with 24,000 students. It houses the Center for High Energy Physics (CHEP), in which most Korean high-energy physicists participate. CHEP was approved as one of the designated Excellent Research Centers supported by the Korean Ministry of Science.

About GLORIAD: GLORIAD (GLObal RIng network for Advanced application development) is the first round-the-world high-performance ring network, jointly established by Korea, the United States, Russia, China, Canada, the Netherlands, and the Nordic countries, with optical networking tools that improve networked collaboration for e-Science and Grid applications. It is currently constructing a dedicated lightwave link connecting scientific organizations in the GLORIAD partner countries.

About Vanderbilt: One of America's top 20 universities, Vanderbilt University is a private research university of 6,319 undergraduates and 4,566 graduate and professional students. The university comprises 10 schools, a public policy institute, a distinguished medical center, and the Freedom Forum First Amendment Center. Located a mile and a half southwest of downtown Nashville, the campus occupies a park-like setting. Buildings on the original campus date to its founding in 1873, and the Peabody section of campus has been registered a National Historic Landmark since 1966. Vanderbilt ranks 23rd in the value of federal research grants awarded to faculty members, according to the National Science Foundation.

About CHEPREO: Florida International University (FIU), in collaboration with partners at Florida State University, the University of Florida, and the California Institute of Technology, has been awarded an NSF grant to create and operate an interregional Grid-enabled Center for High-Energy Physics Research and Educational Outreach (CHEPREO) at FIU. CHEPREO encompasses an integrated program of collaborative physics research on CMS, network infrastructure development, and educational outreach at one of the largest minority universities in the U.S. The center is funded by four NSF directorates: Mathematical and Physical Sciences; Scientific Computing Infrastructure; Elementary, Secondary and Informal Education; and International Programs.

About Internet2®: Led by more than 200 U.S. universities working with industry and government, Internet2 develops and deploys advanced network applications and technologies for research and higher education, accelerating the creation of tomorrow's Internet. Internet2 recreates the partnerships among academia, industry, and government that helped foster today's Internet in its infancy.

About the Abilene Network: Abilene, developed in partnership with Qwest Communications, Juniper Networks, Nortel Networks, and Indiana University, provides nationwide high-performance networking capabilities for more than 225 universities and research facilities in all 50 states, the District of Columbia, and Puerto Rico.

About National LambdaRail: National LambdaRail (NLR) is a major initiative of U.S. research universities and private-sector technology companies to provide a national-scale infrastructure for research and experimentation in networking technologies and applications. NLR puts the control, the power, and the promise of experimental network infrastructure in the hands of the nation's scientists and researchers.

About the Florida LambdaRail: Florida LambdaRail LLC (FLR) is a Florida limited liability company formed by member higher education institutions to advance optical research and education networking within Florida. Florida LambdaRail is a high-bandwidth optical network that links Florida's research institutions and provides a next-generation network in support of large-scale research, education outreach, public/private partnerships, and the information technology infrastructure essential to Florida's economic development.

About CENIC: CENIC is a not-for-profit corporation serving the California Institute of Technology, California State University, Stanford University, the University of California, the University of Southern California, California Community Colleges, and the statewide K-12 school system. CENIC's mission is to facilitate and coordinate the development, deployment, and operation of a set of robust, multitiered, advanced network services for this research and education community.

About ESnet: The Energy Sciences Network (ESnet) is a high-speed network serving thousands of Department of Energy scientists and collaborators worldwide. A pioneer in providing high-bandwidth, reliable connections, ESnet enables researchers at national laboratories, universities, and other institutions to communicate with each other using the collaborative capabilities needed to address some of the world's most important scientific challenges. Managed and operated by the ESnet staff at Lawrence Berkeley National Laboratory, ESnet provides direct high-bandwidth connections to all major DOE sites, multiple cross connections with Internet2/Abilene, and connections to Europe via GEANT and to Japan via SuperSINET, as well as fast interconnections to more than 100 other networks. Funded principally by DOE's Office of Science, ESnet services allow scientists to make effective use of unique DOE research facilities and computing resources, independent of time and geographic location.

About AMPATH: Florida International University's Center for Internet Augmented Research and Assessment (CIARA) has developed an international, high-performance research connection point in Miami, Florida, called AMPATH (AMericasPATH). AMPATH's goal is to enable wide-bandwidth digital communications between U.S. and international research and education networks, as well as a variety of U.S. research programs in the region. AMPATH in Miami acts as a major international exchange point (IXP) for the research and education networks in South America, Central America, Mexico, and the Caribbean. The AMPATH IXP is home to the WHREN-LILA high-performance network link connecting Latin America to the U.S., funded by NSF award #0441095, and the Academic Network of São Paulo (award #2003/13708-0).

About the Academic Network of São Paulo (ANSP): ANSP unites São Paulo's university networks with scientific and technological research centers in São Paulo, and is managed by the State of São Paulo Research Foundation (FAPESP). The ANSP Network is another example of international collaboration and exploration. Through its connection to WHREN-LILA, all of the institutions connected to ANSP will be involved in research with U.S. universities and research centers, offering significant contributions and the potential to develop new applications and services. This connectivity will give researchers access to higher-quality data, strengthening the scientific work built on it.

About RNP: RNP, the National Education and Research Network of Brazil, is a not-for-profit company that promotes the innovative use of advanced networking, with the joint support of the Ministry of Science and Technology and the Ministry of Education. In the early 1990s, RNP was responsible for the introduction and adoption of Internet technology in Brazil. Today, RNP operates a nationally deployed multi-gigabit network used for collaboration and communication in research and education throughout the country, reaching all 26 states and the Federal District, and provides both commodity and advanced research Internet connectivity to more than 300 universities, research centers, and technical schools.

About AtlanticWave: The AtlanticWave service, officially launched by the Southeastern Universities Research Association (SURA) and a group of collaborating not-for-profit organizations, is a distributed international research network exchange and peering facility along the Atlantic coast of North and South America. The main goal of AtlanticWave is to facilitate research and education (R&E) collaborations between U.S. and Latin American institutions.

AtlanticWave will provide R&E network exchange and peering services for existing networks that interconnect at key exchange points along the Atlantic Coast of North and South America, including MAN LAN in New York City, MAX GigaPOP and NGIX-East in Washington D.C., SoX GigaPOP in Atlanta, AMPATH in Miami, and the São Paulo, Brazil, exchange point operated by the Academic Network of São Paulo (ANSP). AtlanticWave supports the GLIF (Global Lambda Integrated Facility) Open Lightpath Exchange (GOLE) model.

About KISTI: KISTI (Korea Institute of Science and Technology Information) is a national institute under the supervision of MOST (Ministry of Science and Technology) of Korea. It plays a leading role in building the nationwide infrastructure for advanced application research by linking supercomputing resources with the optical research network KREONet2. The National Supercomputing Center at KISTI is carrying out national e-Science and Grid projects, as well as the GLORIAD-KR project, and aims to become the country's leading institution for e-Science and advanced network technologies.

About Hewlett Packard: HP is a technology solutions provider to consumers, businesses, and institutions globally. The company's offerings span IT infrastructure, global services, business and home computing, and imaging and printing. HP is listed on the NYSE and Nasdaq under the symbol HPQ.

About Neterion, Inc.: Founded in 2001, Neterion Inc. has locations in Cupertino, California, and Ottawa, Canada. Neterion delivers 10-Gigabit Ethernet hardware and software solutions that solve customers' high-end networking problems. The Xframe® line of products is based on Neterion-developed technologies that deliver new levels of performance, availability, and reliability in the data center. Xframe, Xframe II, and Xframe E include full IPv4 and IPv6 support and comprehensive stateless offloads that preserve the integrity of current TCP/IP implementations without "breaking the stack." Xframe drivers are available for all major operating systems, including Microsoft Windows, Linux, Hewlett-Packard's HP-UX, IBM's AIX, Sun's Solaris, and SGI's IRIX. Neterion has raised over $42M in funding, with its latest C round taking place in June 2004. Formerly known as S2io, the company changed its name to Neterion in January 2005.

About Myricom: Founded in 1994, Myricom, Inc., created Myrinet, the High-Performance Computing (HPC) interconnect technology used in thousands of computing clusters in more than 50 countries, and in far more systems on the TOP500 Supercomputer list than any other low-latency interconnect. With its new Myri-10G solutions, Myricom achieves a convergence at 10-Gigabit data rates between its low-latency Myrinet technology and mainstream Ethernet. Myri-10G bridges the gap between the rigorous demands of traditional HPC and the growing need for affordable computing speed in enterprise data centers. Myricom solutions are sold direct and through channels. Myri-10G clusters are supplied by OEM computer companies including IBM, HP, Dell, and Sun, and by leading cluster integrators worldwide. Privately held and based in Arcadia, California, Myricom achieved and has sustained profitability since 1995 with eleven consecutive profitable years through 2005.

About the National Science Foundation: The NSF is an independent federal agency created by Congress in 1950 "to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense...." With an annual budget of about $5.5 billion, it is the funding source for approximately 20 percent of all federally supported basic research conducted by America's colleges and universities. In many fields such as mathematics, computer science, and the social sciences, NSF is the major source of federal backing.

About the DOE Office of Science: DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the nation and ensures U.S. world leadership across a broad range of scientific disciplines. The Office of Science also manages 10 world-class national laboratories with unmatched capabilities for solving complex interdisciplinary problems, and it builds and operates some of the nation's most advanced R&D user facilities, located at national laboratories and universities. These facilities are used by more than 19,000 researchers from universities, other government agencies, and private industry each year.

Robert Tindol

Engineers Devise New Method of Chemical Vapor Deposition for Smaller Nanostructures

PASADENA, Calif.—Engineers at the California Institute of Technology have invented an ingenious new method for depositing tiny amounts of materials on surfaces. The researchers say that the technique, known as plasmon-assisted chemical vapor deposition, will add a powerful new tool to the existing battery of techniques used to construct microdevices.

In the current issue of the journal Nano Letters, research scientist David Boyd and his colleagues at Caltech, Stanford University, and New York University report that the new vapor deposition process can be used with a variety of materials by focusing a low-powered laser beam onto a substrate coated with gold nanoparticles. The laser wavelength is chosen to match a natural resonance in the gold particles, and those particles in the small spot illuminated by the laser (about one micron in diameter, or less than a hundredth the diameter of a human hair) absorb energy from the laser and quickly heat up, rising in temperature several hundred degrees.

The gold particles become hot enough to decompose precursor molecules in the gas that strike them, forming microscopic deposits on the nanoparticles. Since this only happens for the hot gold particles in the laser spot, and not for the nearby cool ones outside the laser spot, structures form only where the laser shines, allowing deposition in patterns "drawn" by moving the laser spot on the substrate.

The key to the process is the surprisingly low thermal conductivity at the tiny scales involved, explains Boyd. The nanoparticles absorb energy from the laser very efficiently, but are much worse than bigger particles would be at getting rid of this energy by conducting heat away to the surroundings. As a result, the nanoparticles can be heated to temperatures much higher than expected based on classical concepts of heat conduction.
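The scale of this effect can be sketched with the textbook continuum formula for a sphere dissipating heat into a uniform medium, ΔT = P / (4πκa). The particle size, absorbed power, and effective conductivity below are illustrative assumptions, not values from the paper; the article's point is that real nanoparticles run even hotter than such classical estimates suggest.

```python
import math

def sphere_temperature_rise(power_w, radius_m, kappa_w_mk):
    """Classical steady-state temperature rise of a small heated sphere.

    A sphere of radius a dissipating power P into a uniform medium of
    thermal conductivity kappa settles at dT = P / (4 * pi * kappa * a).
    """
    return power_w / (4.0 * math.pi * kappa_w_mk * radius_m)

# Illustrative (assumed) numbers: a 50 nm radius gold particle absorbing
# 100 microwatts of laser power, surrounded by glass (kappa ~ 1 W/m/K).
dT = sphere_temperature_rise(100e-6, 50e-9, 1.0)
print(f"classical estimate: dT ~ {dT:.0f} K")  # ~159 K
```

Even this idealized estimate shows how microwatts, trivial for a laser-pointer-class source, can heat a nanoparticle by hundreds of degrees, because the tiny radius appears in the denominator.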

The process is simple to implement and requires only a small laser, about as powerful as a green laser pointer, says David Goodwin, a professor of mechanical engineering and applied physics at Caltech and a coauthor of the paper. The ability to write micron-scale or smaller structures directly, without need for lithographic patterning and etching, while also keeping the substrate cool outside the small laser spot, opens up new possibilities for the types of structures that may be easily fabricated, explains Goodwin.

To demonstrate the technique, the researchers grew lead oxide nanowires, as small as a few tens of nanometers in diameter, on a glass substrate. Their results show promise that microdevices can be constructed at even smaller scales in the future.

So far, the team has deposited titanium oxide, lead oxide, and cerium oxide in lab experiments, but says that many other materials should work just as well.

"Anything that can be deposited as a film by conventional means can probably be deposited with this technique," Boyd says.

The other authors of the paper are Leslie Greengard, of New York University's Courant Institute of Mathematical Sciences; Mark Brongersma, of Stanford University's department of materials science and engineering (and a former Caltech postdoc in applied physics); and Mohamed Y. El-Naggar, a former graduate student in mechanical engineering at Caltech, now doing postdoctoral work at USC.

The article is available online.


Robert Tindol

Caltech Professor Receives German Award for Laser Innovations

PASADENA, Calif.—H. Jeff Kimble, Valentine Professor and professor of physics at the California Institute of Technology, has been chosen by the German foundation Berthold Leibinger Stiftung as the initial recipient of its new Berthold Leibinger Zukunftspreis ("Future Prize").

To be awarded every two years, this prize is intended to honor "outstanding milestones in research" related to laser light and carries a prize of 20,000 euros (approximately $25,000). The jury recognized Kimble "for his groundbreaking experiments in the field of cavity quantum electrodynamics," which form "an essential foundation for quantum information technology . . . a key technology of the 21st century."

Kimble's group studies the quantum mechanics of open systems. While his experiments are basic investigations of the nature of the interaction of light and matter, Kimble takes particular interest in transforming fundamental physical processes into scientific tools for advancing quantum information science, with applications ranging from quantum metrology to the processing and distribution of quantum information.

An example is research related to the realization of complex quantum networks, which would be composed of nodes capable of storing and manipulating quantum mechanical states and channels for linking the nodes together in a fully coherent fashion. The network could have nodes consisting of atoms trapped in optical cavities and channels formed by fiber-optic links carrying single-photon states. Such a "quantum internet" was proposed and analyzed by Kimble and his colleagues in 1997.

In 1995 Kimble's group demonstrated a quantum phase gate for two beams of light, which he described as "a quantum transistor with single photons, which had properties suitable for the implementation of quantum logic and perhaps ultimately for the construction of quantum computers."

More recently his research group has built a single-atom laser and observed photon blockade, where a first photon in an atom-cavity system blocks the passage of a second photon.

Kimble chose cavity quantum electrodynamics as one of the few experimentally viable systems in which "the intrinsic quantum mechanical coupling dominates losses due to dissipation."

Berthold Leibinger Stiftung is a private nonprofit foundation. The science and research portion of its mandate focuses on honoring and promoting innovation in laser technology.


Written by: John Avery

Contact: Jill Perry (626) 395-3226

Visit the Caltech Media Relations website.


Caltech Physicist Goes Postal with Four Images of Snowflakes for Commemorative Stamps

PASADENA, Calif.—Anyone looking for a seasonal postage stamp whose beauty just can't be licked should check out Ken Libbrecht's new Holiday Snowflakes stamps.

This month, the U.S. Postal Service is issuing a set of four commemorative stamps featuring images of snowflakes based on photographs taken by Libbrecht, a professor of physics at the California Institute of Technology. Libbrecht will attend a special dedication ceremony for the new stamps to be held at noon Thursday, October 5, at Madison Square Garden in New York, and the stamps will be available for purchase on Friday. Libbrecht is also the author of a new book, Ken Libbrecht's Field Guide to Snowflakes, a 112-page guide for anyone who wants to know more about the many different types of snow crystals and how to find them.

For several years Libbrecht has been investigating the basic physics of how patterns are created during crystal growth and other simple physical processes. He has delved particularly deeply into a case study of the formation of snowflakes. His research is aimed at better understanding how structures arise in material systems, but it is also visually compelling and, from the start, has been a hit with the public.

"My snowflake website,, is getting about two million hits a year," Libbrecht said last December when the snowflake issue was initially announced.

Libbrecht attributes the site's popularity to its discussion of some very accessible science. "Snowflake patterns are well known. The snowflakes fall right out of the sky, and you don't necessarily need a science background to appreciate the science behind how these ice structures form. It's an especially good introduction to science for younger kids," he says.

Libbrecht began his research by growing synthetic snowflakes in his lab, where they can be created and studied under well-controlled conditions. Precision micro-photography was necessary for this work, and over several years Libbrecht developed some specialized techniques for capturing images of snow crystals. Starting in 2001, he expanded his range to photographing natural snowflakes as well. "A few years ago I mounted my microscope in a suitcase, so I now can take it out into the field," says Libbrecht. "Sometimes I arrange trips to visit colleagues in the frozen north, and other times I arrange extended ski vacations with my family. The most difficult part these days is getting this complex-looking instrument through airport security."

Libbrecht's camera rig is essentially a microscope with a camera attached. The entire apparatus was built on campus and designed specifically for snowflake photography. "Snowflakes are made of ice, which is mostly completely clear, so lighting is an important consideration in this whole business," he says. "I use different types of colored lights shining through the crystals, so the ice structures act like complex lenses to refract the light in different ways. The better the lighting, the more interesting is the final photograph."

The structures of snowflakes are ephemeral, so speed is needed to get good photographs. Within minutes after falling, a snowflake will begin to degrade as its sharper features evaporate away. The complex structures are created as the crystals grow, and when they stop growing, the crystals soon become rounded and more blocky in appearance.

"When photographing in the field, I first let the crystals fall onto a piece of cardboard," says Libbrecht. "Then I find one I like, pick it up using a small paintbrush, and place it on a microscope slide. I then put it under the microscope, adjust the focus and lighting, and take the shot. You need to search through a lot of snowflakes to find the most beautiful specimens."

Libbrecht finds that observing natural snowflakes in the field is an important part of his research, and nicely complements his laboratory work. "I've learned a great deal about crystal growth by studying ice, and have gotten many insights from looking at natural crystals. Nature provides a wonderful variety of snow crystal types to look at, and the crystals that fall great distances are larger than what we can easily grow in the lab."

So where does one find really nice snowflakes? Certainly not in Pasadena, where Caltech is located, but Libbrecht says that certain snowy places are better than others. The snowflakes chosen for the stamps were photographed in Fairbanks, Alaska, in the Upper Peninsula of Michigan, and in Libbrecht's favorite spot—Cochrane, in northern Ontario. "Northern Ontario provides some really excellent specimens to photograph," says Libbrecht. "The temperature is cold, but not too cold, and the weather brings light snow frequently.

"Fairbanks sometimes offers some unusual crystal types, because it's so cold. Warmer climates, for example, in New York State and the vicinity, tend to produce less spectacular crystals." As for the nitty-gritty of snowflake research, probably the question Libbrecht is asked the most is whether the old story about no two snowflakes being exactly alike is really true.

"The answer is basically yes, because there is such an incredibly large number of possible ways to make a complex snowflake," he says. "In many cases, there are very clear differences between snow crystals, but of course there are many similar crystals as well. In the lab we often produce very simple, hexagonal crystals, and these all look very similar."

Libbrecht can grow many different snowflake forms at will in his lab, but says there are still many subtle mysteries in crystal growth that are of interest to physicists who are trying to understand and control the formation of various materials. A real-world application of research on crystals is the growth of semiconductors for our electronic gadgets. These semiconductors are made possible in part by painstakingly controlling how certain substances condense into solid structures.

Lest anyone think that Libbrecht limits his life as a physicist to snowflakes, he is also involved in the Laser Interferometer Gravitational-Wave Observatory (LIGO), an NSF-funded project that seeks to confirm the existence of gravitational waves from exotic cosmic sources such as colliding black holes.

In LIGO, Libbrecht has lots of professional company; in fact, the field was essentially founded by Albert Einstein, who first predicted the existence of gravitational waves as a consequence of general relativity. Kip Thorne and Ron Drever at Caltech, along with Rai Weiss at MIT, were instrumental in initiating the LIGO project in the 1980s.

But in snowflake research, Libbrecht is pretty much a one-man show. And he says there's something about the exclusivity that he likes.

"It suits some of my hermit-like tendencies," comments Libbrecht. "As Daniel Boone once said, if you can smell the smoke of another person's fire, then it's time to move on. My research on snow crystal growth is the one thing I do that simply wouldn't get done otherwise."

Additional information about the 2006 stamps is available online.

Robert Tindol

New Window of Universe Opens at Griffith; Unprecedented Image from Palomar

PASADENA, Calif.--Caltech scientists have produced the largest astronomical image ever in order to inspire the public with the wonders of space exploration. The image has been reproduced as a giant mural in the new exhibit hall of the landmark Griffith Observatory, which will reopen Nov. 3 after several years of renovation.

A team led by Caltech Professor of Astronomy George Djorgovski used data from the Palomar-Quest digital sky survey, an ongoing project at the Samuel Oschin Telescope at Palomar Observatory, which is owned and operated by Caltech. The survey is a joint venture between groups at Caltech and Yale University.

The great cosmic panorama, named The Big Picture, is 152 feet long by 20 feet high, and it covers the entire wall of the Richard and Lois Gunther Depths of Space exhibit hall at Griffith Observatory. It is displayed on 114 steel-backed porcelain enamel plates, expected to last many decades, and it will be viewed by millions of visitors annually.

"We wanted to inspire the public and convey the richness of the deep universe and its understanding, and to do it with a real scientific data set," says Djorgovski. "We are doing research with these data, but there is also a sense of beauty and awe, which is important to communicate, especially to young people."

The image covers only a sliver of the visible sky, less than a thousandth of the celestial sphere, roughly an area your index finger would cover if held about a foot away from your eyes. The entire Palomar-Quest sky survey covers an area about 500 times greater.
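The finger comparison can be checked with small-angle arithmetic. The finger dimensions and viewing distance below are assumptions chosen only to test the order of magnitude.

```python
import math

# Assumed finger size: ~2 cm wide, ~7 cm long, held ~30 cm from the eye.
# Small-angle approximation: angle (radians) ~ size / distance.
width_deg = math.degrees(2.0 / 30.0)
length_deg = math.degrees(7.0 / 30.0)
patch_sq_deg = width_deg * length_deg   # ~51 square degrees

# The whole celestial sphere spans 4*pi steradians, ~41,253 square degrees.
whole_sky_sq_deg = 4 * math.pi * math.degrees(1) ** 2

print(patch_sq_deg / whole_sky_sq_deg)  # ~0.0012, about a thousandth
```

The ratio comes out near one part in a thousand, consistent with the article's "less than a thousandth of the celestial sphere."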

The part of the sky covered by The Big Picture is in the constellation of Virgo, and it spans the core of the Virgo cluster of galaxies, about 60 million light-years away; the light from the brightest galaxies seen in the picture started its journey when dinosaurs ruled the Earth.

"What is perhaps most striking about the image is the wealth of the information in it, and the remarkable diversity of cosmic objects it shows," says Ashish Mahabal, the project scientist for the survey. Aside from the prominent bright galaxies in the Virgo cluster, which dominate the view, the image contains nearly a million much fainter and more distant galaxies; hundreds of thousands of stars in our own galaxy (the Milky Way); a thousand quasars (luminous objects believed to be powered by massive black holes) at distances of up to 12 billion light-years; hundreds of asteroids in our own solar system; and at least one comet.

The data used to construct the image were obtained by the Caltech-Yale team in the course of over 20 nights at the Samuel Oschin Telescope at Palomar in 2004 and 2005. The data were then transferred to Caltech, Yale, and other locations via the High Performance Wireless Research and Education Network. Several hundred gigabytes of raw data were then distilled to produce a 7.4-gigabyte color image, using cutting-edge technology at Caltech's Center for Advanced Computing Research.

"This project illustrates a powerful synergy between modern astronomy and advanced computing, which is increasingly becoming a driving force for both research and education," says Roy Williams, a scientist on the team, and one of the leaders of the U.S. National Virtual Observatory effort. "We plan to use The Big Picture as a magnet and a gateway to learning, not only about the universe, but also about the computing and information technology used to create the mural."

Sky surveys are a large part of the scientific history and legacy of Palomar Observatory starting with the pioneering work of Caltech professor Fritz Zwicky in the 1930s. He used the first such survey to discover numerous supernova explosions, large-scale structures in the universe, and other wonders. A major photographic sky survey conducted in the 1950s at the 48-inch telescope provided the first modern atlas of the sky, guiding many astronomical inquiries. The telescope was later named in honor of Samuel Oschin, the late Los Angeles business leader and philanthropist. Successive surveys at the same telescope, including the current Palomar-Quest project, continue to provide fundamental data sets for astronomy. They have led to numerous important discoveries, ranging from the outer reaches of the solar system to the very distant universe.

In addition to The Big Picture, several exhibits at Griffith have strong connections to Caltech and Palomar, including a model of the Hale 200-inch Telescope, which was a major engineering feat at the time of its construction and has been at the center of many groundbreaking astronomical discoveries for more than half a century.

A Big Picture education/public outreach website will become active following the Griffith Observatory reopening:

The Caltech team that created The Big Picture includes Djorgovski; staff scientists Mahabal, Williams, Matthew Graham, and Andrew Drake; graduate students Milan Bogosavljevic and Ciro Donalek; digital image experts Leslie Maxfield, Simona Cianciulli, and Radica Bogosavljevic; and several staff members at Palomar Observatory and the Center for Advanced Computing Research. Members of the Yale team who contributed data and observations include Charles Baltay, David Rabinowitz, and Nan Elman, graduate student Anne Bauer, and several others. The work was supported mainly by the National Science Foundation. ###

Contacts: S. George Djorgovski, (626) 395-4415; Jill Perry, (626) 395-3226

Visit the Palomar Observatory website at

Visit the Center for Advanced Computing Research website at


"Champagne Supernova" Challenges Ideas about How Supernovae Work

PASADENA, Calif.--An international team of astronomers at the California Institute of Technology, the University of Toronto, and Lawrence Berkeley National Laboratory has discovered a supernova whose progenitor star was more massive than previously believed possible. The finding has experts rethinking their basic understanding of how stars explode as supernovae, according to a paper to be published in Nature on September 21.

The lead author of the study, University of Toronto postdoctoral researcher Andy Howell, identified a Type Ia supernova, named SNLS-03D3bb, in a distant galaxy 4 billion light-years away; it originated from a dense evolved star, termed a "white dwarf," whose mass was far larger than that of any previous example. Type Ia supernovae are thermonuclear explosions that destroy white dwarfs when they accrete matter from a companion star.

The discovery was made possible through images taken as part of a long-term survey for distant supernovae with the Canada France Hawaii Telescope. Follow-up spectroscopy led by Richard Ellis, Steele Family Professor of Astronomy at Caltech, with the 10-meter Keck Telescope was key to determining the unusually high mass of the new event.

Researchers say the surprisingly high mass of SNLS-03D3bb has opened up a Pandora's box on the current understanding of Type Ia supernovae and, in particular, how well they might be used for future precision tests of the nature of the mysterious "dark energy" responsible for the acceleration of the cosmic expansion.

Current understanding is that Type Ia supernova explosions occur when the mass of a white dwarf approaches 1.4 solar masses, or the "Chandrasekhar limit." This important limit was calculated by Nobel laureate Subrahmanyan Chandrasekhar in 1930 and is founded on well-established physical laws. Decades of astrophysical research have been based upon the theory. Yet somehow the star that exploded as SNLS-03D3bb reached about two solar masses before exploding.

"It should not be possible to break this limit," says Howell, "but nature has found a way! So now we have to figure out how nature did it."

In a separate "News & Views" article on the research in the same issue of Nature, University of Oklahoma professor David Branch has dubbed this the "Champagne Supernova," since extreme explosions that offer new insight into the inner workings of supernovae are an obvious cause for celebration.

The team speculates that there are at least two possible explanations for how this white dwarf got so fat before it went supernova. One is that the original star was rotating so fast that centrifugal force kept gravity from crushing it at the usual limit. Another is that the blast was in fact the result of two white dwarfs merging, and that the body was only briefly more massive than the Chandrasekhar limit before exploding.

Since Type Ia supernovae usually have about the same brightness, they can be used to map distances in the universe. In 1998 they were used to make the surprising discovery that the expansion of the universe is accelerating. Although the authors are confident that the discovery of a supernova that doesn't follow the rules does not undermine this result, it will make them more cautious about using such supernovae to measure distances in the future.

Ellis summarizes: "This is a remarkable discovery that in no way detracts from the beautiful results obtained so far by many teams, which convincingly demonstrate the cosmic acceleration and hence the need for dark energy. However, what it does show is that we have much more to learn about supernovae if we want to use them with the necessary precision in the future. This study is an important step forward in this regard."

Peter Nugent, a staff scientist with the scientific computing group at Lawrence Berkeley National Laboratory, is a co-author of the Nature paper. ###

Contact: Andy Howell, department of astronomy and astrophysics, University of Toronto (416) 946-5432

Richard Ellis, division of physics, mathematics, and astronomy, California Institute of Technology (626) 676-5530

Jill Perry, Caltech Media Relations (626) 395-3226

Visit the Caltech Media Relations website at


Jupiter-Sized Transiting Planet Found by Astronomers Using Novel Telescope Network

PASADENA, Calif.—Our home solar system may be down by a planet with the recent demotion of Pluto, but the number of giant planets discovered in orbit around other stars continues to grow steadily. Now, an international team of astronomers has detected a planet slightly larger than Jupiter that orbits a star 500 light-years from Earth in the constellation Draco.

Unlike the solar system's planets, with their mythological names, the newly discovered planet is known simply as "TrES-2," and it passes in front of the star "GSC 03549-02811" every two and a half days.

The new planet is especially noteworthy because it was identified by astronomers looking for transiting planets (that is, planets that pass in front of their home star) with a network of small automated telescopes. The humble telescopes used in the discovery consist mostly of amateur-astronomy components and off-the-shelf 4-inch camera lenses. This is the third transiting planet found using telescopes similar to those used by many amateur astronomers.

By definition, a transiting planet passes directly between Earth and the star, causing a slight reduction in the light in a manner similar to that caused by the moon's passing between the sun and Earth during a solar eclipse. According to Francis O'Donovan, an Irish graduate student in astronomy at the California Institute of Technology, "When TrES-2 is in front of the star, it blocks off about one and a half percent of the star's light, an effect we can observe with our TrES telescopes."
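The dimming described above follows from simple geometry: the fraction of starlight blocked equals the squared ratio of the planet's radius to the star's radius. A minimal Python sketch of that back-of-the-envelope estimate (the ~1.5 percent depth is from the article; the roughly solar-sized stellar radius is an illustrative assumption, not a figure from the article):

```python
import math

# Transit depth = fraction of starlight blocked = (R_planet / R_star)**2,
# so R_planet = sqrt(depth) * R_star.

def planet_radius_from_depth(depth, r_star_km):
    """Estimate planet radius (km) from transit depth and stellar radius."""
    return math.sqrt(depth) * r_star_km

R_SUN_KM = 696_000        # solar radius, km
R_JUPITER_KM = 71_492     # Jupiter's equatorial radius, km

depth = 0.015             # ~1.5% dimming, as quoted in the article
r_planet = planet_radius_from_depth(depth, R_SUN_KM)  # assumed sun-like star

print(f"Estimated planet radius: {r_planet:,.0f} km "
      f"({r_planet / R_JUPITER_KM:.2f} Jupiter radii)")
```

For a sun-like star this works out to a planet a bit larger than Jupiter, consistent with the description of TrES-2 above.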

"We know of about 200 planets around other stars," says O'Donovan, lead author of the paper announcing the discovery in an upcoming issue of the Astrophysical Journal, "but it is only for the nearby transiting planets that we can precisely measure the size and mass of the planet, and hence study its composition. That makes each new transiting planet an exciting find. And because TrES-2 is the most massive of the nearby transiting planets, it sets a new limit to our understanding of how these gas planets form around stars."

The planet TrES-2 is also noteworthy for being the first transiting planet in an area of the sky known as the "Kepler field," which has been singled out as the targeted field of view for the upcoming NASA Kepler mission. Using a satellite-based telescope, Kepler will stare at this patch of sky for four years, and should discover hundreds of giant planets and Earth-like planets. Finding a planet in the Kepler field with the current method allows astronomers to plan future observations with Kepler that include searching for moons around TrES-2.

And finally, the research team hails the discovery as the second transiting "hot Jupiter" found with the Trans-Atlantic Exoplanet Survey (TrES), an effort involving the "Sleuth" telescope at Caltech's Palomar Observatory in San Diego County, the Planet Search Survey Telescope (PSST) at Lowell Observatory near Flagstaff, Arizona, and the "STellar Astrophysics and Research on Exoplanets" (STARE) telescope in the Canary Islands. The name of the planet, TrES-2, is derived from the name of the survey.

To look for transits, the small telescopes are automated to take wide-field timed exposures of the clear skies on as many nights as possible. When an observing run is completed for a particular field (usually over a roughly two-month period), the data are run through software that corrects for various sources of distortion and noise.

The end result is a "light curve" for each of thousands of stars in the field. If the software detects regular variations in the light curve for an individual star, then the astronomers do additional work to see if the source of the variation is indeed a transiting planet. One possible alternative is that the object passing in front of the star is another star, fainter and smaller.

In order to confirm they had found a planet, O'Donovan and his colleagues switched from the 10-centimeter TrES telescopes to one of the 10-meter telescopes at the W. M. Keck Observatory on the summit of Mauna Kea, Hawaii. Using this giant telescope, they confirmed that they had found a new planet. O'Donovan says, "Each of us had spent countless hours working on TrES at that point, and we had suffered many disappointments. All our hard work was made worthwhile when we saw the results from our first night's observations, and realized we had found our second transiting planet."

TrES-2 was first spotted by the Sleuth telescope, which was set up by David Charbonneau, formerly an astronomer at Caltech who is now at the Harvard-Smithsonian Center for Astrophysics and is a coauthor of the paper. The PSST, which is operated by Georgi Mandushev and Edward Dunham (coauthors from Lowell Observatory), also observed transits of TrES-2, confirming the initial detections.

The other authors of the paper are David Latham and Guillermo Torres of Harvard-Smithsonian; Alessandro Sozzetti of Harvard-Smithsonian and the INAF-Osservatorio Astronomico di Torino; Timothy Brown of the Las Cumbres Observatory Global Telescope; John Trauger of the Jet Propulsion Laboratory; Markus Rabus, José Almenara, Juan Belmonte, and Hans Deeg of the Instituto de Astrofísica de Canarias; Roi Alonso of the Laboratoire d'Astrophysique de Marseille and the Instituto de Astrofísica de Canarias; Gilbert Esquerdo of Harvard-Smithsonian and the Planetary Science Institute in Tucson; Emilio Falco of Harvard-Smithsonian; Lynne Hillenbrand of Caltech; Anna Roussanova of MIT; Robert Stefanik of Harvard-Smithsonian; and Joshua Winn of MIT.

Robert Tindol

