Friday, July 19, 2013
Cahill, Hameetman Auditorium – Cahill Center for Astronomy and Astrophysics

"Are We Alone?" Public lecture by Dr. Jill Tarter

Voyager: Getting to Know the Magnetic Highway

Since last August, NASA's farthest-flung robotic envoy, Voyager 1, has been exploring a distant region just shy of interstellar space, some 11 billion miles (18 billion kilometers) away. Now, in three new papers in Science Express, Voyager scientists, including Caltech's Ed Stone, are sharing what they have learned about the peculiar region known as the "magnetic highway."

In this unexpected region, energetic particles from inside the heliosphere—the "bubble" containing charged particles streaming away from the sun—have escaped and disappeared along a highway of magnetic field lines. Their disappearance has allowed Voyager to see even low-energy cosmic rays that are zipping into the heliosphere from interstellar space.

Stone, the David Morrisroe Professor of Physics at Caltech and the mission's project scientist since 1972, says that what's really interesting is that, although Voyager is still within the realm of the sun's magnetic field, the spacecraft is already getting a taste of what's outside the bubble.

"In this region, Voyager is providing us with a preview of what we're going to see once we reach interstellar space," Stone says. "We're no longer just measuring the high-energy cosmic rays that can get inside the bubble. We're now seeing lower energy cosmic rays as well. We believe we have gotten the first indication of what the total intensity of galactic cosmic rays is in nearby interstellar space."

For more on the findings, read the full NASA release.

Kimm Fesenmaier

Notes from the Back Row: "Quantum Entanglement and Quantum Computing"

John Preskill, the Richard P. Feynman Professor of Theoretical Physics, is hooked on quanta. He was applying quantum theory to black holes back in 1994 when mathematician Peter Shor (BS '81), then at Bell Labs, showed that a quantum computer could factor a very large number in a very short time. Much of the world's confidential information is protected by codes whose security depends on numerical "keys" large enough to not be factorable in the lifetime of your average evildoer, so, Preskill says, "When I heard about this, I was awestruck." The longest number ever factored by a real computer had 193 digits, and it took "several months for a network of hundreds of workstations collaborating over the Internet," Preskill continues. "If we wanted to factor a 500-digit number instead, it would take longer than the age of the universe." And yet, a quantum computer running at the same processor speed could polish off 193 digits in one-tenth of a second, he says. Factoring a 500-digit number would take all of two seconds.

While an ordinary computer chews through a calculation one bite at a time, a quantum computer arrives at its answer almost instantaneously because it essentially swallows the problem whole. It can do so because quantum information is "entangled," a state of being that is fundamental to the quantum world and completely foreign to ours. In the world we're used to, the two socks in a pair are always the same color. It doesn't matter who looks at them, where they are, or how they're looked at. There's no such independent reality in the quantum world, where the act of opening one of a matched pair of quantum boxes determines the contents of the other one—even if the two boxes are at opposite ends of the universe—but only if the other box is opened in exactly the same way. "Quantum boxes are not like soxes," Preskill says. (If entanglement sounds like a load of hooey to you, you're not alone. Preskill notes that Albert Einstein famously derided it back in the 1930s. "He called it 'spooky action at a distance,' and that sounds even more derisive when you say it in German—'Spukhafte Fernwirkungen!'")

An ordinary computer processes "bits," which are units of information encoded in batches of electrons, patches of magnetic field, or some other physical form. The "qubits" of a quantum computer are encoded by their entanglement, and these entanglements come with a big Do Not Disturb sign. Because the informational content of a quantum "box" is unknown until you open it and look inside, qubits exist only in secret, making them ideal for spies and high finance. However, this impenetrable security is also the quantum computer's downfall. Such a machine would be morbidly sensitive—the slightest encroachment from the outside world would demolish the entanglement and crash the system.

Ordinary computers cope with errors by storing information in triplicate. If one copy of a bit gets corrupted, it will no longer match the other two; error-detecting software constantly checks the three copies against one another and returns the flipped bit to its original state. Fixing flipped bits when you're not allowed to look at them seems an impossible challenge on the face of it, but after reading Shor's paper Preskill decided to give it a shot. Over the next few years, he and his grad student Daniel Gottesman (PhD '97) worked on quantum error correction, eventually arriving at a mathematical procedure by which indirectly measuring the states of five qubits would allow an error in any one of them to be fixed.
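
The triple-redundancy scheme described above is easy to sketch. The snippet below is a minimal classical illustration (majority voting over three copies); it is not the quantum five-qubit procedure, which must fix errors without ever reading the copies directly:

```python
def encode(bit):
    """Store a classical bit in triplicate."""
    return [bit, bit, bit]

def correct(copies):
    """Majority vote: a single flipped copy is outvoted and repaired."""
    majority = 1 if sum(copies) >= 2 else 0
    return [majority] * 3

codeword = encode(1)         # [1, 1, 1]
codeword[0] ^= 1             # noise flips one copy: [0, 1, 1]
codeword = correct(codeword) # restored to [1, 1, 1]
```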

This changed the barriers facing practical quantum computation from insurmountable to merely incredibly difficult. The first working quantum computers, built in several labs in the early 2000s, were based on lasers interacting with what Preskill describes as "a handful" of trapped ions to perform "a modest number of [logic] operations." An ion trap is about the size of a thermos bottle, but the laser systems and their associated electronics take up several hundred square feet of lab space. With several million logic gates on a typical computer chip, scaling up this technology is a really big problem. Is there a better way? Perhaps. According to Preskill, his colleagues at Caltech's Institute for Quantum Information and Matter are working out the details of a "potentially transformative" approach that would allow quantum computers to be made using the same silicon-based technologies as ordinary ones.

"Quantum Entanglement and Quantum Computing" is available for download in HD from Caltech on iTunesU. (Episode 19)

Douglas Smith

Birth of a Black Hole

A new kind of cosmic flash may reveal something never seen before: the birth of a black hole.

When a massive star exhausts its fuel, it collapses under its own gravity and produces a black hole, an object so dense that not even light can escape its gravitational grip. According to a new analysis by an astrophysicist at the California Institute of Technology (Caltech), just before the black hole forms, the dying star may generate a distinct burst of light that will allow astronomers to witness the birth of a new black hole for the first time.

Tony Piro, a postdoctoral scholar at Caltech, describes this signature light burst in a paper published in the May 1 issue of the Astrophysical Journal Letters. While some dying stars that result in black holes explode as gamma-ray bursts, which are among the most energetic phenomena in the universe, those cases are rare, requiring exotic circumstances, Piro explains. "We don't think most run-of-the-mill black holes are created that way." In most cases, according to one hypothesis, a dying star produces a black hole without a bang or a flash: the star would seemingly vanish from the sky—an event dubbed an unnova. "You don't see a burst," he says. "You see a disappearance."

But, Piro hypothesizes, that may not be the case. "Maybe they're not as boring as we thought," he says.

According to well-established theory, when a massive star dies, its core collapses under its own weight. As it collapses, the protons and electrons that make up the core merge and produce neutrons. For a few seconds—before it ultimately collapses into a black hole—the core becomes an extremely dense object called a neutron star, which is as dense as the sun would be if squeezed into a sphere with a radius of about 10 kilometers (roughly 6 miles). This collapsing process also creates neutrinos, which are particles that zip through almost all matter at nearly the speed of light. As the neutrinos stream out from the core, they carry away a lot of energy—representing about a tenth of the sun's mass (since energy and mass are equivalent, per E = mc²).
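
The numbers in this paragraph can be checked on the back of an envelope. The sketch below uses standard values for the solar mass and the speed of light (our inputs, not figures from the article):

```python
import math

M_SUN = 1.989e30   # solar mass, kg
C = 2.998e8        # speed of light, m/s
R = 10e3           # neutron-star radius quoted above, m

# Density of the sun squeezed into a 10-kilometer sphere:
volume = 4 / 3 * math.pi * R**3
density = M_SUN / volume          # ~5e17 kg/m^3, near nuclear density

# Energy carried off by neutrinos, about a tenth of a solar mass (E = mc^2):
E_neutrinos = 0.1 * M_SUN * C**2  # ~1.8e46 joules
```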

According to a little-known paper written in 1980 by Dmitry Nadezhin of the Alikhanov Institute for Theoretical and Experimental Physics in Russia, this rapid loss of mass means that the gravitational strength of the dying star's core would abruptly drop. When that happens, the outer gaseous layers—mainly hydrogen—still surrounding the core would rush outward, generating a shock wave that would hurtle through the outer layers at about 1,000 kilometers per second (more than 2 million miles per hour).

Using computer simulations, two astronomers at UC Santa Cruz, Elizabeth Lovegrove and Stan Woosley, recently found that when the shock wave strikes the outer surface of the gaseous layers, it would heat the gas at the surface, producing a glow that would shine for about a year—a potentially promising signal of a black-hole birth. Although about a million times brighter than the sun, this glow would be relatively dim compared to other stars. "It would be hard to see, even in galaxies that are relatively close to us," says Piro.

But now Piro says he has found a more promising signal. In his new study, he examines in more detail what might happen at the moment when the shock wave hits the star's surface, and he calculates that the impact itself would make a flash 10 to 100 times brighter than the glow predicted by Lovegrove and Woosley. "That flash is going to be very bright, and it gives us the best chance for actually observing that this event occurred," Piro explains. "This is what you really want to look for."

Such a flash would be dim compared to exploding stars called supernovae, for example, but it would be luminous enough to be detectable in nearby galaxies, he says. The flash, which would shine for 3 to 10 days before fading, would be very bright in optical wavelengths—and at its very brightest in ultraviolet wavelengths.

Piro estimates that astronomers should be able to see one of these events per year on average. Surveys that watch the skies for flashes of light like supernovae—surveys such as the Palomar Transient Factory (PTF), led by Caltech—are well suited to discover these unique events, he says. The intermediate Palomar Transient Factory (iPTF), which improves on the PTF and just began surveying in February, may be able to find a couple of these events per year.

Neither survey has observed any black-hole flashes as of yet, says Piro, but that does not rule out their existence. "Eventually we're going to start getting worried if we don't find these things." But for now, he says, his expectations are perfectly sound.

With Piro's analysis in hand, astronomers should be able to design and fine-tune additional surveys to maximize their chances of witnessing a black-hole birth in the near future. In 2015, the next generation of PTF, called the Zwicky Transient Facility (ZTF), is slated to begin; it will be even more sensitive, improving by several times the chances of finding those flashes. "Caltech is therefore really well-positioned to look for transient events like this," Piro says.

Within the next decade, the Large Synoptic Survey Telescope (LSST) will begin a massive survey of the entire night sky. "If LSST isn't regularly seeing these kinds of events, then that's going to tell us that maybe there's something wrong with this picture, or that black-hole formation is much rarer than we thought," he says.

The Astrophysical Journal Letters paper is titled "Taking the 'un' out of unnovae." This research was supported by the National Science Foundation, NASA, and the Sherman Fairchild Foundation.

Marcus Woo

Astronomers Discover Massive Star Factory in Early Universe

Star-forming galaxy is the most distant ever found

PASADENA, Calif.—Smaller begets bigger.

Such is often the case for galaxies, at least: the first galaxies were small, then eventually merged together to form the behemoths we see in the present universe.

Those smaller galaxies produced stars at a modest rate; only later—when the universe was a couple of billion years old—did the vast majority of larger galaxies begin to form and accumulate enough gas and dust to become prolific star factories. Indeed, astronomers have observed that these star factories—called starburst galaxies—became prevalent a couple of billion years after the Big Bang.

But now a team of astronomers, which includes several from the California Institute of Technology (Caltech), has discovered a dust-filled, massive galaxy churning out stars when the cosmos was a mere 880 million years old—making it the earliest starburst galaxy ever observed.

The galaxy is about as massive as our Milky Way but produces stars at a rate 2,000 times greater, a rate as high as that of any galaxy in the universe. Generating the mass equivalent of 2,900 suns per year, the galaxy is especially prodigious—prompting the team to call it a "maximum-starburst" galaxy.
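
As a quick consistency check of the two figures just quoted, dividing them implies a Milky Way star-formation rate of about 1.5 solar masses per year, in line with commonly cited estimates (that comparison value is our assumption, not from the article):

```python
hfls3_rate = 2900.0   # solar masses of new stars per year (quoted above)
ratio = 2000.0        # HFLS3's rate relative to the Milky Way (quoted above)

milky_way_rate = hfls3_rate / ratio   # ~1.45 solar masses per year
```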

"Massive, intense starburst galaxies are expected to only appear at later cosmic times," says Dominik Riechers, who led the research while a senior research fellow at Caltech. "Yet, we have discovered this colossal starburst just 880 million years after the Big Bang, when the universe was a little more than 6 percent of its current age." Now an assistant professor at Cornell, Riechers is the first author of the paper describing the findings in the April 18 issue of the journal Nature.

While the discovery of this single galaxy isn't enough to overturn current theories of galaxy formation, finding more galaxies like this one could challenge those theories, the astronomers say. At the very least, theories will have to be modified to explain how this galaxy, dubbed HFLS3, formed, Riechers says.

"This galaxy is just one spectacular example, but it's telling us that extremely vigorous star formation was possible early in the universe," says Jamie Bock, professor of physics at Caltech and a coauthor of the paper.

The astronomers found HFLS3 chock full of molecules such as carbon monoxide, ammonia, hydroxide, and even water. Because most of the elements in the universe—other than hydrogen and helium—are fused in the nuclear furnaces of stars, such a rich and diverse chemical composition is indicative of active star formation. And indeed, Bock says, the chemical composition of HFLS3 is similar to those of other known starburst galaxies that existed later in cosmic history.

Last month, a Caltech-led team of astronomers—a few of whom are also authors on this newer work—discovered dozens of similar galaxies that were producing stars as early as 1.5 billion years after the Big Bang. But none of them existed as early as HFLS3, which has been studied in much greater detail.

Those previous observations were made possible by gravitational lensing, in which large foreground galaxies act as cosmic magnifying glasses, bending the light of the starburst galaxies and making their detection easier. HFLS3, however, is only weakly lensed, if at all. The fact that it was detectable without the help of lensing means that it is intrinsically a bright galaxy in far-infrared light—nearly 30 trillion times as luminous as the sun and 2,000 times more luminous than the Milky Way.

Because the galaxy is enshrouded in dust, it's very faint in visible light. The galaxy's stars, however, heat up the dust, causing it to radiate in infrared wavelengths. The astronomers were able to find HFLS3 as they sifted through data taken by the European Space Agency's Herschel Space Observatory, which studies the infrared universe. The data was part of the Herschel Multi-tiered Extragalactic Survey (HerMES), an effort co-coordinated by Bock to observe a large patch of the sky (roughly 1,300 times the size of the moon) with Herschel.

Amid the thousands of galaxies detected in the survey, HFLS3 appeared as just a faint dot—but a particularly red one. That caught the attention of Darren Dowell, a visiting associate at Caltech who was analyzing the HerMES data. The object's redness meant that its light was being substantially stretched toward longer (and redder) wavelengths by the expansion of the universe. The more distant an object, the more its light is stretched, and so a very red source would be very far away. The only other possibility would be that—because cooler objects emit light at longer wavelengths—the object might be unusually cold; the astronomers' analysis, however, ruled out that possibility. Because it takes the light billions of years to travel across space, seeing such a distant object is equivalent to looking deep into the past. "We were hoping to find a massive starburst galaxy at vast distances, but we did not expect that one would even exist that early in the universe," Riechers says.

To study HFLS3 further, the astronomers zoomed in with several other telescopes. Using the Combined Array for Research in Millimeter-Wave Astronomy (CARMA)—a series of telescope dishes that Caltech helps operate in the Inyo Mountains of California—as well as the Z-Spec instrument on the Caltech Submillimeter Observatory on Mauna Kea in Hawaii, the team was able to study the chemical composition of the galaxy in detail—in particular, the presence of water and carbon monoxide—and measure its distance. The researchers also used the 10-meter telescope at the W. M. Keck Observatory on Mauna Kea to determine to what extent HFLS3 was gravitationally lensed.

This galaxy is the first such object in the HerMES survey to be analyzed in detail. This type of galaxy is rare, the astronomers say, but to determine just how rare, they will pursue more follow-up studies to see if they can find more of them lurking in the HerMES data. These results also hint at what may soon be discovered with larger infrared observatories, such as the new Atacama Large Millimeter/submillimeter Array (ALMA) in Chile and the planned Cerro Chajnantor Atacama Telescope (CCAT), in which Caltech is a partner institution.

The title of the Nature paper is "A Dust-Obscured Massive Maximum-Starburst Galaxy at a Redshift of 6.34." In addition to Riechers, Bock, and Dowell, the other Caltech authors of the paper are visiting associates in physics Matt Bradford, Asantha Cooray, and Hien Nguyen; postdoctoral scholars Carrie Bridge, Attila Kovacs, Joaquin Vieira, Marco Viero, and Michael Zemcov; staff research scientist Eric Murphy; and Jonas Zmuidzinas, the Merle Kingsley Professor of Physics and the Chief Technologist at NASA's Jet Propulsion Laboratory (JPL). There are a total of 64 authors. Bock, Dowell, and Nguyen helped build the Spectral and Photometric Imaging Receiver (SPIRE) instrument on Herschel.

Herschel is a European Space Agency cornerstone mission, with science instruments provided by consortia of European institutes and with important participation by NASA. NASA's Herschel Project Office is based at JPL in Pasadena, California. JPL contributed mission-enabling technology for two of Herschel's three science instruments. The NASA Herschel Science Center, part of the Infrared Processing and Analysis Center at Caltech in Pasadena, supports the U.S. astronomical community. Caltech manages JPL for NASA.

The W. M. Keck Observatory operates the largest, most scientifically productive telescopes on Earth. The two 10-meter optical/infrared telescopes on the summit of Mauna Kea on the island of Hawaii feature a suite of advanced instruments including imagers, multi-object spectrographs, high-resolution spectrographs, integral-field spectrographs, and a world-leading laser guide-star adaptive optics system. The observatory is operated by a private 501(c)(3) nonprofit organization and is a scientific partnership of the California Institute of Technology, the University of California, and NASA.

Marcus Woo

The Caltech Space Challenge: Mission to a Martian Moon

The mission: travel to one of Mars's two moons, explore its surface, collect some rocks, and return to Earth in one piece. Now plan it—in five days.

Dozens of students from Caltech and around the world converged on campus during the last week of March to do just that: compete in the Caltech Space Challenge, which pits two teams against each other to design the best manned space mission.

"It was an intense, challenging, and exciting experience," says Melissa Tanner, a fourth-year graduate student in mechanical engineering at Caltech and member of Team Voyager, which faced off against Team Explorer.

The Space Challenge, which was led by aeronautics graduate students Nick Parziale and Jason Rabinovitch, featured lectures and workshops given by expert rocket scientists from Caltech, the Jet Propulsion Laboratory, and the aerospace industry. The two 16-member teams were even treated to an appearance by astronaut Buzz Aldrin, the second human to have walked on the moon. "I think one of the coolest parts of the experience was having world-class mentors," Tanner says.

Most of the week, however, was filled with hard work and little sleep. The students—a mix of undergraduates and graduates—had to plan and consider every aspect of long-term space travel, from choosing a propulsion system to keeping the astronauts healthy and fit. (Both teams emphasized the importance of exercise; Team Voyager proposed mandatory Jazzercise classes.)

Landing on a martian moon is considered a stepping-stone toward the ultimate goal of landing a human on Mars and was one of the recommendations that the U.S. Human Space Flight Plans Committee, an independent review commissioned by the White House, made in 2009. A round-trip to one of Mars's moons is much easier than one to Mars, primarily because Mars's gravity is so much stronger than that of its moons. Landing on any planetary body is a difficult and harrowing task—the Curiosity rover famously went through a landing sequence, akin to a Rube Goldberg machine, that was dubbed the "seven minutes of terror." And no previous Mars mission—let alone a manned one—has ever landed on the surface and returned to Earth.

The Martian moons—Phobos and Deimos—are like asteroids, with gravity so weak that a spacecraft can approach one of the moons and grab onto its surface. The gravity on Phobos—the larger of the two, yet with a radius of just 11 kilometers (roughly 7 miles)—is so slight that an object dropped from 1 meter above its surface would take more than 18 seconds to "fall."
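
The 18-second figure follows from Newtonian gravity, using g = GM/r² and t = √(2h/g). The Phobos mass below is a published estimate that we assume here for illustration:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_PHOBOS = 1.06e16   # Phobos mass, kg (published estimate; assumed here)
R_PHOBOS = 11e3      # the radius quoted above, m

g = G * M_PHOBOS / R_PHOBOS**2   # surface gravity, ~0.006 m/s^2
t = math.sqrt(2 * 1.0 / g)       # time to fall from 1 meter, ~18.5 s
```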

The Space Challenge's 32 participants came from 21 universities and 11 countries, and were chosen from 175 applicants. They were selected because they brought a particular skill or expertise to their team. For example, Tanner's research at Caltech is in robotics; in particular, she works on the Axel rover, which is a tethered, two-wheeled robot designed to explore cliffs and other extreme terrain on other planets. Because of that expertise, she helped her team determine how its astronauts would explore the surface of a martian moon.

"I think the most rewarding part was meeting all these other people who know so much," says Jay Qi, a first-year mechanical engineering graduate student at Caltech and a member of Team Explorer. "It was really crazy how much I learned just from talking to people."

Although members from the two teams interacted freely—some of the visiting students even roomed with opposing team members—they were careful not to influence each other's designs so as to maintain the spirit of competition, Qi says.

Still, because both teams were faced with the same problems, they often came up with the same solutions. Both teams ended up with similar mission designs, choosing to go to Phobos, for example, because its bigger size offered potentially more interesting scientific discoveries. And both decided to use a multistage spacecraft assembled in low-Earth orbit that would depart for a six-month trip to Phobos in April 2033. Furthermore, in each plan, the spacecraft would separate into two parts: one that remained in orbit around Phobos and a second—equipped with robotic claws to cling onto the moon's surface—that would transport two astronauts to the moon. After exploring Phobos for about a month, the astronauts would return home.

But there were many differences between the two mission plans. For example, while Team Voyager opted to send three astronauts, the Explorer mission wanted to send four. Team Voyager also included a more detailed robotic precursor mission that would visit both Phobos and Deimos, whereas Team Explorer's manned mission itself included exploring robots.

A group of jurors consisting of experts from Caltech, JPL, and the aerospace industry evaluated the two teams based on their final presentations and reports. The competition was close, said lead juror Joe Parrish, deputy manager of the Mars Program Formulation Office at JPL. "It was never immediately obvious which team was going to prevail," he said at the closing banquet held at the Athenaeum. "Both teams did an unbelievable job." The jury went back and forth but finally decided that Team Voyager presented the better proposal, awarding it a bonus to the stipend that each member received to support his or her trip to Pasadena.

Caltech graduate students Prakhar Mehrotra and Jonathan Mihaly came up with the Caltech Space Challenge; the first challenge, held in 2011, was to plan a manned mission to a near-Earth asteroid. The two faculty advisors for the program are Guillaume Blanquart, assistant professor of mechanical engineering, and Joseph Shepherd, the C. L. Kelly Johnson Professor of Aeronautics and Professor of Mechanical Engineering. The program is organized by the Graduate Aerospace Laboratories of the California Institute of Technology (GALCIT) and supported by Caltech, JPL, the Keck Institute for Space Studies, and corporate and individual sponsors.

Marcus Woo

Quantum Entanglement and Quantum Computing

Watson Lecture Preview

John Preskill, the Richard P. Feynman Professor of Theoretical Physics, is himself deeply entangled in the quantum world. Different rules apply there, and objects that obey them are now being made in our world, as he explains at 8:00 p.m. on Wednesday, April 3, 2013, in Caltech's Beckman Auditorium. Admission is free.


Q: What do you do?

A: I'm trying to understand what a quantum computer would be capable of, how we could build one, and whether it would really work. My background is in particle theory, a subject I still love, but in the spring of 1994 a mathematician at Bell Labs named Peter Shor [BS 1981] discovered an algorithm for factoring large numbers with a quantum computer. I got really excited by this, because it moved the boundary separating "easy" problems, which we can eventually expect to solve with advanced technologies, from truly hard problems that we may never be able to solve. There are problems we can solve using quantum physics that we couldn't solve otherwise. The crucial problem is protecting a quantum computer from the various kinds of "noise" that could destroy quantum entanglement, and we've made a lot of progress on that.


Q: OK, so what's "entanglement"?

A: It's the correlations between the parts of a system. Suppose you have a 100-page book with print on every page. If you read 10 pages, you'll know 10 percent of the contents. And if you read another 10 pages, you'll learn another 10 percent. But in a highly entangled quantum book, if you read the pages one at a time—or even 10 at a time—you'll learn almost nothing. The information isn't written on the pages. It's stored in the correlations among the pages, so you have to somehow read all of them at once.

There's another important difference: If Alice and Bob both read this morning's New York Times, they will have perfectly correlated information. And if Charlie comes along and reads the same paper later on, he will be just as strongly correlated with Alice as Alice is with Bob, and Bob will be just as correlated with Charlie as he is with Alice. But if Alice reads her quantum newspaper and Bob reads his, they will learn almost nothing until they get together and share their information. Now, when Charlie comes along, Alice and Bob have already used up all their ability to be entangled, and he's completely left out. Entanglement is monogamous—if Alice and Bob are as entangled as they can be, neither of them can entangle with Charlie at all. So if Alice wants to be entangled with both Bob and Charlie, there's a limit to how entangled she can be with either one. They have to work out some sort of compromise.


Q: What gets you excited about this?

A: The technology is emerging to make it possible to do things we've never done before. We were taught in school that classical physics applies to things you can see, and quantum physics applies to the world at the scale of atoms and below. We're rebelling against that by making systems that are big enough to see, yet still exhibit quantum behavior. For example, Professor of Applied Physics Oskar Painter [MS 1995, PhD 2001] has made a tiny silicon bar that's suspended in space, and he's successfully cooled it all the way down to its quantum-mechanical ground state. It vibrates in a mode that corresponds to its lowest quantum state. He hasn't entangled such bars yet, but he knows how to do it.

We're exploring a new frontier of physics. It's not the frontier of short distances, like in particle physics; or of long distances, like in cosmology. It's what you might call the entanglement frontier.


Named for the late Caltech professor Earnest C. Watson, who founded the series in 1922, the Watson Lectures present Caltech and JPL researchers describing their work to the public. Many past Watson Lectures are available online at Caltech's iTunes U site.

Douglas Smith

Fifty Years of Quasars

A Milestone for Astronomy

Radio astronomy was booming in the 1950s. After World War II, a slew of slightly used 7.5-meter-diameter antiaircraft radar antennas (called Würzburg dishes, so you can guess where they came from) were suddenly available for civilian use. Larger, purpose-built dishes also had sprung up in England, Australia, and North America. The so-called Third Cambridge catalog (3C for short), published in 1959 by a consortium of British radio astronomers, listed several hundred bright radio sources in the northern-hemisphere sky.

In those days, a radio telescope recorded a wavy line on a roll of chart paper. In order to figure out what the source was, you had to train an optical telescope on the same point in the sky. But as seen through a really large telescope, such as Caltech's 200-inch Hale Telescope atop Palomar Mountain, the sky is a very crowded place. Picking out the radio source from the myriad of small blots on a photographic plate requires extremely accurate coordinates.

Up at Caltech's Owens Valley Radio Observatory, sandwiched between the Sierra and Inyo National Forests and some five hours' drive north of civilization (or at least Pasadena), Thomas Matthews was using a pair of brand-new, 90-foot-diameter dishes to calculate such coordinates accurately enough for the Hale. He would map these onto photo prints from Palomar's 48-inch survey telescope and pass them on to a young astronomy professor named Maarten Schmidt, who would take the spectrum of that particular blot.

Every chemical element, when sufficiently heated, emits visible light at a few specific, well-known wavelengths. (Imagine a picket fence painted in all the hues of a rainbow from red to blue, but with most of the boards missing.) This spectral "fingerprint" is revealed when the light is separated into its constituent colors by an instrument called a spectrometer. The more distant a galaxy is, the more its light gets stretched, en route to us, by the relentless expansion of the universe. Measuring how much a given line has been shifted toward redder, longer wavelengths tells you how long the light's been traveling and thus how far it has come.
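
The stretch can be expressed as a single number, the redshift. A minimal sketch of the calculation (the wavelengths below are made-up illustrative values, not from the article):

```python
def redshift(observed_nm: float, rest_nm: float) -> float:
    """Fractional stretch of a spectral line: z = (observed - rest) / rest."""
    return (observed_nm - rest_nm) / rest_nm

# A line emitted at 500.0 nm but observed at 550.0 nm has been
# stretched by 10 percent en route:
print(f"z = {redshift(550.0, 500.0):.2f}")  # z = 0.10
```

The larger the value of z, the longer the light has traveled and the farther away the source is.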

Most of the 3C radio sources were elliptical galaxies—featureless blobs lying at distances up to about three billion light-years away. But every now and then the source would be a starlike pinprick of light. True divas, these "stars" had spectra that could not be deciphered. Worse, they couldn't even be matched to one another. Each one was different; unique.

One such "star" was 3C 273, one of the brightest radio sources in the 3C catalog—and it lies only two degrees north of the celestial equator, which means it's visible from much of Earth's southern hemisphere as well. And this is a very good thing, because in 1962, Australian radio astronomers Cyril Hazard, M. B. Mackey, and Albert Shimmins aimed the 210-foot Parkes radio telescope, some 200 miles west of Sydney, at 3C 273 on three occasions when the moon passed in front of it. Matthews was using interferometry, triangulating the source's position from the very slight difference in the arrival times of its wave peaks at his two radio dishes. By noting the exact instant that 3C 273 winked out or reemerged, the Aussies nailed its position far more accurately. They also discovered that 3C 273 was really two radio sources very close together. They passed their findings on to Matthews, who worked them up and gave them to Schmidt, and when Schmidt went to Palomar in late December 1962, 3C 273 was on his to-do list.

The spectrograph Schmidt was using was mounted in the Hale's prime focus cage—a barrel six feet in diameter built into the telescope's steel skeleton some 50 feet above the mirror. The cage is accessible by an elevator that is moved out of the way during observations; anyone in the cage is marooned for the duration.

Schmidt had to be up there to keep the pinpoint of light he was seeking visually lined up on the spectrograph's slit, using a set of push buttons at his fingertips to control the stately motion of the telescope. "It was romantic!" he recalls fondly. "Once in a while you just had to stop and look around you. At times it could be damned cold, but I had a hot suit"—a pair of electrically heated coveralls also apparently left over from World War II. "It said 'Army Air Forces' on the front, and it did a rather good job of keeping you warm."

Schmidt's workday began in the darkroom, where he prepared his plates for the night. Each one was a thin sheet of glass, about an inch long and a third of an inch wide, cut down from the standard five-by-seven-inch plates used for full-field photographic work, and mounted in a plate holder. The holders, in turn, were packed in a light-tight box about the size of a cigar box for their journey to and from the spectrograph. As sensitive as they were, the plates were only about 2 percent efficient, Schmidt recalls, "which means that 98 percent of the light essentially wasn't used. Therefore the observations were very slow. I spent my night, typically, on one object that I observed for seven or eight hours, and then a brighter object that I might observe for one or two hours. And that was the whole night—two plates. Two objects. Slow work."

3C 273's two radio sources proved to be a relatively bright star and a very faint jet of gaseous material. And by "relatively bright," Schmidt means "dim." Since ancient times, astronomers have ranked stars by their magnitude, with first magnitude being the brightest: Sirius, Arcturus, Vega, Antares, and the like. The faintest stars visible to the naked eye are sixth magnitude. This star was magnitude 13, but it far outshone the radio galaxies he was used to photographing. "The typical objects I worked on were six or seven magnitudes fainter. No wonder I didn't know how long to expose it," he says. His first attempt, on December 27, was so overexposed as to be useless, but he got it right the second time two nights later.
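
The magnitude scale Schmidt is invoking is logarithmic: a difference of 5 magnitudes corresponds to a factor of 100 in brightness. A small illustrative snippet (not from the article) turns his magnitude differences into brightness ratios:

```python
# The astronomical magnitude scale is logarithmic: a difference of
# 5 magnitudes corresponds to a factor of 100 in brightness.
def brightness_ratio(delta_mag: float) -> float:
    return 100 ** (delta_mag / 5)  # equivalently, about 2.512**delta_mag

# 3C 273, at magnitude 13, versus the sixth-magnitude naked-eye limit:
print(f"{brightness_ratio(13 - 6):.0f} times fainter than the naked-eye limit")

# The radio galaxies Schmidt usually photographed were 6-7 magnitudes
# fainter still -- a factor of a few hundred:
print(f"{brightness_ratio(6):.0f} to {brightness_ratio(7):.0f} times fainter again")
```

So the "dim" 13th-magnitude star was roughly 600 times brighter than the naked-eye limit is faint relative to it, and hundreds of times brighter than Schmidt's usual targets.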

When this plate was developed, it showed four nice, fat lines that, once again, didn't match anything. The mystery remained for about six weeks, until Schmidt was invited by Parkes Observatory director John Bolton (who had overseen the construction of Caltech's Owens Valley facility a few years earlier) to submit a paper to Nature to go with the one that Hazard and company were preparing on 3C 273's position.

And so it was that after lunch on Monday, February 5, 1963, Schmidt was sitting in his office trying once again to make sense of his results. He popped the plate into the viewer, and it suddenly dawned on him that three of his lines, plus one in the infrared that Associate Professor of Astronomy J. Beverley Oke had found using the 100-inch telescope on Mount Wilson, formed a series whose spacing and intensity decreased uniformly from red to blue.

The Balmer series, the best-known set of lines in hydrogen's emission spectrum, does exactly the same thing, but there was one small problem with applying that explanation to 3C 273: the brightest line in the Balmer series, known as H-alpha, is red. As in visible red, not infrared. Oke's line lay 15.8 percent too far into the infrared to be H-alpha. However, the brightest, reddest line on Schmidt's plate sat exactly where H-beta, the second line in the Balmer series, would fall if its wavelength, too, had been stretched by 15.8 percent; H-beta is normally cyan. A redshift of 15.8 percent is equivalent to a distance of about three billion light-years in the currently accepted scale of the universe. That's billion. With a 'B.' Our galaxy, the Milky Way, is a mere 120,000 or so light-years in diameter. And Andromeda, our nearest galactic neighbor of any consequence, is only about 2.5 million light-years away. By great good luck, the spectrum of 3C 273 was redshifted by a small enough amount that the Balmer pattern was still recognizable, yet by enough to place it really, really far beyond our galaxy.
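
The arithmetic behind that identification is simple enough to check directly. A short sketch (the rest wavelengths are the standard laboratory values for hydrogen; the code is illustrative, not Schmidt's):

```python
# Standard laboratory (rest) wavelengths of the first Balmer lines, in nm.
REST_NM = {"H-alpha": 656.3, "H-beta": 486.1, "H-gamma": 434.0, "H-delta": 410.2}
z = 0.158  # Schmidt's redshift for 3C 273

for name, rest in REST_NM.items():
    observed = rest * (1 + z)  # every line is stretched by the same factor
    print(f"{name}: {rest:.1f} nm -> {observed:.1f} nm")
```

H-alpha lands near 760 nm, just past the red end of the visible range, which is why Oke's line turned up in the infrared, while H-beta, normally cyan, appeared near 563 nm as the reddest line on Schmidt's plate.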

In his excitement, Schmidt began pacing the hallway, where he buttonholed Professor of Astrophysics Jesse Greenstein. Greenstein had been puzzling over a similar object, 3C 48, whose position had been worked out in 1960 by Matthews and Allan Sandage at the Carnegie Observatories up the street from Caltech. They had found a 16th-magnitude variable blue star, whose spectra, taken by Greenstein and Professor of Astronomy Guido Münch, had the usual assortment of unidentifiable emission lines. Greenstein pulled out the unpublished paper he was working on, and, as Schmidt recalled in his oral history, "in about five or seven minutes, we found a redshift of thirty-seven percent. . . They mutually confirmed each other."

The hubbub attracted Oke, and for the rest of the afternoon the three astronomers tried to come up with an alternative explanation—some weirdly ionized states of relatively rare elements, perhaps—that would allow these "stars" to remain comfortably in our own galaxy. When six o'clock rolled around and no other solution had presented itself, they decided to call it a day. But rather than heading home as usual, "we all trooped with Jesse to his house," Schmidt continued. "And Naomi [Greenstein, Jesse's wife] was immensely surprised, because we all wanted a drink. I came home late that night, and I think I said to my wife, 'Something terrible happened at the office.' It's not necessarily the right expression, but that's what I said."

The "terrible" part was that if 3C 273 really was three billion light-years away, it had to be shining 40 times more brightly than the brightest galaxies. Explaining how this could happen would have to wait until 1969, when a former postdoc of Schmidt's, Donald Lynden-Bell at the University of Cambridge, showed that material swirling around a black hole at a galaxy's center could radiate such staggering amounts of energy before being sucked down the drain.

On March 16, 1963, Nature published four articles back-to-back. The first, by the Parkes people, described the radio observations of 3C 273. The second, by Schmidt, announced the redshift. The third was by Oke about his infrared observations, and the final one, by Greenstein and Matthews, presented the corroborating redshift of 3C 48.

In his article, which was all of two-thirds of a page long, Schmidt noted that it was possible that 3C 273 could be a star in our own galaxy, but that "it would be extremely difficult, if not impossible, to account for" its peculiar spectrum, and that "the explanation in terms of an extragalactic origin seems most direct and least objectionable."

The papers referred to 3C 273 and 3C 48 as "star-like objects," for lack of a better term. Over the next few months, as more of them were discovered and it became abundantly clear that they were not stars, they began to be referred to as "quasi-stellar radio sources," or QSRs. The term "quasar" was coined by Hong-yee Chiu of NASA's Goddard Institute for Space Studies in a May 1964 article for Physics Today.

In 1965 Schmidt published a paper on five quasars, one of which had a redshift of 2.01, placing it halfway across the visible universe. And since, in astronomy, looking farther out in space means looking further back in time, these very, very far-off objects give us an inkling of what the very, very young universe was like. Ever since the '60s, quasar studies have helped us map the universe, figure out why it is as it is today, and even work out how it all began. "The night I discovered the redshift, it was a fantastic prospect," says Schmidt. "We could now easily get to very large redshifts, because these darn things are so bright."

Douglas Smith

Two Decades of Discoveries

Keck Observatory marks 20th anniversary

Although Keith Matthews was about to make history, he went about his tasks as on any other night. It was the night of March 16, 1993, nearly 14,000 feet above sea level on Mauna Kea in Hawaii, and he had just installed the first instrument on the brand-new 10-meter telescope at W. M. Keck Observatory. Matthews, who built the instrument (a near-infrared camera, abbreviated NIRC), was set to make the first scientific observations using the newly crowned Biggest Telescope in the World.

This Saturday marks the 20th anniversary of those inaugural observations. Speaking at a symposium on March 7 commemorating the anniversary, Tom Soifer, chair of the Division of Physics, Mathematics and Astronomy, called those initial observations "one of the greatest events in astronomy. It's been a remarkable 20 years of exploration and discovery," he said.

At the time of that first observing run, the telescope had yet to be officially commissioned and wasn't yet optimized, but Matthews—now chief instrument scientist at Caltech—was there to see just what the telescope could do. "Fortunately, it worked right off the bat," he recalls.

The observatory was the culmination of more than a decade of planning, designing, and building made possible by unprecedented financial contributions from the Keck Foundation ($70 million for the first telescope) and by cutting-edge technology. But Matthews didn't feel much reason to jump for joy when he saw that first star, sharp and bright on the computer screen. He was too busy to be excited, he says, and those observations were just another set of steps in a long process that had begun more than a decade prior, when he joined the telescope design team in 1979. Caltech would become an official partner of the observatory in 1985, joining the University of California and the University of Hawaii (NASA would join in 1996).

To be sure, Matthews was happy that everything was working relatively smoothly. But, he says, throughout the whole process of making the telescope and the instrument a reality, there was always something else that he needed to focus on and get done. He was observing alone, a rarity these days, on about four hours of sleep a night for roughly nine nights. He slept at 9,000 feet and had to make the drive up to the summit, into the sun's glare, every day. The altitude at the summit made the work even more grueling.

And there were still the inevitable bugs and problems. For one, the Dewar—the container that housed the infrared camera and kept it cold—was leaking liquid helium. Matthews tried everything from rubber cement to glycerol to control the leak. The computers also kept crashing, the monitors going blank one by one. "It was funny," he recalls. "All the screens started to go like a house of cards."

He eventually found stopgap measures to control the leak, and the computers were simply rebooted. The observing run demonstrated that even before the telescope was fully optimized, it was already able to achieve better resolution than the 200-inch Hale Telescope at Palomar Observatory, supplanting the 200-inch as the world's most powerful telescope—a title the 200-inch had held since 1948. A second, identical Keck telescope was built in 1996.

In the two decades since, Keck has become arguably the most prominent and productive observatory in astronomy, helping scientists learn how the universe has evolved since the Big Bang, how galaxies form, and how stars are born. The twin telescopes, Keck I and Keck II, have studied dark matter—the mysterious, unseen stuff that makes up most of the universe's mass—as well as dark energy, the cosmic force that's pushing the universe apart. The telescopes have peered into other planetary systems and revealed insights into the origin of our own solar system. In describing how Keck has surpassed expectations, Caltech's Richard Ellis said at last week's symposium, "Unlike politicians, astronomers deliver much more than they predicted."

The symposium highlighted the fact that Keck has proved indispensable, as a powerful telescope in its own right and as an essential complement to other telescopes. "Keck has been fundamental in establishing partnerships with space telescopes," Ellis said. For example, he has used Keck with the Hubble Space Telescope and the Spitzer Space Telescope to probe some of the most distant galaxies ever observed, revealing a poorly understood period of cosmic history roughly a billion years after the Big Bang. With the help of Keck, Fiona Harrison—a Caltech astronomer and principal investigator of the NuSTAR mission, a space telescope that detects high-energy X rays—discovered bright flares emanating from the supermassive black hole at the center of the galaxy. The flares, she said, could be due to asteroids being ripped apart by the black hole.

And even though NASA's Kepler Space Telescope has been revolutionary in identifying thousands of candidate planets, a ground-based telescope like Keck is needed to verify and characterize those worlds. Caltech's John Johnson, for example, has used Keck to characterize what he says is the most typical kind of planetary system in the galaxy. From his analysis, he estimates that there are at least 100 billion planets in the Milky Way. Keck has also allowed Caltech's Mike Brown to measure detailed spectra of Jupiter's moon Europa, finding evidence that suggests its subsurface ocean may bubble up to its frozen surface.

Of course, none of these discoveries would have been possible without Keck's technological advances. Constructing a telescope as large as Keck using a single mirror would be prohibitively expensive and difficult to engineer. Instead, the Keck telescopes each consist of 36 hexagonal mirrors, forming a total aperture of 10 meters. No one had ever attempted a segmented-mirror telescope before Keck. Ellis was at Cambridge University while the telescope was being developed. "We were looking at this plan with total incredulity," he recalled at the symposium. "The idea of a finely segmented telescope was crazy to us, frankly."

One of the difficulties, for example, was in polishing the mirrors. Because a spherical mirror has rotational symmetry, it's relatively easy to polish. But because each of Keck's segments forms just a part of a parabolic curve, each mirror is asymmetrical, making it nearly impossible to polish by conventional means. The solution? Force each segment into a spherical shape. Once polished, the mirror is released and pops back into its original, irregular form.

The telescope is fitted with a suite of instruments that have been constantly upgraded and replaced over the last 20 years—and Caltech has played leading roles with many of those instruments, including NIRC and NIRC2 (the second-generation NIRC) and MOSFIRE (co-led by Caltech's Chuck Steidel), a new spectrometer that was just installed last year. Matthews was also the leader on NIRC2, played a significant role in MOSFIRE, and is now leading the effort on a new instrument, a near-infrared spectrometer called NIRES.

As technology improves, telescopes get bigger and more powerful. Keck's eventual replacement, the Thirty Meter Telescope (TMT), in which Caltech is a partner, won't be ready for at least 10 years. In the meantime, Keck will continue to hold its status as the biggest telescope in the world. And, as Caltech's Judith Cohen pointed out in her symposium talk, even after the TMT is built Keck will remain a useful facility—in much the same way that Palomar Observatory remains productive more than 60 years after it was built. In the last two decades, Keck has had a good run in helping astronomers explore the cosmos—but that run is far from over.

Marcus Woo
News Type: 
Research News
